625251
https://en.wikipedia.org/wiki/Endodermis
Endodermis
The endodermis is the innermost layer of cortex in land plants. It is a cylinder of compact living cells, the radial walls of which are impregnated with hydrophobic substances (the Casparian strip) to restrict apoplastic flow of water to the inside. The endodermis is the boundary between the cortex and the stele. In many seedless plants, such as ferns, the endodermis is a distinct layer of cells immediately outside the vascular cylinder (stele) in roots and shoots. In most seed plants, especially woody types, the endodermis is present in roots but not in stems. The endodermis helps regulate the movement of water, ions and hormones into and out of the vascular system. It may also store starch, be involved in perception of gravity and protect the plant against toxins moving into the vascular system. Structure The endodermis is developmentally the innermost portion of the cortex. It may consist of a single layer of barrel-shaped cells without any intercellular spaces, or sometimes several cell layers. The cells of the endodermis typically have their primary cell walls thickened on four sides (the radial and transverse walls) with suberin, a water-impermeable waxy substance which in young endodermal cells is deposited in distinctive bands called Casparian strips. These strips vary in width but are typically smaller than the cell wall on which they are deposited. If the endodermis is likened to a brick cylinder (e.g. a smokestack), with the bricks representing individual cells, the Casparian strips are analogous to the mortar between the bricks. In older endodermal cells, suberin may be more extensively deposited on all cell wall surfaces and the cells can become lignified, forming a complete waterproof layer. Some plants have a large number of amyloplasts (starch-containing organelles) in their endodermal cells, in which case the endodermis may be called a starch sheath. The endodermis is often made visible with stains such as phloroglucinol, due to the phenolic and lipid nature of the Casparian strips, or by the abundance of amyloplasts. Function The endodermis prevents water, and any solutes dissolved in the water, from passing through this layer via the apoplast pathway. Water can only pass through the endodermis by crossing the membrane of endodermal cells twice (once to enter and a second time to exit). Water moving into or out of the xylem, which is part of the apoplast, can thereby be regulated, since it must enter the symplast in the endodermis. This allows the plant to control to some degree the movement of water and to selectively take up or prevent the passage of ions or other molecules. The endodermis does not allow gas bubbles to enter the xylem and helps prevent embolisms from occurring in the water column. Passage cells are endodermal cells of older roots which have retained thin walls and Casparian strips rather than becoming suberized and waterproof like the other cells around them, and so continue to allow some symplastic flow to the inside. Experimental evidence suggests that passage cells function to allow transfer of solutes such as calcium and magnesium into the stele, in order to eventually reach the transpiration stream. For the most part, however, old roots seal themselves off at the endodermis and serve only as a passageway for water and minerals taken up by younger roots "downstream". Endodermal cells may contain starch granules in the form of amyloplasts. These may serve as food storage, and have been shown to be involved in gravitropism in some plants.
Biology and health sciences
Plant tissues
Biology
625404
https://en.wikipedia.org/wiki/Stroke
Stroke
Stroke is a medical condition in which poor blood flow to a part of the brain causes cell death. There are two main types of stroke: ischemic, due to lack of blood flow, and hemorrhagic, due to bleeding. Both cause parts of the brain to stop functioning properly. Signs and symptoms of stroke may include an inability to move or feel on one side of the body, problems understanding or speaking, dizziness, or loss of vision to one side. Signs and symptoms often appear soon after the stroke has occurred. If symptoms last less than 24 hours, the stroke is a transient ischemic attack (TIA), also called a mini-stroke. Hemorrhagic stroke may also be associated with a severe headache. The symptoms of stroke can be permanent. Long-term complications may include pneumonia and loss of bladder control. The most significant risk factor for stroke is high blood pressure. Other risk factors include high blood cholesterol, tobacco smoking, obesity, diabetes mellitus, a previous TIA, end-stage kidney disease, and atrial fibrillation. Ischemic stroke is typically caused by blockage of a blood vessel, though there are also less common causes. Hemorrhagic stroke is caused by either bleeding directly into the brain or into the space between the brain's membranes. Bleeding may occur due to a ruptured brain aneurysm. Diagnosis is typically based on a physical exam and supported by medical imaging such as a CT scan or MRI scan. A CT scan can rule out bleeding, but may not necessarily rule out ischemia, which early on typically does not show up on a CT scan. Other tests such as an electrocardiogram (ECG) and blood tests are done to determine risk factors and possible causes. Low blood sugar may cause similar symptoms. Prevention includes decreasing risk factors, surgery to open up the arteries to the brain in those with problematic carotid narrowing, and anticoagulant medication in people with atrial fibrillation. Aspirin or statins may be recommended by physicians for prevention. Stroke is a medical emergency. Ischemic strokes, if detected within three to four-and-a-half hours, may be treatable with medication that can break down the clot, while hemorrhagic strokes sometimes benefit from surgery. Treatment to attempt recovery of lost function is called stroke rehabilitation, and ideally takes place in a stroke unit; however, these are not available in much of the world. In 2023, 15 million people worldwide had a stroke. In 2021, stroke was the third biggest cause of death, responsible for approximately 10% of total deaths. In 2015, there were about 42.4 million people who had previously had stroke and were still alive. Between 1990 and 2010 the annual incidence of stroke decreased by approximately 10% in the developed world, but increased by 10% in the developing world. In 2015, stroke was the second most frequent cause of death after coronary artery disease, accounting for 6.3 million deaths (11% of the total). About 3.0 million deaths resulted from ischemic stroke while 3.3 million deaths resulted from hemorrhagic stroke. About half of people who have had a stroke live less than one year. Overall, two thirds of cases of stroke occurred in those over 65 years old. Classification Stroke can be classified into two major categories: ischemic and hemorrhagic. Ischemic stroke is caused by interruption of the blood supply to the brain, while hemorrhagic stroke results from the rupture of a blood vessel or an abnormal vascular structure. About 87% of stroke is ischemic, with the rest being hemorrhagic. 
Bleeding can develop inside areas of ischemia, a condition known as "hemorrhagic transformation." It is unknown how many cases of hemorrhagic stroke actually start as ischemic stroke. Definition In the 1970s the World Health Organization defined "stroke" as a "neurological deficit of cerebrovascular cause that persists beyond 24 hours or is interrupted by death within 24 hours", although the word "stroke" is centuries old. This definition was supposed to reflect the reversibility of tissue damage and was devised for the purpose, with the time frame of 24 hours being chosen arbitrarily. The 24-hour limit divides stroke from transient ischemic attack, which is a related syndrome of stroke symptoms that resolve completely within 24 hours. With the availability of treatments that can reduce stroke severity when given early, many now prefer alternative terminology, such as "brain attack" and "acute ischemic cerebrovascular syndrome" (modeled after heart attack and acute coronary syndrome, respectively), to reflect the urgency of stroke symptoms and the need to act swiftly. Ischemic During ischemic stroke, blood supply to part of the brain is decreased, leading to dysfunction of the brain tissue in that area. There are four reasons why this might happen: Thrombosis (obstruction of a blood vessel by a blood clot forming locally) Embolism (obstruction due to an embolus from elsewhere in the body), Systemic hypoperfusion (general decrease in blood supply, e.g., in shock) Cerebral venous sinus thrombosis. Stroke without an obvious explanation is termed cryptogenic stroke (idiopathic); this constitutes 30–40% of all cases of ischemic stroke. There are classification systems for acute ischemic stroke. The Oxford Community Stroke Project classification (OCSP, also known as the Bamford or Oxford classification) relies primarily on the initial symptoms; based on the extent of the symptoms, the stroke episode is classified as total anterior circulation infarct (TACI), partial anterior circulation infarct (PACI), lacunar infarct (LACI) or posterior circulation infarct (POCI). These four entities predict the extent of the stroke, the area of the brain that is affected, the underlying cause, and the prognosis. The TOAST (Trial of Org 10172 in Acute Stroke Treatment) classification is based on clinical symptoms as well as results of further investigations; on this basis, stroke is classified as being due to (1) thrombosis or embolism due to atherosclerosis of a large artery, (2) an embolism originating in the heart, (3) complete blockage of a small blood vessel, (4) other determined cause, (5) undetermined cause (two possible causes, no cause identified, or incomplete investigation). Users of stimulants such as cocaine and methamphetamine are at a high risk for ischemic stroke. Hemorrhagic There are two main types of hemorrhagic stroke: Intracerebral hemorrhage, which is bleeding within the brain itself (when an artery in the brain bursts, flooding the surrounding tissue with blood), due to either intraparenchymal hemorrhage (bleeding within the brain tissue) or intraventricular hemorrhage (bleeding within the brain's ventricular system). Subarachnoid hemorrhage, which is bleeding that occurs outside of the brain tissue but still within the skull, and precisely between the arachnoid mater and pia mater (the delicate innermost layer of the three layers of the meninges that surround the brain). 
The above two main types of hemorrhagic stroke are also two different forms of intracranial hemorrhage, which is the accumulation of blood anywhere within the cranial vault; but the other forms of intracranial hemorrhage, such as epidural hematoma (bleeding between the skull and the dura mater, which is the thick outermost layer of the meninges that surround the brain) and subdural hematoma (bleeding in the subdural space), are not considered "hemorrhagic stroke". Hemorrhagic stroke may occur on the background of alterations to the blood vessels in the brain, such as cerebral amyloid angiopathy, cerebral arteriovenous malformation and an intracranial aneurysm, which can cause intraparenchymal or subarachnoid hemorrhage. In addition to neurological impairment, hemorrhagic stroke usually causes specific symptoms (for instance, subarachnoid hemorrhage classically causes a severe headache known as a thunderclap headache) or reveals evidence of a previous head injury. Signs and symptoms Stroke may be preceded by premonitory symptoms, which may indicate a stroke is imminent. These symptoms may include dizziness, dysarthria (speech disorder), exhaustion, hemiparesis (weakness on one side of the body), paresthesia (tingling, pricking, chilling, burning, numbness of the skin), pathological laughter, seizure that turns into paralysis, "thunderclap" headache, or vomiting. Premonitory symptoms are not diagnostic of a stroke, and may be a sign of other illness. Assessing onset (gradual or sudden), duration, and the presence of other associated symptoms is important, and premonitory symptoms may not appear at all or may vary depending on the type of stroke. Stroke symptoms typically start suddenly, over seconds to minutes, and in most cases do not progress further. The symptoms depend on the area of the brain affected. The more extensive the area of the brain affected, the more functions that are likely to be lost. Some forms of stroke can cause additional symptoms. For example, in intracranial hemorrhage, the affected area may compress other structures. Most forms of stroke are not associated with a headache, apart from subarachnoid hemorrhage and cerebral venous thrombosis and occasionally intracerebral hemorrhage. Early recognition Systems have been proposed to increase recognition of stroke. Sudden-onset face weakness, arm drift (i.e., if a person, when asked to raise both arms, involuntarily lets one arm drift downward) and abnormal speech are the findings most likely to lead to the correct identification of a case of stroke, increasing the likelihood of stroke by a factor of 5.5 when at least one of these is present. Similarly, when all three of these are absent, the likelihood of stroke is decreased (negative likelihood ratio of 0.39). While these findings are not perfect for diagnosing stroke, the fact that they can be evaluated relatively rapidly and easily makes them very valuable in the acute setting. A mnemonic to remember the warning signs of stroke is FAST (facial droop, arm weakness, speech difficulty, and time to call emergency services), as advocated by the Department of Health (United Kingdom) and the Stroke Association, the American Stroke Association, and the National Stroke Association (US). FAST is less reliable in the recognition of posterior circulation stroke.
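The likelihood ratios quoted above can be turned into post-test probabilities using the odds form of Bayes' theorem. The following is a minimal illustrative sketch, not taken from the article: the 10% pre-test probability is an assumed value chosen purely for the example.

```python
# Illustrative sketch: applying the likelihood ratios quoted above
# (positive LR ~5.5, negative LR ~0.39) to an assumed pre-test probability.

def update_probability(pretest_prob: float, likelihood_ratio: float) -> float:
    """Apply a likelihood ratio using the odds form of Bayes' theorem."""
    pretest_odds = pretest_prob / (1.0 - pretest_prob)
    posttest_odds = pretest_odds * likelihood_ratio
    return posttest_odds / (1.0 + posttest_odds)

pretest = 0.10  # assumed (hypothetical) pre-test probability of stroke
print(update_probability(pretest, 5.5))   # ~0.38 if at least one finding is present
print(update_probability(pretest, 0.39))  # ~0.04 if all three findings are absent
```

The same arithmetic applies to any assumed pre-test probability; the point is only that a positive likelihood ratio above 1 raises the probability of stroke and a negative likelihood ratio below 1 lowers it.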
The revised mnemonic BE FAST, which adds balance (sudden trouble keeping balance while walking or standing) and eyesight (new onset of blurry or double vision or sudden, painless loss of sight) to the assessment, has been proposed to address this shortcoming and improve early detection of stroke even further. Other scales for prehospital detection of stroke include the Los Angeles Prehospital Stroke Screen (LAPSS) and the Cincinnati Prehospital Stroke Scale (CPSS), on which the FAST method was based. Use of these scales is recommended by professional guidelines. For people referred to the emergency room, early recognition of stroke is deemed important as this can expedite diagnostic tests and treatments. A scoring system called ROSIER (recognition of stroke in the emergency room) is recommended for this purpose; it is based on features from the medical history and physical examination. Associated symptoms Loss of consciousness, headache, and vomiting usually occur more often in hemorrhagic stroke than in thrombosis because of the increased intracranial pressure from the leaking blood compressing the brain. If symptoms are maximal at onset, the cause is more likely to be a subarachnoid hemorrhage or an embolic stroke. Subtypes If the area of the brain affected includes one of the three prominent central nervous system pathways—the spinothalamic tract, corticospinal tract, and the dorsal column–medial lemniscus pathway, symptoms may include: hemiplegia and muscle weakness of the face numbness reduction in sensory or vibratory sensation initial flaccidity (reduced muscle tone), replaced by spasticity (increased muscle tone), excessive reflexes, and obligatory synergies. In most cases, the symptoms affect only one side of the body (unilateral). The defect in the brain is usually on the opposite side of the body. However, since these pathways also travel in the spinal cord and any lesion there can also produce these symptoms, the presence of any one of these symptoms does not necessarily indicate stroke. In addition to the above central nervous system pathways, the brainstem gives rise to most of the twelve cranial nerves. 
A brainstem stroke affecting the brainstem and brain, therefore, can produce symptoms relating to deficits in these cranial nerves: altered smell, taste, hearing, or vision (total or partial) drooping of eyelid (ptosis) and weakness of ocular muscles decreased reflexes: gag, swallow, pupil reactivity to light decreased sensation and muscle weakness of the face balance problems and nystagmus altered breathing and heart rate weakness in sternocleidomastoid muscle with inability to turn head to one side weakness in tongue (inability to stick out the tongue or move it from side to side) If the cerebral cortex is involved, the central nervous system pathways can again be affected, but can also produce the following symptoms: aphasia (difficulty with verbal expression, auditory comprehension, reading and writing; Broca's or Wernicke's area typically involved) dysarthria (motor speech disorder resulting from neurological injury) apraxia (altered voluntary movements) visual field defect memory deficits (involvement of temporal lobe) hemineglect (involvement of parietal lobe) disorganized thinking, confusion, hypersexual gestures (with involvement of frontal lobe) lack of insight of his or her, usually stroke-related, disability If the cerebellum is involved, ataxia might be present and this includes: altered walking gait altered movement coordination vertigo and or disequilibrium Preceding signs and symptoms In the days before a stroke (generally in the previous 7 days, even the previous one), a considerable proportion of patients have a "sentinel headache": a severe and unusual headache that indicates a problem. Its appearance makes it advisable to seek medical review and to consider prevention against stroke. Causes Thrombotic stroke In thrombotic stroke, a thrombus (blood clot) usually forms around atherosclerotic plaques. Since blockage of the artery is gradual, onset of symptomatic thrombotic stroke is slower than that of hemorrhagic stroke. A thrombus itself (even if it does not completely block the blood vessel) can lead to an embolic stroke (see below) if the thrombus breaks off and travels in the bloodstream, at which point it is called an embolus. Two types of thrombosis can cause stroke: Large vessel disease involves the common and internal carotid arteries, the vertebral artery, and the Circle of Willis. Diseases that may form thrombi in the large vessels include (in descending incidence): atherosclerosis, vasoconstriction (tightening of the artery), aortic, carotid or vertebral artery dissection, inflammatory diseases of the blood vessel wall (Takayasu arteritis, giant cell arteritis, vasculitis), noninflammatory vasculopathy, Moyamoya disease and fibromuscular dysplasia. Strokes caused by artery dissections are in the strictest sense not always caused by a 'defined disease state', such events can occur in very young people and can be caused by physical injury such as hyperextension of the neck area or often by other forms of trauma. Small vessel disease involves the smaller arteries inside the brain: branches of the circle of Willis, middle cerebral artery, stem, and arteries arising from the distal vertebral and basilar artery. Diseases that may form thrombi in the small vessels include (in descending incidence): lipohyalinosis (build-up of fatty hyaline matter in the blood vessel as a result of high blood pressure and aging) and fibrinoid degeneration (stroke involving these vessels is known as a lacunar stroke) and microatheroma (small atherosclerotic plaques). 
Anemia causes increased blood flow in the circulatory system. This causes the endothelial cells of the blood vessels to express adhesion factors, which encourage the clotting of blood and the formation of thrombi. Sickle-cell anemia, which can cause blood cells to clump up and block blood vessels, can also lead to stroke. Stroke is the second leading cause of death in people under 20 with sickle-cell anemia. Air pollution may also increase stroke risk. Embolic stroke An embolic stroke refers to an arterial embolism (a blockage of an artery) by an embolus, a traveling particle or debris in the arterial bloodstream originating from elsewhere. An embolus is most frequently a thrombus, but it can also be a number of other substances including fat (e.g., from bone marrow in a broken bone), air, cancer cells or clumps of bacteria (usually from infectious endocarditis). Because an embolus arises from elsewhere, local therapy solves the problem only temporarily. Thus, the source of the embolus must be identified. Because the embolic blockage is sudden in onset, symptoms are usually maximal at the start. Also, symptoms may be transient as the embolus is partially resorbed and moves to a different location or dissipates altogether. Emboli most commonly arise from the heart (especially in atrial fibrillation) but may originate from elsewhere in the arterial tree. In paradoxical embolism, a deep vein thrombosis embolizes through an atrial or ventricular septal defect in the heart into the brain. Causes of stroke related to the heart can be distinguished between high- and low-risk: High risk: atrial fibrillation and paroxysmal atrial fibrillation, rheumatic disease of the mitral or aortic valve, artificial heart valves, known cardiac thrombus of the atrium or ventricle, sick sinus syndrome, sustained atrial flutter, recent myocardial infarction, chronic myocardial infarction together with ejection fraction <28 percent, symptomatic congestive heart failure with ejection fraction <30 percent, dilated cardiomyopathy, Libman-Sacks endocarditis, Marantic endocarditis, infective endocarditis, papillary fibroelastoma, left atrial myxoma, and coronary artery bypass graft (CABG) surgery. Low risk/potential: calcification of the annulus (ring) of the mitral valve, patent foramen ovale (PFO), atrial septal aneurysm, atrial septal aneurysm with patent foramen ovale, left ventricular aneurysm without thrombus, isolated left atrial "smoke" on echocardiography (no mitral stenosis or atrial fibrillation), and complex atheroma in the ascending aorta or proximal arch. Among those who have a complete blockage of one of the carotid arteries, the risk of stroke on that side is about one percent per year. A special form of embolic stroke is the embolic stroke of undetermined source (ESUS). This subset of cryptogenic stroke is defined as a non-lacunar brain infarct without proximal arterial stenosis or cardioembolic sources. About one out of six cases of ischemic stroke could be classified as ESUS. Cerebral hypoperfusion Cerebral hypoperfusion is the reduction of blood flow to all parts of the brain. The reduction could be to a particular part of the brain depending on the cause. It is most commonly due to heart failure from cardiac arrest or arrhythmias, or from reduced cardiac output as a result of myocardial infarction, pulmonary embolism, pericardial effusion, or bleeding. Hypoxemia (low blood oxygen content) may precipitate the hypoperfusion. 
Because the reduction in blood flow is global, all parts of the brain may be affected, especially vulnerable "watershed" areas, the border zone regions supplied by the major cerebral arteries. A watershed stroke refers to the condition when the blood supply to these areas is compromised. Blood flow to these areas does not necessarily stop, but instead it may lessen to the point where brain damage can occur. Venous thrombosis Cerebral venous sinus thrombosis leads to stroke due to locally increased venous pressure, which exceeds the pressure generated by the arteries. These infarcts are more likely to undergo hemorrhagic transformation (leaking of blood into the damaged area) than other types of ischemic stroke. Intracerebral hemorrhage It generally occurs in small arteries or arterioles and is commonly due to hypertension, intracranial vascular malformations (including cavernous angiomas or arteriovenous malformations), cerebral amyloid angiopathy, or infarcts into which secondary hemorrhage has occurred. Other potential causes are trauma, bleeding disorders, amyloid angiopathy, and illicit drug use (e.g., amphetamines or cocaine). The hematoma enlarges until pressure from surrounding tissue limits its growth, or until it decompresses by emptying into the ventricular system, CSF or the pial surface. A third of intracerebral bleeds extend into the brain's ventricles. Intracerebral hemorrhage (ICH) has a mortality rate of 44 percent after 30 days, higher than ischemic stroke or subarachnoid hemorrhage (which technically may also be classified as a type of stroke). Other Other causes may include spasm of an artery. This may occur due to cocaine. Cancer is another well-recognized potential cause of stroke. Although malignancy in general can increase the risk of stroke, certain types of cancer, such as pancreatic, lung and gastric cancer, are typically associated with a higher thromboembolism risk. The mechanism by which cancer increases stroke risk is thought to be secondary to an acquired hypercoagulability. Silent stroke Silent stroke is stroke that does not have any outward symptoms, and people are typically unaware they have experienced a stroke. Despite not causing identifiable symptoms, silent stroke still damages the brain and places the person at increased risk for both transient ischemic attack and major stroke in the future. Conversely, those who have had a major stroke are also at risk of having silent strokes. In a broad study in 1998, more than 11 million people were estimated to have experienced a stroke in the United States. Approximately 770,000 of these were symptomatic and 11 million were first-ever silent MRI infarcts or hemorrhages. Silent stroke typically causes lesions which are detected via the use of neuroimaging such as MRI. Silent stroke is estimated to occur at five times the rate of symptomatic stroke. The risk of silent stroke increases with age, but silent strokes may also affect younger adults and children, especially those with acute anemia. Pathophysiology Ischemic Ischemic stroke occurs because of a loss of blood supply to part of the brain, initiating the ischemic cascade. Atherosclerosis may disrupt the blood supply by narrowing the lumen of blood vessels, leading to a reduction of blood flow, by causing the formation of blood clots within the vessel, or by releasing showers of small emboli through the disintegration of atherosclerotic plaques. 
Embolic infarction occurs when emboli formed elsewhere in the circulatory system, typically in the heart as a consequence of atrial fibrillation, or in the carotid arteries, break off, enter the cerebral circulation, then lodge in and block brain blood vessels. Since blood vessels in the brain are blocked, the brain becomes low in energy and resorts to anaerobic metabolism within the region of brain tissue affected by ischemia. Anaerobic metabolism produces less adenosine triphosphate (ATP) but releases a by-product called lactic acid. Lactic acid is an irritant which could potentially destroy cells, since it is an acid and disrupts the normal acid-base balance in the brain. The area affected by ischemia is referred to as the "ischemic penumbra". After the initial ischemic event, the penumbra transitions from tissue remodeling characterized by damage to remodeling characterized by repair. As oxygen or glucose becomes depleted in ischemic brain tissue, the production of high energy phosphate compounds such as adenosine triphosphate (ATP) fails, leading to failure of energy-dependent processes (such as ion pumping) necessary for tissue cell survival. This sets off a series of interrelated events that result in cellular injury and death. A major cause of neuronal injury is the release of the excitatory neurotransmitter glutamate. The concentration of glutamate outside the cells of the nervous system is normally kept low by so-called uptake carriers, which are powered by the concentration gradients of ions (mainly Na+) across the cell membrane. However, stroke cuts off the supply of oxygen and glucose which powers the ion pumps maintaining these gradients. As a result, the transmembrane ion gradients run down, and glutamate transporters reverse their direction, releasing glutamate into the extracellular space. Glutamate acts on receptors in nerve cells (especially NMDA receptors), producing an influx of calcium which activates enzymes that digest the cells' proteins, lipids, and nuclear material. Calcium influx can also lead to the failure of mitochondria, which can lead further toward energy depletion and may trigger programmed cell death. Ischemia also induces production of oxygen free radicals and other reactive oxygen species. These react with and damage a number of cellular and extracellular elements. Damage to the blood vessel lining or endothelium may occur. These processes are the same for any type of ischemic tissue and are referred to collectively as the ischemic cascade. However, brain tissue is especially vulnerable to ischemia since it has little respiratory reserve and is completely dependent on aerobic metabolism, unlike most other organs. Collateral flow The brain can compensate for inadequate blood flow in a single artery through the collateral system. This system relies on the efficient connection between the carotid and vertebral arteries through the circle of Willis and, to a lesser extent, the major arteries supplying the cerebral hemispheres. However, variations in the circle of Willis, caliber of collateral vessels, and acquired arterial lesions such as atherosclerosis can disrupt this compensatory mechanism, increasing the risk of brain ischemia resulting from artery blockage. The extent of damage depends on the duration and severity of the ischemia. If ischemia persists for more than 5 minutes with perfusion below 5% of normal, some neurons will die. 
However, if ischemia is mild, the damage will occur slowly and may take up to 6 hours to completely destroy the brain tissue. In case of severe ischemia lasting more than 15 to 30 minutes, all of the affected tissue will die, leading to infarction. The rate of damage is affected by temperature, with hyperthermia accelerating damage and hypothermia slowing it down, and by other factors. Prompt restoration of blood flow to ischemic tissues can reduce or reverse injury, especially if the tissues are not yet irreversibly damaged. This is particularly important for the moderately ischemic areas (penumbras) surrounding areas of severe ischemia, which may still be salvageable due to collateral flow. Hemorrhagic Hemorrhagic strokes are classified based on their underlying pathology. Some causes of hemorrhagic stroke are hypertensive hemorrhage, ruptured aneurysm, ruptured AV fistula, transformation of prior ischemic infarction, and drug-induced bleeding. They result in tissue injury by causing compression of tissue from an expanding hematoma or hematomas. In addition, the pressure may lead to a loss of blood supply to affected tissue with resulting infarction, and the blood released by brain hemorrhage appears to have direct toxic effects on brain tissue and vasculature. Inflammation contributes to the secondary brain injury after hemorrhage. Diagnosis Stroke is diagnosed through several techniques: a neurological examination (such as the NIHSS), CT scans (most often without contrast enhancement) or MRI scans, Doppler ultrasound, and arteriography. The diagnosis of stroke itself is clinical, with assistance from the imaging techniques. Imaging techniques also assist in determining the subtypes and cause of stroke. There is as yet no commonly used blood test for the diagnosis of stroke itself, though blood tests may help in finding out the likely cause of stroke. In deceased people, an autopsy may help establish the time between stroke onset and death. Physical examination A physical examination, including taking a medical history of the symptoms and a neurological status, helps give an evaluation of the location and severity of stroke. It can give a standard score on, for example, the NIH stroke scale. Imaging For diagnosing ischemic (blockage) stroke in the emergency setting, CT scans (without contrast enhancement) have a sensitivity of 16% (less than 10% within the first 3 hours of symptom onset) and a specificity of 96%, while MRI scans have a sensitivity of 83% and a specificity of 98%. For diagnosing hemorrhagic stroke in the emergency setting, CT scans (without contrast enhancement) have a sensitivity of 89% and a specificity of 100%, while MRI scans have a sensitivity of 81% and a specificity of 100%. For detecting chronic hemorrhages, an MRI scan is more sensitive. For the assessment of stable stroke, nuclear medicine scans such as single-photon emission computed tomography (SPECT) and positron emission tomography–computed tomography (PET/CT) may be helpful. SPECT documents cerebral blood flow, whereas PET with an FDG isotope shows cerebral glucose metabolism. CT scans may not detect ischemic stroke, especially if it is small, of recent onset, or in the brainstem or cerebellum areas (posterior circulation infarct). MRI is better at detecting a posterior circulation infarct with diffusion-weighted imaging. A CT scan is used more to rule out certain stroke mimics and detect bleeding. The presence of leptomeningeal collateral circulation in the brain is associated with better clinical outcomes after recanalization treatment. 
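The sensitivity and specificity figures quoted above describe test performance only; what a positive or negative scan actually means for an individual also depends on how likely stroke is in the population being scanned. The sketch below is illustrative only: the 20% prevalence is an assumed value, not taken from the article.

```python
# Illustrative sketch: deriving predictive values from the sensitivity and
# specificity figures quoted above, under an assumed prevalence of 20%.

def predictive_values(sensitivity: float, specificity: float, prevalence: float):
    true_pos = sensitivity * prevalence
    false_neg = (1 - sensitivity) * prevalence
    true_neg = specificity * (1 - prevalence)
    false_pos = (1 - specificity) * (1 - prevalence)
    ppv = true_pos / (true_pos + false_pos)  # probability of stroke given a positive test
    npv = true_neg / (true_neg + false_neg)  # probability of no stroke given a negative test
    return ppv, npv

# Early non-contrast CT for ischemic stroke: sensitivity 16%, specificity 96%
print(predictive_values(0.16, 0.96, 0.20))
# MRI for ischemic stroke: sensitivity 83%, specificity 98%
print(predictive_values(0.83, 0.98, 0.20))
```

Under these assumptions the low sensitivity of early non-contrast CT translates into a poor negative predictive value for ischemia, which is consistent with the article's point that CT is used mainly to rule out bleeding and stroke mimics rather than to exclude ischemic stroke.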
Cerebrovascular reserve capacity is another factor that affects stroke outcome; it is the amount of increase in cerebral blood flow after purposeful stimulation of blood flow by the physician, such as by giving inhaled carbon dioxide or intravenous acetazolamide. The increase in blood flow can be measured by PET scan or transcranial Doppler sonography. However, in people with obstruction of the internal carotid artery of one side, the presence of leptomeningeal collateral circulation is associated with reduced cerebral reserve capacity. Underlying cause When stroke has been diagnosed, other studies may be performed to determine the underlying cause. With the treatment and diagnosis options available, it is of particular importance to determine whether there is a peripheral source of emboli. Test selection may vary since the cause of stroke varies with age, comorbidity and the clinical presentation. The following are commonly used techniques: an ultrasound/Doppler study of the carotid arteries (to detect carotid stenosis) or dissection of the precerebral arteries; an electrocardiogram (ECG) and echocardiogram (to identify arrhythmias and resultant clots in the heart which may spread to the brain vessels through the bloodstream); a Holter monitor study to identify intermittent abnormal heart rhythms; an angiogram of the cerebral vasculature (if a bleed is thought to have originated from an aneurysm or arteriovenous malformation); blood tests to determine if blood cholesterol is high, if there is an abnormal tendency to bleed, and if some rarer processes such as homocystinuria might be involved. For hemorrhagic stroke, a CT or MRI scan with intravascular contrast may be able to identify abnormalities in the brain arteries (such as aneurysms) or other sources of bleeding; structural MRI may be performed if these show no cause. If this too does not identify an underlying reason for the bleeding, invasive cerebral angiography could be performed, but this requires access to the bloodstream with an intravascular catheter and can cause further stroke as well as complications at the insertion site; this investigation is therefore reserved for specific situations. If there are symptoms suggesting that the hemorrhage might have occurred as a result of venous thrombosis, CT or MRI venography can be used to examine the cerebral veins. Misdiagnosis Among people with ischemic stroke, misdiagnosis occurs 2 to 26% of the time. A "stroke chameleon" (SC) is a stroke which is diagnosed as something else. People not having stroke may also be misdiagnosed with the condition. Giving thrombolytics (clot-busting medication) in such cases causes intracerebral bleeding 1 to 2% of the time, which is less than in people with stroke. This unnecessary treatment adds to health care costs. Even so, the AHA/ASA guidelines state that starting intravenous tPA in possible mimics is preferred to delaying treatment for additional testing. Women, African-Americans, Hispanic-Americans, Asian and Pacific Islanders are more often misdiagnosed with a condition other than stroke when they are in fact having a stroke. In addition, adults under 44 years of age are seven times more likely to have a stroke missed than are adults over 75 years of age. This is especially the case for younger people with posterior circulation infarcts. Some medical centers have used hyperacute MRI in experimental studies for people initially thought to have a low likelihood of stroke, and in some of these people, strokes have been found, which were then treated with thrombolytic medication. 
Prevention Given the disease burden of stroke, prevention is an important public health concern. Primary prevention is less effective than secondary prevention (as judged by the number needed to treat to prevent one stroke per year). Recent guidelines detail the evidence for primary prevention in stroke. About the use of aspirin as a preventive medication for stroke, in healthy people aspirin does not appear beneficial and thus is not recommended, but in people with high cardiovascular risk, or those who have had a myocardial infarction, it provides some protection against a first stroke. In those who have previously had stroke, treatment with medications such as aspirin, clopidogrel, and dipyridamole may be beneficial. The U.S. Preventive Services Task Force (USPSTF) recommends against screening for carotid artery stenosis in those without symptoms. Risk factors The most important modifiable risk factors for stroke are high blood pressure and atrial fibrillation, although the size of the effect is small; 833 people have to be treated for 1 year to prevent one stroke. Other modifiable risk factors include high blood cholesterol levels, diabetes mellitus, end-stage kidney disease, cigarette smoking (active and passive), heavy alcohol use, drug use, lack of physical activity, obesity, processed red meat consumption, and unhealthy diet. Smoking just one cigarette per day increases the risk more than 30%. Alcohol use could predispose to ischemic stroke, as well as intracerebral and subarachnoid hemorrhage via multiple mechanisms (for example, via hypertension, atrial fibrillation, rebound thrombocytosis and platelet aggregation and clotting disturbances). Drugs, most commonly amphetamines and cocaine, can induce stroke through damage to the blood vessels in the brain and acute hypertension. Migraine with aura doubles a person's risk for ischemic stroke. Untreated, celiac disease regardless of the presence of symptoms can be an underlying cause of stroke, both in children and adults. According to a 2021 WHO study, working 55+ hours a week raises the risk of stroke by 35% and the risk of dying from heart conditions by 17%, when compared to a 35-40-hour week. High levels of physical activity reduce the risk of stroke by about 26%. There is a lack of high quality studies looking at promotional efforts to improve lifestyle factors. Nonetheless, given the large body of circumstantial evidence, best medical management for stroke includes advice on diet, exercise, smoking and alcohol use. Medication is the most common method of stroke prevention; carotid endarterectomy can be a useful surgical method of preventing stroke. Blood pressure High blood pressure accounts for 35–50% of stroke risk. Blood pressure reduction of 10 mmHg systolic or 5 mmHg diastolic reduces the risk of stroke by ~40%. Lowering blood pressure has been conclusively shown to prevent both ischemic and hemorrhagic stroke. It is equally important in secondary prevention. Even people older than 80 years and those with isolated systolic hypertension benefit from antihypertensive therapy. The available evidence does not show large differences in stroke prevention between antihypertensive drugs—therefore, other factors such as protection against other forms of cardiovascular disease and cost should be considered. The routine use of beta-blockers following stroke or TIA has not been shown to result in benefits. Blood lipids High cholesterol levels have been inconsistently associated with (ischemic) stroke. 
Statins have been shown to reduce the risk of stroke by about 15%. Since earlier meta-analyses of other lipid-lowering drugs did not show a decreased risk, statins might exert their effect through mechanisms other than their lipid-lowering effects. Diabetes mellitus Diabetes mellitus increases the risk of stroke by 2 to 3 times. While intensive blood sugar control has been shown to reduce small blood vessel complications such as kidney damage and damage to the retina of the eye it has not been shown to reduce large blood vessel complications such as stroke. Anticoagulant drugs Oral anticoagulants such as warfarin have been the mainstay of stroke prevention for over 50 years. However, several studies have shown that aspirin and other antiplatelets are highly effective in secondary prevention after stroke or transient ischemic attack. Low doses of aspirin (for example 75–150 mg) are as effective as high doses but have fewer side effects; the lowest effective dose remains unknown. Thienopyridines (clopidogrel, ticlopidine) might be slightly more effective than aspirin and have a decreased risk of gastrointestinal bleeding but are more expensive. Both aspirin and clopidogrel may be useful in the first few weeks after a minor stroke or high-risk TIA. Clopidogrel has less side effects than ticlopidine. Dipyridamole can be added to aspirin therapy to provide a small additional benefit, even though headache is a common side effect. Low-dose aspirin is also effective for stroke prevention after having a myocardial infarction. Those with atrial fibrillation have a 5% a year risk of stroke, and those with valvular atrial fibrillation have an even higher risk. Depending on the stroke risk, anticoagulation with medications such as warfarin or aspirin is useful for prevention with various levels of comparative effectiveness depending on the type of treatment used. Oral anticoagulants, especially Xa (apixaban) and thrombin (dabigatran) inhibitors, have been shown to be superior to warfarin in stroke reduction and have a lower or similar bleeding risk in patients with atrial fibrillation. Except in people with atrial fibrillation, oral anticoagulants are not advised for stroke prevention—any benefit is offset by bleeding risk. In primary prevention, however, antiplatelet drugs did not reduce the risk of ischemic stroke but increased the risk of major bleeding. Further studies are needed to investigate a possible protective effect of aspirin against ischemic stroke in women. Surgery Carotid endarterectomy or carotid angioplasty can be used to remove atherosclerotic narrowing of the carotid artery. There is evidence supporting this procedure in selected cases. Endarterectomy for a significant stenosis has been shown to be useful in preventing further stroke in those who have already had the condition. Carotid artery stenting has not been shown to be equally useful. People are selected for surgery based on age, gender, degree of stenosis, time since symptoms and the person's preferences. Surgery is most efficient when not delayed too long—the risk of recurrent stroke in a person who has a 50% or greater stenosis is up to 20% after 5 years, but endarterectomy reduces this risk to around 5%. The number of procedures needed to cure one person was 5 for early surgery (within two weeks after the initial stroke), but 125 if delayed longer than 12 weeks. Screening for carotid artery narrowing has not been shown to be a useful test in the general population. 
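The endarterectomy figures above are related through standard number-needed-to-treat arithmetic (NNT = 1 / absolute risk reduction). The sketch below is a rough illustration only, using the 5-year risk figures quoted above; the article's published values of 5 (early surgery) and 125 (delayed surgery) come from specific subgroup analyses rather than from this simple calculation.

```python
# Illustrative sketch of number-needed-to-treat (NNT) arithmetic, using the
# figures quoted above for symptomatic stenosis of 50% or greater:
# roughly 20% 5-year recurrent-stroke risk without endarterectomy vs ~5% with it.
# These inputs are taken from the surrounding text; the result is only a rough
# cross-check of the relationship NNT = 1 / absolute risk reduction.

risk_without_surgery = 0.20   # 5-year recurrent stroke risk without surgery
risk_with_surgery = 0.05      # 5-year recurrent stroke risk after endarterectomy

absolute_risk_reduction = risk_without_surgery - risk_with_surgery
nnt = 1 / absolute_risk_reduction
print(f"Absolute risk reduction: {absolute_risk_reduction:.0%}")  # 15%
print(f"NNT over 5 years: about {nnt:.0f}")                       # about 7
```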
Studies of surgical intervention for carotid artery stenosis without symptoms have shown only a small decrease in the risk of stroke. To be beneficial, the complication rate of the surgery should be kept below 4%. Even then, for 100 surgeries, 5 people will benefit by avoiding stroke, 3 will develop stroke despite surgery, 3 will develop stroke or die due to the surgery itself, and 89 will remain stroke-free but would also have done so without intervention. Diet Nutrition, specifically the Mediterranean-style diet, has the potential to decrease the risk of having a stroke by more than half. It does not appear that lowering levels of homocysteine with folic acid affects the risk of stroke. Women A number of specific recommendations have been made for women including taking aspirin after the 11th week of pregnancy if there is a history of previous chronic high blood pressure and taking blood pressure medications during pregnancy if the blood pressure is greater than 150 mmHg systolic or greater than 100 mmHg diastolic. In those who have previously had preeclampsia, other risk factors should be treated more aggressively. Previous stroke or TIA Keeping blood pressure below 140/90 mmHg is recommended. Anticoagulation can prevent recurrent ischemic stroke. Among people with nonvalvular atrial fibrillation, anticoagulation can reduce stroke by 60% while antiplatelet agents can reduce stroke by 20%. However, a recent meta-analysis suggests harm from anticoagulation started early after an embolic stroke. Stroke prevention treatment for atrial fibrillation is determined according to the CHA2DS2–VASc score. The most widely used anticoagulant to prevent thromboembolic stroke in people with nonvalvular atrial fibrillation is the oral agent warfarin while a number of newer agents including dabigatran are alternatives which do not require prothrombin time monitoring. Anticoagulants, when used following stroke, should not be stopped for dental procedures. If studies show carotid artery stenosis, and the person has a degree of residual function on the affected side, carotid endarterectomy (surgical removal of the stenosis) may decrease the risk of recurrence if performed rapidly after stroke. Management Stroke, whether ischemic or hemorrhagic, is an emergency that warrants immediate medical attention. The specific treatment will depend on the type of stroke, the time elapsed since the onset of symptoms, and the underlying cause or presence of comorbidities. Ischemic stroke Aspirin reduces the overall risk of recurrence by 13% with greater benefit early on. Definitive therapy within the first few hours is aimed at removing the blockage by breaking the clot down (thrombolysis), or by removing it mechanically (thrombectomy). The philosophical premise underlying the importance of rapid stroke intervention was summed up as Time is Brain! in the early 1990s. Years later, that same idea, that rapid cerebral blood flow restoration results in fewer brain cells dying, has been proved and quantified. Tight blood sugar control in the first few hours does not improve outcomes and may cause harm. High blood pressure is also not typically lowered as this has not been found to be helpful. Cerebrolysin, a mixture of pig brain-derived neurotrophic factors used widely to treat acute ischemic stroke in China, Eastern Europe, Russia, post-Soviet countries, and other Asian countries, does not improve outcomes or prevent death and may increase the risk of severe adverse events. 
There is also no evidence that cerebrolysin-like peptide mixtures, which are extracted from cattle brain, are helpful in treating acute ischemic stroke. Thrombolysis Thrombolysis, such as with recombinant tissue plasminogen activator (rtPA), in acute ischemic stroke, when given within three hours of symptom onset, results in an overall benefit of 10% with respect to living without disability. It does not, however, improve chances of survival. Benefit is greater the earlier it is used. Between three and four and a half hours the effects are less clear. The AHA/ASA recommend it for certain people in this time frame. A 2014 review found a 5% increase in the number of people living without disability at three to six months; however, there was a 2% increased risk of death in the short term. After four and a half hours, thrombolysis worsens outcomes. These benefits or lack of benefits occurred regardless of the age of the person treated. There is no reliable way to determine who will have an intracranial bleed post-treatment versus who will not. In those with findings of salvageable tissue on medical imaging between 4.5 and 9 hours after onset, or who wake up with a stroke, alteplase results in some benefit. Its use is endorsed by the American Heart Association, the American College of Emergency Physicians and the American Academy of Neurology as the recommended treatment for acute stroke within three hours of onset of symptoms as long as there are no other contraindications (such as abnormal lab values, high blood pressure, or recent surgery). This position for tPA is based upon the findings of two studies by one group of investigators which showed that tPA improves the chances for a good neurological outcome. When administered within the first three hours, thrombolysis improves functional outcome without affecting mortality. 6.4% of people with large strokes developed substantial brain bleeding as a complication of being given tPA, which is part of the reason for the increased short-term mortality. The American Academy of Emergency Medicine had previously stated that objective evidence regarding the applicability of tPA for acute ischemic stroke was insufficient. In 2013 the American College of Emergency Medicine refuted this position, acknowledging the body of evidence for the use of tPA in ischemic stroke, but debate continues. Intra-arterial fibrinolysis, where a catheter is passed up an artery into the brain and the medication is injected at the site of thrombosis, has been found to improve outcomes in people with acute ischemic stroke. Endovascular treatment Mechanical removal of the blood clot causing the ischemic stroke, called mechanical thrombectomy, is a potential treatment for occlusion of a large artery, such as the middle cerebral artery. In 2015, one review demonstrated the safety and efficacy of this procedure if performed within 12 hours of the onset of symptoms. It did not change the risk of death but did reduce disability compared to the use of intravenous thrombolysis, which is generally used in people evaluated for mechanical thrombectomy. Certain cases may benefit from thrombectomy up to 24 hours after the onset of symptoms. Craniectomy Stroke affecting large portions of the brain can cause significant brain swelling with secondary brain injury in surrounding tissue. This phenomenon is mainly encountered in stroke affecting brain tissue dependent upon the middle cerebral artery for blood supply and is also called "malignant cerebral infarction" because it carries a dismal prognosis. 
Relief of the pressure may be attempted with medication, but some require hemicraniectomy, the temporary surgical removal of the skull on one side of the head. This decreases the risk of death, although some people – who would otherwise have died – survive with disability. Hemorrhagic stroke People with intracerebral hemorrhage require supportive care, including blood pressure control if required. People are monitored for changes in the level of consciousness, and their blood sugar and oxygenation are kept at optimum levels. Anticoagulants and antithrombotics can make bleeding worse and are generally discontinued (and reversed if possible). A proportion may benefit from neurosurgical intervention to remove the blood and treat the underlying cause, but this depends on the location and the size of the hemorrhage as well as patient-related factors, and ongoing research is being conducted into the question as to which people with intracerebral hemorrhage may benefit. In subarachnoid hemorrhage, early treatment for underlying cerebral aneurysms may reduce the risk of further hemorrhages. Depending on the site of the aneurysm this may be by surgery that involves opening the skull or endovascularly (through the blood vessels). Stroke unit Ideally, people who have had stroke are admitted to a "stroke unit", a ward or dedicated area in a hospital staffed by nurses and therapists with experience in stroke treatment. It has been shown that people admitted to stroke units have a higher chance of surviving than those admitted elsewhere in hospital, even if they are being cared for by doctors without experience in stroke. Nursing care is fundamental in maintaining skin care, feeding, hydration, positioning, and monitoring vital signs such as temperature, pulse, and blood pressure. Rehabilitation Stroke rehabilitation is the process by which those with disabling stroke undergo treatment to help them return to normal life as much as possible by regaining and relearning the skills of everyday living. It also aims to help the survivor understand and adapt to difficulties, prevent secondary complications, and educate family members to play a supporting role. Stroke rehabilitation should begin almost immediately with a multidisciplinary approach. The rehabilitation team may involve physicians trained in rehabilitation medicine, neurologists, clinical pharmacists, nursing staff, physiotherapists, occupational therapists, speech-language pathologists, and orthotists. Some teams may also include psychologists and social workers, since at least one-third of affected people manifests post stroke depression. Validated instruments such as the Barthel scale may be used to assess the likelihood of a person who has had stroke being able to manage at home with or without support subsequent to discharge from a hospital. Stroke rehabilitation should be started as quickly as possible and can last anywhere from a few days to over a year. Most return of function is seen in the first few months, and then improvement falls off with the "window" considered officially by U.S. state rehabilitation units and others to be closed after six months, with little chance of further improvement. However, some people have reported that they continue to improve for years, regaining and strengthening abilities like writing, walking, running, and talking. Daily rehabilitation exercises should continue to be part of the daily routine for people who have had stroke. 
Complete recovery is unusual but not impossible, and most people will improve to some extent: proper diet and exercise are known to help the brain to recover. Spatial neglect The efficacy of cognitive rehabilitation for reducing the disabling effects of neglect and increasing independence remains unproven. However, there is limited evidence that cognitive rehabilitation may have an immediate beneficial effect on tests of neglect. Overall, no rehabilitation approach can be supported by evidence for spatial neglect. Automobile driving The evidence is uncertain as to whether rehabilitation can improve on-road driving skills following stroke. There is limited evidence that training on a driving simulator will improve performance on recognizing road signs after training. The findings are based on low-quality evidence, and further research involving large numbers of participants is needed. Yoga Based on low-quality evidence, it is uncertain whether yoga has a significant benefit for stroke rehabilitation on measures of quality of life, balance, strength, endurance, pain, and disability scores. Yoga may reduce anxiety and could be included as part of patient-centred stroke rehabilitation. Further research is needed assessing the benefits and safety of yoga in stroke rehabilitation. Action observation physical therapy for upper limbs Low-quality evidence suggests that action observation (a type of physiotherapy that is meant to improve neural plasticity through the mirror-neuronal system) may be of some benefit and has no significant adverse effects; however, this benefit may not be clinically significant, and further research is suggested. Cognitive rehabilitation for attention deficits The body of scientific evidence is uncertain on the effectiveness of cognitive rehabilitation for attention deficits in patients following stroke. While there may be an immediate effect on attention after treatment, the findings are based on a small number of low- to moderate-quality studies. Further research is needed to assess whether the effect can be sustained in day-to-day tasks requiring attention. Motor imagery for gait rehabilitation The latest evidence supports the short-term benefits of motor imagery (MI) on walking speed in individuals who have had stroke, in comparison to other therapies. MI does not improve motor function after stroke and does not seem to cause significant adverse events. The findings are based on low-quality evidence, and further research is needed to estimate the effect of MI on walking endurance and the dependence on personal assistance. Physical and occupational therapy Physical and occupational therapy have overlapping areas of expertise; however, physical therapy focuses on joint range of motion and strength by performing exercises and relearning functional tasks such as bed mobility, transferring, walking and other gross motor functions. Physiotherapists can also work with people who have had stroke to improve awareness and use of the hemiplegic side. Rehabilitation involves working on the ability to produce strong movements or the ability to perform tasks using normal patterns. Emphasis is often concentrated on functional tasks and people's goals. One example physiotherapists employ to promote motor learning involves constraint-induced movement therapy. Through continuous practice the person relearns to use and adapt the hemiplegic limb during functional activities to create lasting changes. 
Physical therapy is effective for recovery of function and mobility after stroke. Occupational therapy is involved in training to help relearn everyday activities known as the activities of daily living (ADLs) such as eating, drinking, dressing, bathing, cooking, reading and writing, and toileting. Approaches to helping people with urinary incontinence include physical therapy, cognitive therapy, and specialized interventions with experienced medical professionals, however, it is not clear how effective these approaches are at improving urinary incontinence following stroke. Treatment of spasticity related to stroke often involves early mobilizations, commonly performed by a physiotherapist, combined with elongation of spastic muscles and sustained stretching through different positions. Gaining initial improvement in range of motion is often achieved through rhythmic rotational patterns associated with the affected limb. After full range has been achieved by the therapist, the limb should be positioned in the lengthened positions to prevent against further contractures, skin breakdown, and disuse of the limb with the use of splints or other tools to stabilize the joint. Cold ice wraps or ice packs may briefly relieve spasticity by temporarily reducing neural firing rates. Electrical stimulation to the antagonist muscles or vibrations has also been used with some success. Physical therapy is sometimes suggested for people who experience sexual dysfunction following stroke. Interventions for age-related visual problems in patients with stroke With the prevalence of vision problems increasing with age in stroke patients, the overall effect of interventions for age-related visual problems is uncertain. It is also not sure whether people with stroke respond differently from the general population when treating eye problems. Further research in this area is needed as the body of evidence is very low quality. Speech and language therapy Speech and language therapy is appropriate for people with the speech production disorders: dysarthria and apraxia of speech, aphasia, cognitive-communication impairments, and problems with swallowing. Speech and language therapy for aphasia following stroke improves functional communication, reading, writing and expressive language. Speech and language therapy that is higher intensity, higher dose or provided over a long duration of time leads to significantly better functional communication but people might be more likely to drop out of high intensity treatment (up to 15 hours per week). A total of 20–50 hours of speech and language therapy is necessary for the best recovery. The most improvement happens when 2–5 hours of therapy is provided each week over 4–5 days. Recovery is further improved when besides the therapy people practice tasks at home. Speech and language therapy is also effective if it is delivered online through video or by a family member who has been trained by a professional therapist. Recovery with therapy for aphasia is also dependent on the recency of stroke and the age of the person. Receiving therapy within a month after the stroke leads to the greatest improvements. 3 or 6 months after the stroke more therapy will be needed but symptoms can still be improved. People with aphasia who are younger than 55 years are the most likely to improve but people older than 75 years can still get better with therapy. 
People who have had stroke may have particular problems, such as dysphagia, which can cause swallowed material to pass into the lungs and cause aspiration pneumonia. The condition may improve with time, but in the interim, a nasogastric tube may be inserted, enabling liquid food to be given directly into the stomach. If swallowing is still deemed unsafe, then a percutaneous endoscopic gastrostomy (PEG) tube is passed, and this can remain in place indefinitely. Swallowing therapy has mixed results as of 2018.

Devices
Often, assistive technology such as wheelchairs, walkers and canes may be beneficial. Many mobility problems can be improved by the use of ankle foot orthoses.

Physical fitness
Stroke can also reduce people's general fitness. Reduced fitness can reduce capacity for rehabilitation as well as general health. Physical exercises as part of a rehabilitation program following stroke appear safe. Cardiorespiratory fitness training that involves walking in rehabilitation can improve speed, tolerance and independence during walking, and may improve balance. There are inadequate long-term data about the effects of exercise and training on death, dependence and disability after stroke. Future research may concentrate on the optimal exercise prescription and long-term health benefits of exercise. The effect of physical training on cognition also may be studied further. The ability to walk independently in the community, indoors or outdoors, is important following stroke. Although no negative effects have been reported, it is unclear whether outcomes improve with dedicated walking programs when compared to usual treatment.

Other therapy methods
Some current and future therapy methods include the use of virtual reality and video games for rehabilitation. These forms of rehabilitation offer potential for motivating people to perform specific therapy tasks that many other forms do not. While virtual reality and interactive video gaming are not more effective than conventional therapy for improving upper limb function, when used in conjunction with usual care these approaches may improve upper limb function and ADL function. There are inadequate data on the effect of virtual reality and interactive video gaming on gait speed, balance, participation and quality of life. Many clinics and hospitals are adopting the use of these off-the-shelf devices for exercise, social interaction, and rehabilitation because they are affordable, accessible and can be used within the clinic and home. Mirror therapy is associated with improved motor function of the upper extremity in people who have had stroke. Other non-invasive rehabilitation methods used to augment physical therapy of motor function in people recovering from stroke include neurotherapies such as transcranial magnetic stimulation and transcranial direct-current stimulation, and robotic therapies. Constraint-induced movement therapy (CIMT), mental practice, mirror therapy, interventions for sensory impairment, virtual reality and a relatively high dose of repetitive task practice may be effective in improving upper limb function. However, further primary research, specifically of CIMT, mental practice, mirror therapy and virtual reality, is needed.

Orthotics
Clinical studies confirm the importance of orthoses in stroke rehabilitation. The orthosis supports therapeutic applications and also helps to mobilize the patient at an early stage.
With the help of an orthosis, physiological standing and walking can be learned again, and late health consequences caused by a wrong gait pattern can be prevented. A treatment with an orthosis can therefore be used to support the therapy. Self-management Stroke can affect the ability to live independently and with quality. Self-management programs are a special training that educates stroke survivors about stroke and its consequences, helps them acquire skills to cope with their challenges, and helps them set and meet their own goals during their recovery process. These programs are tailored to the target audience, and led by someone trained and expert in stroke and its consequences (most commonly professionals, but also stroke survivors and peers). A 2016 review reported that these programs improve the quality of life after stroke, without negative effects. People with stroke felt more empowered, happy and satisfied with life after participating in this training. Prognosis Disability affects 75% of stroke survivors enough to decrease their ability to work. Stroke can affect people physically, mentally, emotionally, or a combination of the three. The results of stroke vary widely depending on size and location of the lesion. Physical effects Some of the physical disabilities that can result from stroke include muscle weakness, numbness, pressure sores, pneumonia, incontinence, apraxia (inability to perform learned movements), difficulties carrying out daily activities, appetite loss, speech loss, vision loss and pain. If the stroke is severe enough, or in a certain location such as parts of the brainstem, coma or death can result. Up to 10% of people following stroke develop seizures, most commonly in the week subsequent to the event; the severity of the stroke increases the likelihood of a seizure. An estimated 15% of people experience urinary incontinence for more than a year following stroke. 50% of people have a decline in sexual function (sexual dysfunction) following stroke. Emotional and mental effects Emotional and mental dysfunctions correspond to areas in the brain that have been damaged. Emotional problems following stroke can be due to direct damage to emotional centers in the brain or from frustration and difficulty adapting to new limitations. Post-stroke emotional difficulties include anxiety, panic attacks, flat affect (failure to express emotions), mania, apathy and psychosis. Other difficulties may include a decreased ability to communicate emotions through facial expression, body language and voice. Disruption in self-identity, relationships with others, and emotional well-being can lead to social consequences after stroke due to the lack of ability to communicate. Many people who experience communication impairments after stroke find it more difficult to cope with the social issues rather than physical impairments. Broader aspects of care must address the emotional impact speech impairment has on those who experience difficulties with speech after stroke. Those who experience a stroke are at risk of paralysis, which could result in a self-disturbed body image, which may also lead to other social issues. 30 to 50% of stroke survivors develop post-stroke depression, which is characterized by lethargy, irritability, sleep disturbances, lowered self-esteem and withdrawal. It is most common in those with a stroke affecting the anterior parts of the brain or the basal ganglia, particularly on the left side. 
Depression can reduce motivation and worsen outcome, but can be treated with social and family support, psychotherapy and, in severe cases, antidepressants. Psychotherapy sessions may have a small effect on improving mood and preventing depression after stroke. Antidepressant medications may be useful for treating depression after stroke but are associated with central nervous system and gastrointestinal adverse events. Emotional lability, another consequence of stroke, causes the person to switch quickly between emotional highs and lows and to express emotions inappropriately, for instance with an excess of laughing or crying with little or no provocation. While these expressions of emotion usually correspond to the person's actual emotions, a more severe form of emotional lability causes the affected person to laugh and cry pathologically, without regard to context or emotion. Some people show the opposite of what they feel, for example crying when they are happy. Emotional lability occurs in about 20% of those who have had stroke. Those with a right hemisphere stroke are more likely to have empathy problems which can make communication harder. Cognitive deficits resulting from stroke include perceptual disorders, aphasia, dementia, and problems with attention and memory. Stroke survivors may be unaware of their own disabilities, a condition called anosognosia. In a condition called hemispatial neglect, the affected person is unable to attend to anything on the side of space opposite to the damaged hemisphere. Cognitive and psychological outcome after stroke can be affected by the age at which the stroke happened, pre-stroke baseline intellectual functioning, psychiatric history and whether there is pre-existing brain pathology. Epidemiology Stroke was the second most frequent cause of death worldwide in 2011, accounting for 6.2 million deaths (~11% of the total). Approximately 17 million people had stroke in 2010 and 33 million people have previously had stroke and were still alive. Between 1990 and 2010 the incidence of stroke decreased by approximately 10% in the developed world and increased by 10% in the developing world. Overall, two-thirds of stroke occurred in those over 65 years old. South Asians are at particularly high risk of stroke, accounting for 40% of global stroke deaths. Incidence of ischemic stroke is ten times more frequent than haemorrhagic stroke. It is ranked after heart disease and before cancer. In the United States stroke is a leading cause of disability, and recently declined from the third leading to the fourth leading cause of death. Geographic disparities in stroke incidence have been observed, including the existence of a "stroke belt" in the southeastern United States, but causes of these disparities have not been explained. The risk of stroke increases exponentially from 30 years of age, and the cause varies by age. Advanced age is one of the most significant stroke risk factors. 95% of stroke occurs in people age 45 and older, and two-thirds of stroke occurs in those over the age of 65. A person's risk of dying if he or she does have stroke also increases with age. However, stroke can occur at any age, including in childhood. Family members may have a genetic tendency for stroke or share a lifestyle that contributes to stroke. Higher levels of Von Willebrand factor are more common amongst people who have had ischemic stroke for the first time. The results of this study found that the only significant genetic factor was the person's blood type. 
Having stroke in the past greatly increases one's risk of future stroke. Men are 25% more likely to develop stroke than women, yet 60% of deaths from stroke occur in women. Since women live longer, they are older on average when they have stroke and thus more often killed. Some risk factors for stroke apply only to women. Primary among these are pregnancy, childbirth, menopause, and the treatment thereof (HRT). History Episodes of stroke and familial stroke have been reported from the 2nd millennium BC onward in ancient Mesopotamia and Persia. Hippocrates (460 to 370 BC) was first to describe the phenomenon of sudden paralysis that is often associated with ischemia. Apoplexy, from the Greek word meaning "struck down with violence", first appeared in Hippocratic writings to describe this phenomenon. The word stroke was used as a synonym for apoplectic seizure as early as 1599, and is a fairly literal translation of the Greek term. The term apoplectic stroke is an archaic, nonspecific term, for a cerebrovascular accident accompanied by haemorrhage or haemorrhagic stroke. Martin Luther was described as having an apoplectic stroke that deprived him of his speech shortly before his death in 1546. In 1658, in his Apoplexia, Johann Jacob Wepfer (1620–1695) identified the cause of hemorrhagic stroke when he suggested that people who had died of apoplexy had bleeding in their brains. Wepfer also identified the main arteries supplying the brain, the vertebral and carotid arteries, and identified the cause of a type of ischemic stroke known as a cerebral infarction when he suggested that apoplexy might be caused by a blockage to those vessels. Rudolf Virchow first described the mechanism of thromboembolism as a major factor. The term cerebrovascular accident was introduced in 1927, reflecting a "growing awareness and acceptance of vascular theories and (...) recognition of the consequences of a sudden disruption in the vascular supply of the brain". Its use is now discouraged by a number of neurology textbooks, reasoning that the connotation of fortuitousness carried by the word accident insufficiently highlights the modifiability of the underlying risk factors. Cerebrovascular insult may be used interchangeably. The term brain attack was introduced for use to underline the acute nature of stroke according to the American Stroke Association, which has used the term since 1990, and is used colloquially to refer to both ischemic as well as hemorrhagic stroke. Research As of 2017, angioplasty and stents were under preliminary clinical research to determine the possible therapeutic advantages of these procedures in comparison to therapy with statins, antithrombotics, or antihypertensive drugs. Animal models indicate that administration of low-dose amphetamine facilitates behavioural recovery following ischemic stroke, when administered several days after the ischemic event. This is accompanied by reductions in volumes of tissue lost, increases in fractional anisotropy ratio on the affected side, and increases in BDNF expression, matrix metalloproteinase activity, and expression of synaptophysin.
Biology and health sciences
Illness and injury
null
626035
https://en.wikipedia.org/wiki/Bed%20load
Bed load
The term bed load or bedload describes particles in a flowing fluid (usually water) that are transported along the stream bed. Bed load is complementary to suspended load and wash load. Bed load moves by rolling, sliding, and/or saltating (hopping). Generally, bed load downstream will be smaller and more rounded than bed load upstream (a process known as downstream fining). This is due in part to attrition and abrasion, which result from the stones colliding with each other and against the river channel, thus removing the rough texture (rounding) and reducing the size of the particles. However, selective transport of sediments also plays a role in relation to downstream fining: smaller-than-average particles are more easily entrained than larger-than-average particles, since the shear stress required to entrain a grain is linearly proportional to the diameter of the grain. However, the degree of size selectivity is restricted by the hiding effect described by Parker and Klingeman (1982), wherein larger particles protrude from the bed whereas small particles are shielded and hidden by larger particles, with the result that nearly all grain sizes become entrained at nearly the same shear stress. Experimental observations suggest that a uniform free-surface flow over a cohesion-less plane bed is unable to entrain sediments below a critical value of the ratio between measures of hydrodynamic (destabilizing) and gravitational (stabilizing) forces acting on sediment particles, the so-called Shields stress θ. This quantity reads as θ = u∗² / ((s − 1) g d), where u∗ is the friction velocity, s is the relative particle density, d is an effective particle diameter which is entrained by the flow, and g is gravity. The Meyer-Peter–Müller formula for the bed load capacity under equilibrium and uniform flow conditions states that the magnitude of the bed load flux per unit width, qb, is proportional to the excess of the Shields stress over a critical value θc. Specifically, qb is a monotonically increasing nonlinear function of the excess Shields stress (θ − θc), typically expressed in the form of a power law.
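As an illustration of the two relations above, the following minimal Python sketch evaluates the Shields stress and a Meyer-Peter–Müller-type power law for the bed-load flux. The critical Shields value of 0.047 and the coefficient 8 with exponent 1.5 are the commonly quoted Meyer-Peter–Müller constants rather than values given in the text, and the grain size and friction velocity in the example are arbitrary.

```python
import math

def shields_stress(u_star, d, s=2.65, g=9.81):
    """Dimensionless Shields stress theta = u*^2 / ((s - 1) g d).

    u_star : friction (shear) velocity [m/s]
    d      : effective grain diameter [m]
    s      : relative particle density (rho_s / rho, ~2.65 for quartz)
    """
    return u_star**2 / ((s - 1.0) * g * d)

def mpm_bedload_flux(theta, d, s=2.65, g=9.81, theta_c=0.047, coeff=8.0, exponent=1.5):
    """Volumetric bed-load flux per unit width [m^2/s] from a
    Meyer-Peter & Mueller-type power law:
        Phi = coeff * (theta - theta_c)^exponent,
        q_b = Phi * sqrt((s - 1) g d^3).
    No transport is predicted below the critical Shields stress."""
    excess = max(theta - theta_c, 0.0)
    phi = coeff * excess**exponent              # dimensionless transport rate
    return phi * math.sqrt((s - 1.0) * g * d**3)

# Example: 1 mm sand, friction velocity 0.05 m/s (arbitrary illustrative values)
theta = shields_stress(u_star=0.05, d=1e-3)
print(f"Shields stress: {theta:.3f}")
print(f"Bed-load flux:  {mpm_bedload_flux(theta, d=1e-3):.2e} m^2/s")
```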
Physical sciences
Sedimentology
Earth science
627009
https://en.wikipedia.org/wiki/Potassium%20cyanide
Potassium cyanide
Potassium cyanide is a compound with the formula KCN. It is a colorless salt, similar in appearance to sugar, that is highly soluble in water. Most KCN is used in gold mining, organic synthesis, and electroplating. Smaller applications include jewellery for chemical gilding and buffing. Potassium cyanide is highly toxic, and a dose of 200 to 300 milligrams will kill nearly any human. The moist solid emits small amounts of hydrogen cyanide due to hydrolysis (reaction with water). Hydrogen cyanide is often described as having an odor resembling that of bitter almonds. The taste of potassium cyanide has been described as acrid and bitter, with a burning sensation similar to lye.

Production
KCN is produced by treating hydrogen cyanide with an aqueous solution of potassium hydroxide, followed by evaporation of the solution in a vacuum:

HCN + KOH → KCN + H2O

About 50,000 tons of potassium cyanide are produced yearly. For laboratory purposes it is easier to pass hydrogen cyanide through an alcoholic solution of a potassium base, because the crystals of potassium cyanide are not soluble in alcohol.

Historical production
Before 1900 and the invention of the Castner process, potassium cyanide was the most important source of alkali metal cyanides. In this historical process, potassium cyanide was produced by decomposing potassium ferrocyanide:

K4[Fe(CN)6] → 4 KCN + FeC2 + N2

Structure
In aqueous solution, KCN is dissociated into hydrated potassium (K+) ions and cyanide (CN−) ions. As a solid, KCN has a structure resembling that of sodium chloride, with each potassium ion surrounded by six cyanide ions, and vice versa. Despite being diatomic, and thus less symmetric than chloride, the cyanide ions rotate so rapidly that their time-averaged shape is spherical. At low temperature and high pressure, this free rotation is hindered, resulting in a less symmetric crystal structure with the cyanide ions arranged in sheets.

Applications
KCN and sodium cyanide (NaCN) are widely used in organic synthesis for the preparation of nitriles and carboxylic acids, particularly in the von Richter reaction. It also finds use for the synthesis of hydantoins, which can be useful synthetic intermediates, when reacted with a carbonyl compound such as an aldehyde or ketone in the presence of ammonium carbonate.

KCN is used as a photographic fixer in the wet plate collodion process. The KCN dissolves silver where it has not been made insoluble by the developer. This reveals and stabilizes the image, making it no longer sensitive to light. Modern wet plate photographers may prefer less toxic fixers, often opting for sodium thiosulfate, but KCN is still used. In the 19th century, cyanogen soap, a preparation containing potassium cyanide, was used by photographers to remove silver stains from their hands.

Potassium gold cyanide
In gold mining, KCN forms the water-soluble salt potassium gold cyanide (or gold potassium cyanide) and potassium hydroxide from gold metal in the presence of oxygen (usually from the surrounding air) and water:

4 Au + 8 KCN + O2 + 2 H2O → 4 K[Au(CN)2] + 4 KOH

A similar process uses NaCN to produce sodium gold cyanide, Na[Au(CN)2].

Toxicity
Potassium cyanide is a potent inhibitor of cellular respiration, acting on mitochondrial cytochrome c oxidase, hence blocking oxidative phosphorylation. Lactic acidosis then occurs as a consequence of anaerobic metabolism. Initially, acute cyanide poisoning causes a red or ruddy complexion in the victim because the tissues are not able to use the oxygen in the blood.
The effects of potassium cyanide and sodium cyanide are identical, and symptoms of poisoning typically occur within a few minutes of ingesting the substance: the person loses consciousness, and brain death eventually follows. During this period the victim may suffer convulsions. Death is caused by histotoxic hypoxia/cerebral hypoxia. The expected LD100 dose (human) for potassium cyanide is 200–300 mg while the median lethal dose LD50 is estimated at 140 mg. People who killed themselves, were killed, or killed someone else using potassium cyanide include: Viktor Meyer, 19th-century German chemist, died by suicide in 1897 after taking cyanide Gustav Wied, Danish novelist, poet, and playwright, in 1914 Pritilata Waddedar, an Indian revolutionary nationalist, took cyanide in 1932 to avoid capture by Indian Imperial Police, British India Badal Gupta, a revolutionary from Bengal, who launched an attack on the Writers' Building in Kolkata, consumed cyanide in 1930 immediately after the attack. Wallace Carothers, polymer chemist who died by suicide in 1937 after battling depression for years Senior figures in Nazi Germany, such as Erwin Rommel, Hitler's longtime companion Eva Braun, Joseph Goebbels, Heinrich Himmler, and Hermann Göring Alan Turing, a computer scientist who died of cyanide poisoning in 1954 Ronald Clark O'Bryan, a Texas optician who killed his son by lacing a pixy stick with potassium cyanide in 1974 Peoples Temple, the 1978 cult suicide in (Jonestown), Guyana Members of the LTTE involved in the assassination of Indian prime minister Rajiv Gandhi in 1991 Ramon Sampedro, Spanish tetraplegic and activist whose assisted suicide in 1998 provoked a national debate about euthanasia, and was the subject of the Oscar-winning film The Sea Inside Jason Altom, a promising graduate student in the lab of Nobel Prize–winning chemist EJ Corey at Harvard, died after drinking potassium cyanide in 1998 Slobodan Praljak, a wartime general in Republic of Croatia, died by suicide by drinking from a vial containing potassium cyanide during the reading of his appeal judgment in The Hague on International Criminal Tribunal for the former Yugoslavia (ICTY) on 29 November 2017. It is used by professional entomologists as a killing agent in collecting jars, as insects succumb within seconds to the HCN fumes it emits, thereby minimizing damage to even highly fragile specimens. KCN can be detoxified most efficiently with hydrogen peroxide or with a solution of sodium hypochlorite (NaOCl). Such solutions should be kept alkaline whenever possible so as to eliminate the possibility of generation of hydrogen cyanide: KCN + H2O2 → KOCN + H2O KCN + NaOCl → KOCN + NaCl
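As a worked illustration of the cyanidation equation quoted above (4 Au + 8 KCN + O2 + 2 H2O → 4 K[Au(CN)2] + 4 KOH), the following short Python sketch estimates the stoichiometric minimum of KCN needed to dissolve a given mass of gold. The molar masses are standard values; real leaching operations run with an excess of cyanide, so this is illustrative arithmetic only, not a process description.

```python
# Stoichiometry of the cyanidation reaction quoted above:
#   4 Au + 8 KCN + O2 + 2 H2O -> 4 K[Au(CN)2] + 4 KOH
# i.e. 2 mol of KCN are consumed per mol of Au dissolved.

M_AU  = 196.97   # g/mol, gold
M_KCN = 65.12    # g/mol, potassium cyanide (39.10 + 12.01 + 14.01)

def kcn_required(gold_mass_g: float) -> float:
    """Minimum (stoichiometric) mass of KCN in grams needed to dissolve
    a given mass of gold, ignoring excess reagent and side reactions."""
    mol_au = gold_mass_g / M_AU
    mol_kcn = 2.0 * mol_au          # 8 KCN : 4 Au  =  2 : 1
    return mol_kcn * M_KCN

print(f"KCN per kg of gold: {kcn_required(1000.0):.0f} g")   # roughly 660 g
```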
Physical sciences
Cyanide salts
Chemistry
627188
https://en.wikipedia.org/wiki/Tragopan
Tragopan
Tragopan is a bird genus in the pheasant family Phasianidae. Members of the genus are commonly called "horned pheasants" because males have two brightly colored, fleshy horns on their head that can be erected during courtship displays. The tragopans' habit of nesting in trees is unique among phasianids.

Taxonomy
The genus Tragopan was introduced by the French naturalist Georges Cuvier in 1829 for the satyr tragopan. The name tragopan is that of a mythical horned, purple-headed bird mentioned by the Roman authors Pliny and Pomponius Mela. The genus contains five species.
Biology and health sciences
Galliformes
Animals
627476
https://en.wikipedia.org/wiki/Udder
Udder
An udder is an organ formed of two or four mammary glands on the females of dairy animals and ruminants such as cattle, goats, and sheep. An udder is equivalent to the breast in primates, elephantine pachyderms and other mammals. The udder is a single mass hanging beneath the animal, consisting of pairs of mammary glands with protruding teats. In cattle, camels and deer, there are normally two pairs; in sheep and goats, there is one pair; and in some animals, there are many pairs. In animals with udders, the mammary glands develop on the milk line near the groin. Mammary glands that develop on the chest (such as in primates and elephants) are generally referred to as breasts.

Udder care and hygiene in cows is important in milking, aiding uninterrupted and untainted milk production, and preventing mastitis. Products exist to soothe the chapped skin of the udder. This helps prevent bacterial infection, and reduces irritation during milking by the cups, so the cow is less likely to kick the cups off. It has been demonstrated that incorporating nutritional supplements into the diet, including vitamin E, is an additional method of improving udder health and reducing infection.

Etymology
Udder has been attested in Middle English as or (also as , ), and in Old English as . It evolved from the Proto-Germanic reconstructed root *eudrą or *ūdrą, which in turn descended from Proto-Indo-European *h₁ówHdʰr̥ (“udder”). It is cognate with Saterland Frisian (“udder”), Dutch (“udder”), German (“udder”), Swedish (“udder”), Icelandic (“udder”), Vedic Sanskrit ऊधर् (ū́dhar), Ancient Greek (), and Latin .

As food
The udder, or elder in Ireland, Scotland and northern England, of a slaughtered cow was in times past prepared and consumed. In other countries, like Italy, parts of Pakistan, and some South American countries, cow udder is still consumed in dishes like the traditional and ubres asada.
Biology and health sciences
Integumentary system
Biology
627537
https://en.wikipedia.org/wiki/Cladonia%20rangiferina
Cladonia rangiferina
Cladonia rangiferina, also known as reindeer cup lichen, reindeer lichen (cf. Sw. renlav) or grey reindeer lichen, is a light-coloured fruticose, cup lichen species in the family Cladoniaceae. It grows in both hot and cold climates in well-drained, open environments. Found primarily in areas of alpine tundra, it is extremely cold-hardy. Other common names include reindeer moss, deer moss, and caribou moss, but these names can be misleading since it is, though somewhat moss-like in appearance, not a moss. As the common names suggest, reindeer lichen is an important food for reindeer (caribou), and has economic importance as a result. Synonyms include Cladina rangiferina and Lichen rangiferinus. Taxonomy Cladonia rangiferina was first scientifically described by Carl Linnaeus in his 1753 Species Plantarum; as was the custom at the time, he classified it in the eponymous genus, as Lichen rangiferinus. Friedrich Heinrich Wiggers transferred it to the genus Cladonia in 1780. Description Thalli are fruticose, and extensively branched, with each branch usually dividing into three or four (sometimes two); the thicker branches are typically in diameter. The colour is greyish, whitish or brownish grey. C. rangiferina forms extensive mats up to tall. The branching is at a smaller angle than that of Cladonia portentosa. It lacks a well-defined cortex (a protective layer covering the thallus, analogous to the epidermis in plants), but rather, a loose layer of hyphae cover the photobionts. The photobiont associated with the reindeer lichen is Trebouxia irregularis. Reindeer lichen, like many lichens, is slow growing ( per year) and may take decades to return once overgrazed, burned, trampled, or otherwise damaged. A similar-looking but distinct species, also known by the common name "reindeer lichen", is Cladonia portentosa. Chemistry A variety of bioactive compounds have been isolated and identified from C. rangiferina, including abietane, labdane, isopimarane, the abietane diterpenoids hanagokenols A and B, obtuanhydride, sugiol, 5,6-dehydrosugiol, montbretol, cis-communic acid, imbricatolic acid, 15-acetylimbricatoloic acid, junicedric acid, 7α-hydroxysandaracopimaric acid, β-resorylic acid, atronol, barbatic acid, homosekikaic acid, didymic acid and condidymic acid. Some of these compounds have mild inhibitory activities against methicillin-resistant Staphylococcus aureus and vancomycin-resistant Enterococci. Exposure to UV-B radiation induces the accumulation of usnic acid and melanic compounds. Usnic acid is thought to play a role in protecting the photosymbiont by absorbing excess UV-B. Resynthesis Resynthesis experiments have been conducted to study the early stages of lichen formation in Cladonia rangiferina. These experiments involve isolating and culturing the fungal and algal partners separately, then reuniting them under laboratory conditions to observe the process of lichenization. Through these studies, researchers have identified several key stages in the early development of the lichen thallus. The first stage, known as the pre-contact stage, occurs around one day post co-inoculation. During this stage, no apparent fungal or algal growth is observed, and hyphal tips are not growing towards algal cells. By the eighth day post co-inoculation, the contact stage is reached. This stage is characterised by rich branching of fungal hyphae with short internodes. Hyphal tips grow towards algal cells, and some form swollen tips called appressoria upon contact. 
Hyphae can be observed growing around single algal cells or clusters, and mucilage is frequently present. The growth together stage is typically observed around 21 days post co-inoculation. At this point, coordinated growth between the fungus and alga becomes evident. Algal cells are integrated within a hyphal matrix, with hyphae emerging through algal colonies and forming networks within and between them. Quantitative measurements during these stages reveal several patterns. In compatible interactions, researchers observe significantly shorter hyphal internode lengths and more lateral branches compared to incompatible ones. The frequency of appressoria formation increases over time in compatible interactions. There is no significant reduction in algal cell diameter in compatible interactions, unlike in some incompatible pairings. These experiments highlight the specificity of the Cladonia rangiferina – Asterochloris glomerata/irregularis symbiosis. When paired with incompatible algae such as Coccomyxa peltigerae or Chloroidium ellipsoideum, C. rangiferina shows reduced growth and fewer symbiosis-specific morphological changes. The resynthesis process in C. rangiferina appears to be slower compared to some other lichen species. Researchers have not observed a well-organised prethallus stage even after three months of co-cultivation. This may be due to specific environmental requirements or growth conditions needed for complete thallus formation in this species. These studies provide insights into the recognition mechanisms and early developmental processes involved in lichen formation. The observations support the concept of controlled parasitism in lichen symbiosis, where the fungal partner exhibits parasitic behavior, but in a controlled manner that allows for mutual benefit in the long term. Habitat Cladonia rangiferina often dominates the ground in boreal pine forests and open, low-alpine sites in a wide range of habitats, from humid, open forests, rocks and heaths. It grows on humus, or on soil over rock. It is mainly found in the taiga and the tundra. A specific biome in which this lichen is represented is the boreal forests of Canada. Ecology In a Finnish study of the growth rate of Cladonia rangiferina, it was found that the lichen grows from 3.9 to 4.4 mm per year, achieving the fastest growth rate in younger (less than 60 years), shadowy forests, and the slowest growth in an older (more than 180 years), thinned forest. Cladonia rangiferina is a known host to the lichenicolous fungus species Lichenopeltella rangiferinae, which is named after C. rangiferina, Lichenoconium pyxidatae and Lichenopeltella uncialicola Conservation In certain parts of its range, this lichen is an endangered species. For example, in the British Duchy of Cornwall it is protected under the UK Biodiversity Action Plan. Uses The reindeer lichen is edible, but crunchy. It can be soaked with wood ashes to remove its bitterness, then added to milk or other dishes. It is a source of vitamin D. This lichen can be used in the making of aquavit, and is sometimes used as decoration in glass windows. The lichen is used as a traditional remedy for removal of kidney stones by the Monpa in the alpine regions of the West Kameng district of Eastern Himalaya. The Inland Dena'ina used reindeer lichen for food by crushing the dry lichen and then boiling it or soaking it in hot water until it becomes soft. They eat it plain or, preferably, mixed with berries, fish eggs, or lard. 
The Inland Dena'ina also boil reindeer lichen and drink the juice as a medicine for diarrhea. Acids present in lichens mean their consumption may cause an upset stomach, especially if not well cooked. According to a study published in 2017, reindeer lichen was able to grow on burnt soil as soon as two years after a forest fire in Northern Sweden, indicating that artificial replanting of lichen could be a useful strategy for the restoration of reindeer pastures.
Biology and health sciences
Lichens
Plants
2271221
https://en.wikipedia.org/wiki/Carbonate%20mineral
Carbonate mineral
Carbonate minerals are those minerals containing the carbonate ion, . Carbonate divisions Anhydrous carbonates Calcite group: trigonal Calcite CaCO3 Gaspéite (Ni,Mg,Fe2+)CO3 Magnesite MgCO3 Otavite CdCO3 Rhodochrosite MnCO3 Siderite FeCO3 Smithsonite ZnCO3 Spherocobaltite CoCO3 Aragonite group: orthorhombic Aragonite CaCO3 Cerussite PbCO3 Strontianite SrCO3 Witherite BaCO3 Rutherfordine UO2CO3 Natrite Na2CO3 Anhydrous carbonates with compound formulas Dolomite group: trigonal Ankerite CaFe(CO3)2 Dolomite CaMg(CO3)2 Huntite Mg3Ca(CO3)4 Minrecordite CaZn(CO3)2 Barytocalcite BaCa(CO3)2 Carbonates with hydroxyl or halogen Carbonate with hydroxide: monoclinic Azurite Cu3(CO3)2(OH)2 Hydrocerussite Pb3(CO3)2(OH)2 Malachite Cu2CO3(OH)2 Rosasite (Cu,Zn)2CO3(OH)2 Phosgenite Pb2(CO3)Cl2 Hydrozincite Zn5(CO3)2(OH)6 Aurichalcite (Zn,Cu)5(CO3)2(OH)6 Hydrated carbonates Hydromagnesite Mg5(CO3)4(OH)2.4H2O Ikaite CaCO3·6(H2O) Lansfordite MgCO3·5(H2O) Monohydrocalcite CaCO3·H2O Natron Na2CO3·10(H2O) Zellerite Ca(UO2)(CO3)2·5(H2O) The carbonate class in both the Dana and the Strunz classification systems include the nitrates. Nickel–Strunz classification -05- carbonates IMA-CNMNC proposes a new hierarchical scheme (Mills et al., 2009). This list uses the classification of Nickel–Strunz (mindat.org, 10 ed, pending publication). Abbreviations: "*" – discredited (IMA/CNMNC status). "?" – questionable/doubtful (IMA/CNMNC status). "REE" – Rare-earth element (Sc, Y, La, Ce, Pr, Nd, Pm, Sm, Eu, Gd, Tb, Dy, Ho, Er, Tm, Yb, Lu) "PGE" – Platinum-group element (Ru, Rh, Pd, Os, Ir, Pt) 03.C Aluminofluorides, 06 Borates, 08 Vanadates (04.H V[5,6] Vanadates), 09 Silicates: Neso: insular (from Greek νησος nēsos, island) Soro: grouping (from Greek σωροῦ sōros, heap, mound (especially of corn)) Cyclo: ring Ino: chain (from Greek ις [genitive: ινος inos], fibre) Phyllo: sheet (from Greek φύλλον phyllon, leaf) Tekto: three-dimensional framework Nickel–Strunz code scheme: NN.XY.##x NN: Nickel–Strunz mineral class number X: Nickel–Strunz mineral division letter Y: Nickel–Strunz mineral family letter ##x: Nickel–Strunz mineral/group number, x add-on letter Class: carbonates 05.A Carbonates without additional anions, without H2O 05.AA Alkali carbonates: 05 Zabuyelite; 10 Gregoryite, 10 Natrite; 15 Nahcolite, 20 Kalicinite, 25 Teschemacherite, 30 Wegscheiderite 05.AB Alkali-earth (and other M2+) carbonates: 05 Calcite, 05 Gaspeite, 05 Magnesite, 05 Rhodochrosite, 05 Otavite, 05 Spherocobaltite, 05 Siderite, 05 Smithsonite; 10 Ankerite, 10 Dolomite, 10 Kutnohorite, 10 Minrecordite; 15 Cerussite, 15 Aragonite, 15 Strontianite, 15 Witherite; 20 Vaterite, 25 Huntite, 30 Norsethite, 35 Alstonite; 40 Olekminskite, 40 Paralstonite; 45 Barytocalcite, 50 Carbocernaite, 55 Benstonite, 60 Juangodoyite 05.AC Alkali and alkali-earth carbonates: 05 Eitelite, 10 Nyerereite, 10 Natrofairchildite, 10 Zemkorite; 15 Butschliite, 20 Fairchildite, 25 Shortite; 30 Sanromanite, 30 Burbankite, 30 Calcioburbankite, 30 Khanneshite 05.AD With rare-earth elements (REE): 05 Sahamalite-(Ce); 15 Rémondite-(Ce), 15 Petersenite-(Ce), 15 Rémondite-(La); 20 Paratooite-(La) 05.B Carbonates with additional anions, without H2O 05.BA With Cu, Co, Ni, Zn, Mg, Mn: 05 Azurite, 10 Chukanovite, 10 Malachite, 10 Georgeite, 10 Pokrovskite, 10 Nullaginite, 10 Glaukosphaerite, 10 Mcguinnessite, 10 Kolwezite, 10 Rosasite, 10 Zincrosasite; 15 Aurichalcite, 15 Hydrozincite; 20 Holdawayite, 25 Defernite; 30 Loseyite, 30 Sclarite 05.BB With alkalies, etc.: 05 Barentsite, 10 
Dawsonite, 15 Tunisite, 20 Sabinaite 05.BC With alkali-earth cations: 05 Brenkite, 10 Rouvilleite, 15 Podlesnoite 05.BD With rare-earth elements (REE): 05 Cordylite-(Ce), 05 Lukechangite-(Ce); 10 Kukharenkoite-(La), 10 Kukharenkoite-(Ce), 10 Zhonghuacerite-(Ce); 15 Cebaite-(Nd), 15 Cebaite-(Ce); 20a Bastnasite-(Ce), 20a Bastnasite-(La), 20a Bastnasite-(Y), 20a Hydroxylbastnasite-(Ce), 20a Hydroxylbastnasite-(La), 20a Hydroxylbastnasite-(Nd), 20a Thorbastnasite, 20b Parisite-(Nd), 20b Parisite-(Ce), 20c Synchysite-(Ce), 20c Synchysite-(Nd), 20c Synchysite-(Y), 20d Rontgenite-(Ce); 25 Horvathite-(Y), 30 Qaqarssukite-(Ce), 35 Huanghoite-(Ce) 05.BE With Pb, Bi: 05 Shannonite, 10 Hydrocerussite, 15 Plumbonacrite, 20 Phosgenite, 25 Bismutite, 30 Kettnerite, 35 Beyerite 05.BF With (Cl), SO4, PO4, TeO3: 05 Northupite, 05 Ferrotychite, 05 Manganotychite, 05 Tychite; 10 Bonshtedtite, 10 Crawfordite, 10 Bradleyitev, 10 Sidorenkite, 15 Daqingshanite-(Ce), 20 Reederite-(Y), 25 Mineevite-(Y), 30 Brianyoungite, 35 Philolithite; 40 Macphersonitev, 40 Susannite, 40 Leadhillite 05.C Carbonates without additional anions, with H2O 05.CA With medium-sized cations: 05 Nesquehonite, 10 Lansfordite, 15 Barringtonite, 20 Hellyerite 05.CB With large cations (alkali and alkali-earth carbonates): 05 Thermonatrite, 10 Natron, 15 Trona, 20 Monohydrocalcite, 25 Ikaite, 30 Pirssonite, 35 Gaylussite, 40 Chalconatronite, 45 Baylissite, 50 Tuliokite 05.CC With rare-earth elements (REE): 05 Donnayite-(Y), 05 Mckelveyite-(Nd)*, 05 Mckelveyite-(Y), 05 Weloganite; 10 Tengerite-(Y), 15 Lokkaite-(Y); 20 Shomiokite-(Y), 20 IMA2008-069; 25 Calkinsite-(Ce), 25 Lanthanite-(Ce), 25 Lanthanite-(La), 25 Lanthanite-(Nd); 30 Adamsite-(Y), 35 Decrespignyite-(Y), 40 Galgenbergite-(Ce), 45 Ewaldite, 50 Kimuraite-(Y) 05.D Carbonates with additional anions, with H2O 05.DA With medium-sized cations: 05 Dypingite, 05 Giorgiosite, 05 Hydromagnesite, 05 Widgiemoolthalite; 10 Artinite, 10 Chlorartinite; 15 Otwayite, 20 Kambaldaite, 25 Callaghanite, 30 Claraite; 35 Hydroscarbroite, 35 Scarbroite; 40 Charmarite-3T, 40 Charmarite-2H, 40 Caresite, 40 Quintinite-2H, 40 Quintinite-3T; 45 Brugnatellite, 45 Barbertonite, 45 Chlormagaluminite, 45 Zaccagnaite, 45 Manasseite, 45 Sjogrenite; 50 Desautelsite, 50 Comblainite, 50 Hydrotalcite, 50 Pyroaurite, 50 Reevesite, 50 Stichtite, 50 Takovite; 55 Coalingite, 60 Karchevskyite, 65 Indigirite, 70 Zaratite 05.DB With large and medium-sized cations: 05 Alumohydrocalcite, 05 Para-alumohydrocalcite, 05 Nasledovite; 10 Dresserite, 10 Dundasite, 10 Strontiodresserite, 10 Petterdite, 10 Kochsandorite; 15 Hydrodresserite, 20 Schuilingite-(Nd), 25 Sergeevite, 30 Szymanskiite, 35 Montroyalite 05.DC With large cations: 05 Ancylite-(Ce), 05 Ancylite-(La), 05 Gysinite-(Nd), 05 Calcioancylite-(Ce), 05 Calcioancylite-(Nd), 05 Kozoite-(La), 05 Kozoite-(Nd); 10 Kamphaugite-(Y), 15 Sheldrickite, 20 Thomasclarkite-(Y), 25 Peterbaylissite, 30 Clearcreekite, 35 Niveolanite 05.E Uranyl carbonates 05.EA UO2:CO3 > 1:1: 10 Urancalcarite, 15 Wyartite, 20 Oswaldpeetersite, 25 Roubaultite, 30 Kamotoite-(Y), 35 Sharpite 05.EB UO2:CO3 = 1:1: 05 Rutherfordine, 10 Blatonite, 15 Joliotite, 20 Bijvoetite-(Y) 05.EC UO2:CO3 < 1:1 - 1:2: 05 Fontanite; 10 Metazellerite, 10 Zellerite 05.ED UO2:CO3 = 1:3: 05 Bayleyite, 10 Swartzite, 15 Albrechtschraufite, 20 Liebigite, 25 Rabbittite, 30 Andersonite, 35 Grimselite, 40 Widenmannite, 45 Znucalite, 50 Cejkaite 05.EE UO2:CO3 = 1:4: 05 Voglite, 10 Shabaite-(Nd) 05.EF UO2:CO3 = 1:5: 05 Astrocyanite-(Ce) 
05.EG With SO4 or SiO4: 05 Schrockingerite, 10 Lepersonnite-(Gd)
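The Nickel–Strunz code scheme described above (NN.XY.##x) can also be read mechanically. The following minimal Python sketch splits such a code into its class number, division letter, family letter, mineral/group number and optional add-on letter. The example codes are assembled from the carbonate listing above, and treating the add-on letter as optional is only an interpretation of the scheme as stated, not an official parsing rule.

```python
import re

# Parse Nickel-Strunz codes of the form NN.XY.##x described above:
#   NN  - mineral class number
#   X   - division letter, Y - family letter
#   ##x - mineral/group number with an optional add-on letter
PATTERN = re.compile(
    r"^(?P<cls>\d{2})\.(?P<division>[A-Z])(?P<family>[A-Z])\.(?P<number>\d{2})(?P<addon>[a-z]?)$"
)

def parse_strunz(code: str) -> dict:
    """Return the components of a Nickel-Strunz code as a dictionary."""
    m = PATTERN.match(code.strip())
    if not m:
        raise ValueError(f"not a NN.XY.##x code: {code!r}")
    return m.groupdict()

# Example codes assembled from the carbonate listing above
for code in ("05.AB.05", "05.BD.20a", "05.ED.30"):
    print(code, "->", parse_strunz(code))
```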
Physical sciences
Minerals
Earth science
2272102
https://en.wikipedia.org/wiki/Protoplanetary%20nebula
Protoplanetary nebula
A protoplanetary nebula or preplanetary nebula (PPN, plural PPNe) is an astronomical object which is at the short-lived episode during a star's rapid evolution between the late asymptotic giant branch (LAGB) phase and the subsequent planetary nebula (PN) phase. A PPN emits strongly in infrared radiation, and is a kind of reflection nebula. It is the second-from-the-last high-luminosity evolution phase in the life cycle of intermediate-mass stars (1–8 ). Naming The name protoplanetary nebula is an unfortunate choice due to the possibility of confusion with the same term being sometimes employed when discussing the unrelated concept of protoplanetary disks. The name protoplanetary nebula is a consequence of the older term planetary nebula, which was chosen due to early astronomers looking through telescopes and finding a similarity in appearance of planetary nebula to the gas giants such as Neptune and Uranus. To avoid any possible confusion, suggested employing a new term preplanetary nebula which does not overlap with any other disciplines of astronomy. They are often referred to as post-AGB stars, although that category also includes stars that will never ionize their ejected matter. Evolution Beginning During the late asymptotic giant branch (LAGB) phase, when mass loss reduces the hydrogen envelope's mass to around 10−2  for a core mass of 0.60 , a star will begin to evolve towards the blue side of the Hertzsprung–Russell diagram. When the hydrogen envelope has been further reduced to around 10−3 , the envelope will have been so disrupted that it is believed further significant mass loss is not possible. At this point, the effective temperature of the star, T*, will be around 5,000 K and it is defined to be the end of the LAGB and the beginning of the PPN. Protoplanetary nebula phase During the ensuing protoplanetary nebula phase, the central star's effective temperature will continue rising as a result of the envelope's mass loss as a consequence of the hydrogen shell's burning. During this phase, the central star is still too cool to ionize the slow-moving circumstellar shell ejected during the preceding AGB phase. However, the star does appear to drive high-velocity, collimated winds which shape and shock this shell, and almost certainly entrain slow-moving AGB ejecta to produce a fast molecular wind. Observations and high-resolution imaging studies from 1998 to 2001, demonstrate that the rapidly evolving PPN phase ultimately shapes the morphology of the subsequent PN. At a point during or soon after the AGB envelope detachment, the envelope shape changes from roughly spherically symmetric to axially symmetric. The resultant morphologies are bipolar, knotty jets and Herbig–Haro-like "bow shocks". These shapes appear even in relatively "young" PPNe. End The PPN phase continues until the central star reaches around 30,000 K and it is hot enough (producing enough ultraviolet radiation) to ionize the circumstellar nebula (ejected gases) and it becomes a kind of emission nebula called a Planetary Nebula. This transition must take place in less than around 10,000 years or else the density of the circumstellar envelope will fall below the PN formulation density threshold of around 100 per cm3 and no PN will result, such a case is sometimes referred to as a 'lazy planetary nebula'. Recent conjectures Bujarrabal et al. (2001) found that the "interacting stellar winds" model of Kwok et al. 
(1978) of radiatively-driven winds is insufficient to account for their CO observations of PPN fast winds which imply high momentum and energy inconsistent with that model. Complementarily, theorists (Soker & Livio 1994; Reyes-Ruiz & Lopez 1999; Soker & Rappaport 2000; Blackman, Frank & Welch 2001) investigated whether accretion disk scenarios, similar to models used to explain jets from active galactic nuclei and young stars, could account for both the point symmetry and the high degree of collimation seen in many PPN jets. In such models applied to the PPN context, the accretion disk forms through binary interactions. Magneto-centrifugal launching from the disk surface is then a way to convert gravitational energy into the kinetic energy of a fast wind in these systems. If the accretion-disk jet paradigm is correct and magneto-hydrodynamics (MHD) processes mediate the energetics and collimation of PPN outflows, then they will also determine physics of the shocks in these flows, and this can be confirmed with high-resolution pictures of the emission regions that go with the shocks.
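To illustrate the density argument above (a planetary nebula fails to form if the envelope dilutes below roughly 100 particles per cm3 before the star is hot enough to ionize it), the following Python sketch estimates the mean particle density of a detached envelope expanding into a sphere of radius v·t. The shell mass (0.3 solar masses), expansion velocity (15 km/s) and mean molecular mass (1.3 hydrogen masses) are assumed illustrative values, not figures from the text, so the timescale it prints is only an order-of-magnitude check on the ~10,000-year limit quoted above.

```python
import math

M_SUN = 1.989e30   # kg
M_H   = 1.673e-27  # kg, mass of a hydrogen atom
YEAR  = 3.156e7    # s

def mean_density(t_yr, m_shell=0.3 * M_SUN, v=15e3, mu=1.3):
    """Mean particle number density [cm^-3] of ejecta of mass m_shell [kg]
    spread through a sphere of radius v * t (v in m/s, t in years).
    All parameter defaults are assumed, illustrative values."""
    r = v * t_yr * YEAR                      # current outer radius [m]
    volume = 4.0 / 3.0 * math.pi * r**3      # [m^3]
    n_particles = m_shell / (mu * M_H)       # total particle count
    return n_particles / volume * 1e-6       # convert m^-3 to cm^-3

# Density falls below ~100 per cm^3 on a timescale of order 10^4 years
for t in (1e3, 5e3, 1e4, 2e4):
    print(f"t = {t:7.0f} yr   n = {mean_density(t):10.0f} per cm^3")
```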
Physical sciences
Stellar astronomy
Astronomy
2273006
https://en.wikipedia.org/wiki/Columbian%20mammoth
Columbian mammoth
The Columbian mammoth (Mammuthus columbi) is an extinct species of mammoth that inhabited North America from southern Canada to Costa Rica during the Pleistocene epoch. The Columbian mammoth descended from Eurasian steppe mammoths that colonised North America during the Early Pleistocene around 1.5–1.3 million years ago, and later experienced hybridisation with the woolly mammoth lineage. The Columbian mammoth was among the last mammoth species, and the pygmy mammoths evolved from them on the Channel Islands of California. The closest extant relative of the Columbian and other mammoths is the Asian elephant. Reaching at the shoulders and in weight, the Columbian mammoth was one of the largest species of mammoth, larger than the woolly mammoth and the African bush elephant. It had long, curved tusks and four molars at a time, which were replaced six times during the lifetime of an individual. It most likely used its tusks and trunk like modern elephants—for manipulating objects, fighting, and foraging. Bones, hair, dung, and stomach contents have been discovered, but no preserved carcasses are known. The Columbian mammoth preferred open areas, such as parkland landscapes, and fed on sedges, grasses, and other plants. It did not live in the Arctic regions of Canada, which were instead inhabited by woolly mammoths. The ranges of the two species may have overlapped, and genetic evidence suggests that they interbred. Several sites contain the skeletons of multiple Columbian mammoths, either because they died in incidents such as a drought, or because these locations were natural traps in which individuals accumulated over time. For a few thousand years prior to their extinction, Columbian mammoths coexisted in North America with Paleoindians – the first humans to inhabit the Americas – who hunted them for food, used their bones for making tools, and possibly depicted them in ancient art. Columbian mammoth remains have been found in association with Clovis culture artifacts. The Clovis peoples are suggested to have been specialised mammoth hunters, though they possibly also scavenged their remains. The last Columbian mammoths are dated to about ~12,000 years ago, with the species becoming extinct as part of the end-Pleistocene extinction event, simultaneously with most other large (megafaunal) mammals present in the Americas. It is one of the last recorded North American megafauna to have gone extinct. The extinction of the Columbian mammoth and other American megafauna was most likely a result of habitat loss caused by climate change, hunting by humans, or a combination of both. Taxonomy Around 1725, enslaved Africans digging in the vicinity of the Stono River in South Carolina unearthed 3-4 molar teeth now known to have belonged to Columbian mammoths, which were subsequently examined by the British naturalist Mark Catesby, who visited the site, and published his account of the visit in 1743. While the slave owners were puzzled by the objects and suggested that they originated from the great flood described in the Bible, Catesby noted that the slaves unanimously agreed that the objects were in fact the teeth of elephants, similar to those of African elephants that they were familiar with from their homeland, to which Catesby concurred, marking the first technical identification of any fossil animal in North America. A similar observation was made in 1782 after enslaved Africans had excavated mammoth bones and teeth from a salt marsh in Virginia. 
These remains were subsequently sent on by US army commander Arthur Campbell to future US president Thomas Jefferson. Campbell noted in a letter that several Africans had seen one of the teeth, and “All … pronounced it an elephant.” Catesby's account was later noted by the French paleontologist Georges Cuvier around the beginning of the 19th century, with Cuvier personally examining the teeth from Stono, which he used to support his theory of catastrophism. The Columbian mammoth was first scientifically described in 1857 by the Scottish naturalist Hugh Falconer, who named the species Elephas columbi after the explorer Christopher Columbus. The animal was brought to Falconer's attention in 1846 by the Scottish geologist Charles Lyell, who sent him molar fragments found during the 1838 excavation of the Brunswick–Altamaha Canal in Georgia, in the southeastern United States. At the time, similar fossils from across North America were attributed to woolly mammoths (then Elephas primigenius). Falconer found that his specimens were distinct, confirming his conclusion by examining their internal structure and studying additional molars from Mexico. Although scientists William Phipps Blake and Richard Owen believed that E. texianus was more appropriate for the species, Falconer rejected the name; he also suggested that E. imperator and E. jacksoni, two other American elephants described from molars, were based on remains too fragmentary to classify properly. More complete material that may be from the same quarry as Falconer's fragmentary holotype molar (which is cataloged as specimen BMNH 40769 at the British Museum of Natural History) was reported in 2012, and could help shed more light on that specimen, since doubts about its adequacy as a holotype have been raised. In the early 20th century, the taxonomy of extinct elephants became increasingly complicated. In 1942, the American paleontologist Henry F. Osborn's posthumous monograph on the Proboscidea was published, wherein he used various generic and subgeneric names that had previously been proposed for extinct elephant species, such as Archidiskodon, Metarchidiskodon, Parelephas, and Mammonteus. Osborn also retained names for many regional and intermediate subspecies or "varieties", and created recombinations such as Parelephas columbi felicis and Archidiskodon imperator maibeni. The taxonomic situation was simplified by various researchers from the 1970s onwards; all species of mammoth were retained in the genus Mammuthus, and many proposed differences between species were instead interpreted as intraspecific variation. In 2003, the American paleontologist Larry Agenbroad reviewed opinions about North American mammoth taxonomy, and concluded that several species had been declared junior synonyms, and that M. columbi (the Columbian mammoth) and M. exilis (the pygmy mammoth) were the only species of mammoth endemic to the Americas (as other species lived both there and in Eurasia). The idea that species such as M. imperator (the imperial mammoth) and M. jeffersoni (Jefferson's mammoth) were either more primitive or advanced stages in Columbian mammoth evolution was largely dismissed, and they were regarded as synonyms. In spite of these conclusions, Agenbroad cautioned that American mammoth taxonomy is not yet fully resolved. Evolution The earliest known members of Proboscidea, the clade that contains the elephants, existed about 55 million years ago around the Tethys Sea area. 
The closest living relatives of the Proboscidea are the sirenians (dugongs and manatees) and the hyraxes (an order of small, herbivorous mammals). The family Elephantidae existed six million years ago in Africa, and includes the living elephants and the mammoths. Among many now extinct clades, the mastodon (Mammut) is only a distant relative, and part of the distinct family Mammutidae, which diverged 25 million years before the mammoths evolved. The Asian elephant (Elephas maximus) is the closest extant relative of the mammoths. The following cladogram shows the placement of the Columbian mammoth among other elephantids, based on a 2018 genetic study: Since many remains of each species of mammoth are known from several localities, reconstructing the evolutionary history of the genus is possible through morphological studies. Mammoth species can be identified from the number of enamel ridges (or lamellar plates) on their molars; primitive species had few ridges, and the number increased gradually as new species evolved to feed on more abrasive food items. The crowns of the teeth became taller in height and the skulls became taller to accommodate this. At the same time, the skulls became shorter from front to back to reduce the weight of the head. The short, tall skulls of woolly and Columbian mammoths are the culmination of this process. The first known members of the genus Mammuthus are the African species M. subplanifrons from the Pliocene, and M. africanavus from the Pleistocene. The former is thought to be the ancestor of later forms. Mammoths entered Europe around 3 million years ago. The earliest European mammoth has been named M. rumanus; it spread across Europe and China. Only its molars are known, which show that it had 8–10 enamel ridges. A population evolved 12–14 ridges, splitting off from and replacing the earlier type, becoming M. meridionalis about 2.0–1.7 million years ago. In turn, this species was replaced by the steppe mammoth (M. trogontherii) with 18–20 ridges, which evolved in eastern Asia around 2.0–1.5 million years ago. The Columbian mammoth evolved from a population of M. trogontherii that had crossed the Bering Strait and entered North America about 1.5-1.3 million years ago; it retained a similar number of molar ridges. Mammoths derived from M. trogontherii evolved molars with 26 ridges 400,000 years ago in Siberia and became the woolly mammoth (M. primigenius). Woolly mammoths entered North America about 100,000 years ago. A population of mammoths derived from Columbian mammoths that lived between 80,000 and 13,000 years ago on the Channel Islands of California, away from the mainland, evolved to be less than half the size of the mainland Columbian mammoths. They are, therefore, considered to be the distinct species M. exilis, the pygmy mammoth (or a subspecies, M. c. exilis). These mammoths presumably reached the islands by swimming there when sea levels were lower, and decreased in size due to the limited food provided by the islands' small areas. Bones of larger specimens have also been found on the islands, but whether these were stages in the dwarfing process, or later arrivals of Columbian mammoths is unknown. Hybridization A 2011 ancient DNA study of the complete mitochondrial genome (inherited through the female line) showed that two examined Columbian mammoths, including the morphologically typical "Huntington mammoth", were grouped within a subclade of woolly mammoths. This suggests that the two populations interbred and produced fertile offspring. 
One possible explanation is introgression of a haplogroup from woolly to Columbian mammoths, or vice versa. A similar situation has been documented in the modern species of African elephant (Loxodonta): the African bush elephant (L. africana) and the African forest elephant (L. cyclotis). The authors of the study also suggested that the North American type formerly referred to as M. jeffersonii may have been a hybrid between the two species, as it is apparently morphologically intermediate. These findings were unexpected, and other researchers requested further study to clarify the situation. A 2015 study of mammoth molars confirmed that M. columbi evolved from Eurasian M. trogontherii, not M. meridionalis as had been suggested earlier, and noted that M. columbi and M. trogontherii were so similar in morphology that their classification as separate species may be questionable. The study also suggested that the animals in the range where M. columbi and M. primigenius overlapped formed a metapopulation of hybrids with varying morphology. In 2016, a genetic study of North American mammoth specimens confirmed that the mitochondrial diversity of M. columbi was nested within that of M. primigenius, suggested that both species interbred extensively and were both descended from M. trogontherii, and concluded that morphological differences between fossils may, therefore, not be reliable for determining taxonomy. The authors also questioned whether M. columbi and M. primigenius should be considered "good species", considering that they were able to interbreed after supposedly being separated for a million years, but cautioned that more specimens need to be sampled. In 2021, DNA older than a million years was sequenced for the first time, from two steppe mammoth-like teeth of Early Pleistocene age found in eastern Siberia. One tooth from Adycha (1–1.3 million years old) belonged to a lineage that was ancestral to later woolly mammoths, whereas the other from Krestovka (1.1–1.65 million years old) belonged to a new lineage, possibly a distinct species, that is estimated to have split from the ancestors of woolly mammoths around 2.7–1.8 million years ago. The study found that a large proportion of the ancestry of Columbian mammoths came from the Krestovka lineage, which was probably representative of the first mammoths to have colonised North America, with another substantial contribution coming from early representatives of the woolly mammoth lineage. The hybridisation between the two lineages likely happened at least 420,000 years ago, during the Middle Pleistocene, so that the Columbian mammoths of the Late Pleistocene had around 40–50% of their ancestry from the Krestovka lineage and 50–60% from the woolly mammoth lineage. Later woolly and Columbian mammoths also interbred occasionally, and mammoth species perhaps hybridized routinely when brought together by glacial expansion. The study also found that genetic adaptations to cold environments, such as hair growth and fat deposits, were already present in the steppe mammoth lineage, and were not unique to woolly mammoths. This research has raised questions about which material the name Mammuthus columbi should be applied to, as there is no obvious difference in tooth morphology between Early Pleistocene, presumably pre-hybridisation, North American mammoths and later Pleistocene M. columbi. In a 2024 review, Adrian Lister and Love Dalén argued that M. 
columbi should be retained in a broad sense covering the entire time-period of mammoth occupation of North America.
Description
The average male Columbian mammoth is estimated to have had a shoulder height of and a weight of , though large males may have reached in shoulder height and in weight. This mammoth was about the same size or somewhat smaller than the earlier mammoth species M. meridionalis and M. trogontherii, but was larger than the modern African bush elephant and the woolly mammoth, both of which reached about at the shoulder. Males were generally larger and more robust. The best indication of sex is the size of the pelvic girdle, since the opening that functions as the birth canal is always wider in females than in males. Like other mammoths, the Columbian mammoth had a high, single-domed head and a sloping back with a high shoulder hump; this shape resulted from the spinous processes (protrusions) of the back vertebrae decreasing in length from front to rear. Juveniles, though, had convex backs like Asian elephants. Other skeletal features include a short, deep rostrum (front part of the jaws), a rounded mandibular symphysis (where the two halves of the lower jaw connected) and the coronoid process of the mandible (upper protrusion of the jaw bone) extending above the molar surfaces. Apart from its larger size and more primitive molars, the Columbian mammoth also differed from the woolly mammoth by its more downturned mandibular symphysis; the dental alveoli (tooth sockets) of the tusks were directed more laterally away from the midline. Its tail was intermediate in length between that of modern elephants and that of the woolly mammoth. Since no Columbian mammoth soft tissue has been found, much less is known about its appearance than that of the woolly mammoth. It lived in warmer habitats than the woolly mammoth, and probably lacked many of the adaptations seen in that species. Hair thought to be that of the Columbian mammoth has been discovered in Bechan Cave in Utah, where mammoth dung has also been found. Some of this hair is coarse, and identical to that known to belong to woolly mammoths; however, since this location is so far south, it is unlikely to be woolly mammoth hair. The distribution and density of fur on the living animal is unknown, but it was probably less dense than that of the woolly mammoth due to the warmer habitat. An additional tuft of Columbian mammoth hair is known from near Castroville in California; this hair was noted to be red-orange and was described as being similar in colour to that of a golden retriever.
Dentition
Columbian mammoths had very long tusks (modified incisor teeth), which were more curved than those of modern elephants. Their tusks are among the largest recorded in proboscideans, with some reaching over in length and in weight, and with historical reports of tusks up to long and masses of around . Columbian mammoth tusks were usually not much larger than those of woolly mammoths, which reached . The tusks of females were much smaller and thinner. About a quarter of the tusks' length was inside the sockets; they grew spirally in opposite directions from the base, curving until the tips pointed towards each other, and sometimes crossed. Most of their weight would have been close to the skull, with less torque than straight tusks would have generated. The tusks were usually asymmetrical, with considerable variation; some tusks curved down instead of outwards, or were shorter due to breakage. 
Columbian mammoth tusks were generally less twisted than those of woolly mammoths. At six months of age, calves developed milk tusks a few centimetres long, which were replaced by permanent tusks a year later. Annual tusk growth of continued throughout life, slowing as the animal reached adulthood. Columbian mammoths had four functional molar teeth at a time, two in the upper jaw and two in the lower. A mammoth's molars were replaced five times over the animal's lifetime, with a total of six succeeding molars on each half of the jaws. About of the crown was within the jaw, and was above. The teeth had separated ridges (lamellae) of enamel, which were covered in "prisms" directed towards the chewing surface. The ridges were wear-resistant and held together with cementum and dentin. The crowns of the lower jaw were pushed forward and up as they wore down, comparable to a conveyor belt. The first molars were about the size of those of a human; the third ones were long, and the sixth ones were about long and weighed . With each replacement, the molars grew larger and gained more ridges; the number of plates varied between individuals. There were typically 18–21 ridges on each third molar, similar to the count in M. trogontherii, but fewer than the 24–28 typical of woolly mammoths. Growing of ridge took about 10.6 years.
Paleobiology
Like that of modern elephants, the mammoth's sensitive, muscular trunk was a limb-like organ with many functions. It was used for manipulating objects and for social interaction. Although healthy adult mammoths could defend themselves from predators with their tusks, trunks, and size, juveniles and weakened adults were vulnerable to pack hunters such as wolves and big cats. Bones of juvenile Columbian mammoths, accumulated by Homotherium (the scimitar-toothed cat), have been found in Friesenhahn Cave in Texas. Tusks may have been used in intraspecies fighting for territory or mates and for display, to attract females and intimidate rivals. Two Columbian mammoths that died in Nebraska with tusks interlocked provide evidence of fighting behavior. The mammoths could use their tusks as weapons by thrusting, swiping, or crashing them down, and used them in pushing contests by interlocking them, which sometimes resulted in breakage. The tusks' curvature made them unsuitable for stabbing.
Migration
Although to what extent Columbian mammoths migrated is unclear, an isotope analysis of Blackwater Draw in New Mexico indicated that they spent part of the year in the Rocky Mountains, away. The study of tusk rings may shed further light on mammoth migration. On Goat Rock Beach in Sonoma Coast State Park, blueschist and chert outcrops (nicknamed "Mammoth Rocks") show evidence of having been rubbed by Columbian mammoths or mastodons. The rocks have polished areas above the ground, primarily near their edges, and are similar to African rubbing rocks used by elephants and other herbivores to rid themselves of mud and parasites. Similar rocks exist in Hueco Tanks, Texas, and on Cornudas Mountain in New Mexico. Mathematical modelling indicates that Columbian mammoths would have had to be periodically on the move to avoid starvation, as prolonged stays in one area would rapidly exhaust the food resources necessary to sustain a population.
Social behavior
Like modern elephants, Columbian mammoths were probably social and lived in matriarchal (female-led) family groups; most of their other social behavior was also similar to that of modern elephants. 
This is supported by fossil assemblages such as the Dent site in Colorado and the Waco Mammoth National Monument in Waco, Texas, where groups consisting entirely of female and juvenile Columbian mammoths have been found (implying female-led family groups). The latter assemblage includes 22 skeletons, with 15 individuals representing a herd of females and juveniles that died in a single event. The herd was originally proposed to have been killed by a flash flood, and the arrangement of some of the skeletons suggests that the females may have formed a defensive ring around the juveniles. In 2016, the herd was suggested to have died by drought near a diminishing watering hole; scavenging traces on the bones contradict rapid burial, and the absence of calves and the large diversity of other animal species found gathered at the site support this scenario. Another group, consisting of a bull and six females, was found at the same site; although both groups died between 64,000 and 73,000 years ago, whether they died in the same event is unknown. At the Murray Springs Clovis Site in Arizona, where several Columbian mammoth skeletons have been excavated, a trackway similar to that left by modern elephants leads to one of the skeletons. The mammoth may have made the trackway before it died, or another individual may have approached the dead or dying animal—similar to the way modern elephants guard dead relatives for several days. Accumulations of modern elephant remains have been called "elephants' graveyards", because these sites were erroneously thought to be where old elephants went to die. Similar accumulations of mammoth bones have been found; these are thought to be the result of individuals dying near or in rivers over thousands of years and their bones being accumulated by the water (such as in the Aucilla River in Florida), or animals dying after becoming mired in mud. Some accumulations are thought to be the remains of herds that died at the same time, perhaps due to flooding. Columbian mammoths are occasionally preserved in volcanic deposits such as those in Tocuila, Texcoco, Mexico, where a volcanic lahar mudflow covered at least seven individuals 12,500 years ago. How many mammoths lived at one location at a time is unknown, but the number likely varied by season and lifecycle. Modern elephants can form large herds, sometimes consisting of multiple family groups, and these herds can include thousands of animals migrating together. Mammoths may have formed large herds more often than modern elephants, since animals living in open areas are more likely to do this than those in forested areas. Natural traps Many specimens also accumulated in natural traps, such as sinkholes and tar pits. The Mammoth Site in Hot Springs, South Dakota, is a 26,000-year-old, roughly -long sinkhole that functioned for 300 to 700 years before filling with sediment. The site is the opposite scenario of that in Waco; all but one of the at least 55 skeletons—additional skeletons are excavated each year—are male, and accumulated over time rather than in a single event. Like modern male elephants, male mammoths primarily are assumed to have lived alone, to be more adventurous (especially young males), and to be more likely to encounter dangerous situations than the females. The mammoths may have been lured to the hole by warm water or vegetation near the edges, slipping in and drowning or starving. 
Isotope studies of growth rings have shown that most of the mammoths died during spring and summer, which may have correlated with vegetation near the sinkhole. One individual, nicknamed "Murray", lies on its side, and probably died in this pose while struggling to get free. Deep footprints of mammoths attempting to free themselves from the sinkhole's mud can be seen in vertically excavated sections of the site. Since the early 20th century, excavations at the La Brea Tar Pits in Los Angeles have yielded of fossils from 600 species of flora and fauna, including several Columbian mammoths. Many of the fossils are the remains of animals that became stuck in asphalt puddles that seeped to the surface of the pits, 40,000 to 11,500 years ago. Dust and leaves likely concealed the liquid asphalt, which then trapped unwary animals. Mired animals died from hunger or exhaustion; their corpses attracted predators, which sometimes became stuck themselves. The fossil record of the tar pits is dominated by the remains of predators, such as large canids and felids. Fossils of different animals are found stuck together when they are excavated from the pits. The tar pits do not preserve soft tissue, and a 2014 study that attempted to extract DNA from a Columbian mammoth concluded that asphalt may degrade the DNA of animals mired in it. A site in an airport construction area in Mexico nicknamed "mammoth central" is believed to have been the boggy shore of an ancient lake bed where animals were trapped 10,000 to 20,000 years ago. Human tools have been found at the site. It remains unclear whether the 200 Columbian mammoths found there died of natural causes and were then carved by humans. Some have hypothesized that humans drove the Columbian mammoths into the area to kill them. The site is only from artificial pits which were once used by humans to trap and kill large mammals.
Diet
An adult Columbian mammoth would have needed more than of food per day, and may have foraged for 20 hours a day. Mammoths chewed their food by using their powerful jaw muscles to move the mandible forward and close the mouth, then backward while opening; the sharp enamel ridges thereby cut across each other, grinding the food. The ridges were wear-resistant, enabling the animal to chew large quantities of food that contained grit. The trunk could be used for pulling up large tufts of grass, picking buds and flowers, or tearing leaves and branches from trees and shrubs, and the tusks were used to dig up plants and strip bark from trees. Digging is indicated on preserved tusks by flat, polished sections of the surface that would have reached the ground. Isotope studies of Columbian mammoths from Mexico and the United States have shown that their diet varied by location, consisting of a mix of C3 plants (most plants) and C4 plants (such as grasses), and that they were not restricted to grazing or browsing. Even individual Columbian mammoths from the same geographic location show major differences in dental mesowear, indicating extensive variation in dietary habits between different individuals within the same population. Evidence from Florida reveals that Columbian mammoths typically preferred C4 grasses, but that they would alter their dietary habits and consume greater proportions of non-traditional foods during periods of significant environmental change. 
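Such C3/C4 reconstructions rest on a simple two-endmember carbon-isotope mixing calculation. The sketch below illustrates the idea; the endmember values, the diet-to-enamel enrichment factor, and the function name are illustrative assumptions, not figures taken from the studies cited above.

```python
# Minimal sketch of the two-endmember carbon-isotope mixing calculation that
# underlies C3/C4 diet reconstructions like those mentioned above. All numeric
# values below are illustrative assumptions, not figures from the cited studies.

D13C_C3_PLANTS = -27.0   # assumed mean delta-13C of C3 vegetation (per mil)
D13C_C4_PLANTS = -13.0   # assumed mean delta-13C of C4 grasses (per mil)
DIET_TO_ENAMEL = 14.0    # assumed diet-to-enamel enrichment for large herbivores

def fraction_c4(d13c_enamel: float) -> float:
    """Estimate the fraction of C4 plants in the diet from an enamel delta-13C value."""
    d13c_diet = d13c_enamel - DIET_TO_ENAMEL          # back-calculate the dietary value
    f = (d13c_diet - D13C_C3_PLANTS) / (D13C_C4_PLANTS - D13C_C3_PLANTS)
    return min(max(f, 0.0), 1.0)                      # clamp to the physical range 0-1

if __name__ == "__main__":
    # A hypothetical enamel value of -2 per mil would imply a strongly C4 (grass) diet.
    print(f"Estimated C4 fraction: {fraction_c4(-2.0):.0%}")
```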
Stomach contents from Columbian mammoths are rare, since no carcasses have been found, but plant remains were discovered between the pelvis and ribs of the "Huntington mammoth" when it was excavated in Utah. Microscopy showed that these chewed remains consisted of sedges, grasses, fir twigs and needles, oak, and maple. A large amount of mammoth dung has been found in two caves in Utah. The dry conditions and stable temperature of Bechan Cave (bechan is Navajo for "large faeces") has preserved 16,000- to 13,500-year-old elephant dung, most likely from Columbian mammoths. The dung consists of 95% grasses and sedges, and varies from 0 to 25% woody plants between dung boluses, including saltbush, sagebrush, water birch, and blue spruce. This is similar to the diet documented for the woolly mammoth, although browsing seems to have been more important for the Columbian mammoth. The cover of dung is thick, and has a volume of , with the largest boluses in diameter. The Bechan dung could have been produced by a small group of mammoths over a relatively short time, since adult African elephants drop an average of of dung every two hours and each day. Giant North American fruits of plants such as the Osage-orange, Kentucky coffeetree, pawpaw and honey locust have been proposed to have evolved in tandem with now-extinct American megafauna such as mammoths and other proboscideans, since no extant endemic herbivores are able to ingest these fruits and disperse their seeds. Introduced cattle and horses have since taken over this ecological role. Life history The lifespan of the Columbian mammoth is thought to have been about 80 years. The lifespan of a mammal is related to its size; Columbian mammoths are larger than modern elephants, which have a lifespan of about 60 years. The age of a mammoth can be roughly determined by counting the growth rings of its tusks when viewed in cross section. However, ring-counting does not account for a mammoth's early years; early growth is represented in tusk tips, which are usually worn away. In the remainder of the tusk, each major line represents a year, with weekly and daily lines found in between. Dark bands correspond to summer, making determining the season in which a mammoth died possible. Tusk growth slowed when foraging became more difficult, such as during illness or when a male mammoth was banished from the herd (male elephants live with their herds until about the age of 10). Mammoths continued growing during adulthood, as do other elephants. Males grew until age 40, and females until age 25. Mammoths may have had gestation periods of 21–22 months, like those of modern elephants. Columbian mammoths had six sets of molars in the course of a lifetime. At 6–12 months, the second set of molars would erupt, with the first set worn out at 18 months of age. The third set of molars lasted for 10 years, and this process was repeated until the sixth set emerged at 30 years of age. When the last set of molars wore out, the animal would be unable to chew, and would die of starvation. Almost all vertebrae of the "Huntington mammoth", a very aged specimen, were deformed by arthritic disease, and four of its lumbar vertebrae were fused; some bones also indicate bacterial infection, such as osteomyelitis. The condition of the bones suggests the specimen died of old age and malnutrition. 
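The ring-counting method described above amounts to simple book-keeping: count the major annual lines preserved in the tusk and add an allowance for the early years lost with the worn tip. A minimal sketch of that arithmetic is given below; the function names and the default tip correction are hypothetical placeholders, not published values.

```python
# Minimal sketch of tusk-ring ageing as described above: each major ring in the
# preserved tusk records one year, and the earliest years (recorded in the worn
# tusk tip) must be added back as an estimate. The default correction of 2 years
# and the function names are hypothetical placeholders.

def estimate_age_at_death(annual_rings_counted: int, missing_tip_years: int = 2) -> int:
    """Rough age in years: counted annual rings plus assumed years lost with the worn tip."""
    return annual_rings_counted + missing_tip_years

def season_of_death(outermost_band_is_dark: bool) -> str:
    """Dark growth bands correspond to summer, so the outermost band gives a coarse
    indication of the season in which the animal died."""
    return "summer" if outermost_band_is_dark else "outside summer"

if __name__ == "__main__":
    print(estimate_age_at_death(38, missing_tip_years=3))  # e.g. roughly 41 years
    print(season_of_death(True))
```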
Distribution and habitat
Columbian mammoths inhabited much of North America, ranging from southern Canada to Central America (where the species was largely confined to the vicinity of the Pacific coast), with the southernmost record being in northern Costa Rica. These areas may have offered more varied habitats than those inhabited by woolly mammoths in the north (the mammoth steppe). Some areas were covered by grasses, herbaceous plants, trees, and shrubs; their composition varied from region to region, and included grassland, savanna, and aspen parkland habitats. Wooded areas also occurred; although mammoths would not have preferred forests, clearings in them could provide the animals with grasses and herbs. The Columbian mammoth shared its habitat with other now-extinct Pleistocene mammals such as Glyptotherium, the sabertooth cat Smilodon, ground sloths, the camel Camelops, mastodons, horses, and bison. It did not live in Arctic Canada or Alaska, which were inhabited by woolly mammoths. Fossils of woolly and Columbian mammoths have been found in the same place in a few areas of North America where their ranges overlapped, including the Hot Springs Site. Whether the two species were sympatric and lived there simultaneously, or whether woolly mammoths entered southern areas when Columbian mammoth populations were absent, is unknown. The arrival of the Columbian mammoth in North America is thought to have resulted in the extinction of the grazing gomphothere (a relative of elephantids and mastodons) Stegomastodon around 1.2 million years ago, through competitive exclusion driven by the greater grazing efficiency of Columbian mammoths. Competition with mammoths has also been suggested as a reason for the contraction of the northern part of the range of the generalist gomphothere Cuvieronius, including most of its former presence in the United States.
Relationship with humans
Towards the end of the Late Pleistocene, around or after 16,000 years ago, Paleoindians entered the Americas through the Beringia land bridge, and evidence documents their interactions with Columbian mammoths. Tools made from Columbian mammoth remains have been discovered at several North American sites. At Tocuila, Mexico, mammoth bones were quarried 13,000 years ago to produce lithic flakes and cores. At the Lange-Ferguson Site in South Dakota, the remains of two mammoths were found with two 12,800-year-old cleaver choppers made from a mammoth shoulder blade; the choppers had been used to butcher the mammoths. At the same site, a flake knife made from a long mammoth bone was also found wedged against mammoth vertebrae. At Murray Springs, archeologists discovered a 13,100-year-old object made from a mammoth femur; the object is thought to be a shaft wrench, a tool for straightening wood and bone to make spear-shafts (the Inuit use similar tools). Although some sites potentially documenting human interactions with Columbian mammoths have been reported from as early as 20,000 years ago, these have been criticised, as they lack stone tools, and the supposed human-made marks on the bones are potentially the result of natural processes. Paleoindians of the Clovis culture, which arose roughly 13,000 years ago, may have been the first humans to hunt mammoths extensively. These people are thought to have hunted Columbian mammoths with spears tipped with Clovis points, which were thrown or thrust. 
Although Clovis points have been found with Columbian mammoth remains at several sites, archeologists disagree about whether the finds represent hunting, the scavenging of dead mammoths, or coincidental association. A female mammoth at the Naco-Mammoth Kill Site in Arizona, found with eight Clovis points near its skull, shoulder blade, ribs, and other bones, is considered the most convincing evidence for hunting. In modern experiments, replica spears have been able to penetrate the rib cages of African elephants, with reuse causing little damage to the points. Columbian mammoths are the animals most strongly associated with the Clovis culture, suggesting that they were of particular importance in the Clovis lifestyle compared with other megafauna. Other sites show more circumstantial evidence of mammoth hunting, such as piled bones bearing butcher marks. Some of these sites are not closely associated with Clovis points. The Dent site (the first evidence of mammoth hunting in North America, discovered in 1932) and the Lehner Mammoth-Kill Site, where multiple juvenile and adult mammoths have been found with butcher marks and in association with Clovis points, were once interpreted as the killing of entire herds by Clovis hunters. However, isotope studies have shown that the accumulations represent individual deaths at different seasons of the year, so they are not herds killed in single incidents. Many other such assemblages of bones with butcher marks may also represent accumulations over time, and so are ambiguous as evidence for large-scale hunting. A 2021 article by the American paleontologist Metin I. Eren and colleagues suggested that mammoths were not very susceptible to Clovis point weapons because of their thick skin, hair, muscles, ribs, and fat, which would have impeded most types of attack available to humans at the time, and proposed that Clovis people primarily scavenged mammoths. In response, other scientists found no reason to abandon the traditional idea that Clovis points were used to hunt big game; one suggested that such spears could have been thrown or thrust at areas of the torso not protected by ribs, with the wounds not killing the mammoths instantly, so that the hunters could follow their prey until it had bled to death. Isotopic analysis of Anzick-1, a young boy found buried with Clovis culture artifacts in Montana, suggests that Columbian mammoths made up around 35–40% of the diet of his mother, supporting the centrality of mammoth consumption to the lifestyle of Clovis peoples. Petroglyphs on the Colorado Plateau have been interpreted as depictions of either Columbian mammoths or mastodons. A bone fragment from Vero Beach, Florida, estimated to be 13,000 years old and possibly the earliest known example of art in the Americas, is engraved with either a mammoth or a mastodon. The case for the authenticity of this depiction rests on the continuity of mineralisation across the engraved markings; other possible indicators of authenticity are inconclusive at present. Petroglyphs from the San Juan River in Utah have been suggested to be 11,000–13,000 years old and to include depictions of two Columbian mammoths; the mammoths' domed heads distinguish them from mastodons. They are also shown with two "fingers" on their trunks, a feature known from European depictions of mammoths. The tusks are short, which may indicate that they are meant to be females. A carving of a bison (possibly the extinct Bison antiquus) is superimposed on one of the mammoth carvings and may be a later addition. 
Geological dating of the San Juan River depictions in 2013 has shown them to be less than 4,000 years old, after mammoths and mastodons had gone extinct, and they may instead be an arrangement of unrelated elements. Other possible depictions of Columbian mammoths have been shown to be either misinterpretations or fraudulent. The Columbian mammoth is the state fossil of Washington and South Carolina. Nebraska's state fossil is "Archie", a Columbian mammoth specimen found in the state in 1922. "Archie" is currently on display at Elephant Hall in Lincoln, Nebraska, and is the largest mounted mammoth specimen in the United States.
Extinction
Columbian and woolly mammoths both disappeared from mainland North America by the latest Pleistocene, with no recorded Holocene survival, alongside most other latest Pleistocene megafauna of North America. The latest calibrated radiocarbon date for the Columbian mammoth is from the Dent site in Colorado, which dates to 12,124–12,705 years Before Present (BP), during the onset of the Younger Dryas cold phase (12,900–11,700 years BP) and the Clovis culture (13,200–12,800 years BP). Its younger calibrated date compared with most other extinct latest Pleistocene species suggests that it was one of the last North American megafauna to have gone extinct. Some of the most recent Columbian mammoth remains have been dated to around 10,900 years ago, although this date is uncalibrated and the true age is therefore older. This extinction formed part of the Late Pleistocene extinctions of North America, which coincided with both the Clovis culture and the Younger Dryas. Scientists do not know whether these extinctions happened abruptly or were drawn out. During this period, 40 mammal species disappeared from North America, almost all of which weighed over ; the extinction of the mammoths cannot be explained in isolation. Scientists are divided over whether climate change, hunting, or a combination of the two drove the extinction of the Columbian mammoths. According to the climate-change hypothesis, warmer weather led to the shrinking of suitable habitat for Columbian mammoths, which turned from parkland to forest, grassland, and semidesert, with less diverse vegetation. The "overkill hypothesis" attributes the extinction to hunting by humans, an idea first proposed by the geoscientist Paul S. Martin in 1967; more recent research on this subject has varied in its conclusions. A 2002 study concluded that the archeological record did not support the "overkill hypothesis", given that only 14 Clovis sites (12 with mammoth remains and two with mastodon remains) out of 76 examined provided strong evidence of hunting. In contrast, a 2007 study found that the Clovis record indicated the highest frequency of prehistoric exploitation of proboscideans for subsistence in the world, and supported the "overkill hypothesis". A 2019 study that used mathematical modelling to simulate correlations between the migrations of humans and Columbian mammoths also supported the "overkill hypothesis". Whatever the actual cause of extinction, large mammals are generally more susceptible to hunting pressure than smaller ones due to their smaller population sizes and low reproduction rates. On the other hand, large mammals are generally less vulnerable to climatic stresses, since they have greater fat deposits at their disposal and can migrate long distances to escape food shortages.
Biology and health sciences
Proboscidea
Animals
2273079
https://en.wikipedia.org/wiki/Red%20river%20hog
Red river hog
The red river hog (Potamochoerus porcus) or bushpig (a name also used for Potamochoerus larvatus) is a wild member of the pig family living in Africa, with most of its distribution in the Guinean and Congolian forests. It is rarely seen away from rainforests, and generally prefers areas near rivers or swamps. Description The red river hog has striking orange to reddish-brown fur, with black legs and a tufted white stripe along the spine. Adults have white markings around the eyes and on the cheeks and jaws; the rest of the muzzle and face are a contrasting black. The fur on the jaw and the flanks is longer than that on the body, with the males having especially prominent facial whiskers. Unlike other species of pig native to tropical Africa, the entire body is covered in hair, with no bare skin visible. Adults weigh and stand tall, with a length of . The thin tail is long and ends in a tuft of black hair. The ears are also long and thin, ending in tufts of white or black hair that may reach in length. Boars are somewhat larger than sows, and have distinct conical protuberances on either side of the snout and rather small, sharp tusks. The facial protuberances are bony and probably protect the boar's facial tendons during head-to-head combat with other males. Red river hogs have a dental formula of , similar to that of wild boar. Both sexes have scent glands close to the eyes and on the feet; males have additional glands near the tusks on the upper jaw and on the penis. There is also a distinctive glandular structure about in diameter on the chin, which probably has a tactile function. Females have six teats. Distribution and habitat The red river hog lives in rainforests, wet dense savannas, and forested valleys, and near rivers, lakes and marshes. The species' distribution ranges from the Congo area and Gambia to the eastern Congo, southwards to the Kasai and the Congo River. The exact delineation of its range versus that of the bushpig is unclear; but in broad terms, the red river hog occupies western and central Africa, and the bushpig occupies eastern and southern Africa. Where the two meet, they are sometimes said to interbreed, although other authorities dispute this. Although numerous subspecies have been identified in the past, none are currently recognised. Behaviour Red river hogs are often active during the day, but are primarily nocturnal or crepuscular. They typically live in small groups of approximately six to ten animals, composed of a single adult male, and a number of adult females and their young. However, much larger groups, some with over 30 individuals, have been noted in particularly favourable habitats. The boar defends his harem aggressively against predators, with leopards being a particularly common threat. They communicate almost continuously with grunts and squeals with a repertoire that can signal alarm, distress, or passive contact. The species is omnivorous, eating mainly roots, bulbs, and tubers, and supplements its diet with fruit, seeds, nuts, water plants, grasses, herbs, fungi, eggs, dead animal and plant remains, insects, snails, lizards, other reptiles, and domestic animals such as piglets, goats, and sheep. It uses its large muzzle to snuffle about in the soil in search of food, as well as scraping the ground with their tusks and fore-feet. They can cause damage to agricultural crops, such as cassava and yams. 
Reproduction Red river hogs breed seasonally, so that the young are born between the end of the dry season in February and the midpoint of the rainy season in July. The oestrus cycle lasts 34 to 37 days. The male licks the female's genital region before mating, which lasts about five to ten minutes. Gestation lasts 120 days. The mother constructs a nest from dead leaves and dry grass before giving birth to a litter of up to six piglets, with three to four being most common. The piglets weigh at birth, and are initially dark brown with yellowish stripes and spots. They are weaned after about four months, and develop the plain reddish adult coat by about six months; the dark facial markings do not appear until they reach adulthood at about two years of age. They probably live for about fifteen years in the wild.
Biology and health sciences
Pigs_2
Animals
2273378
https://en.wikipedia.org/wiki/Arene%20substitution%20pattern
Arene substitution pattern
Arene substitution patterns are part of organic chemistry IUPAC nomenclature and pinpoint the position of substituents other than hydrogen in relation to each other on an aromatic hydrocarbon. Ortho, meta, and para substitution In ortho-substitution, two substituents occupy positions next to each other, which may be numbered 1 and 2. In the diagram, these positions are marked R and ortho. In meta-substitution, the substituents occupy positions 1 and 3 (corresponding to R and meta in the diagram). In para-substitution, the substituents occupy the opposite ends (positions 1 and 4, corresponding to R and para in the diagram). The toluidines serve as an example for these three types of substitution. Synthesis Electron donating groups, for example amino, hydroxyl, alkyl, and phenyl groups tend to be ortho/para-directors, and electron withdrawing groups such as nitro, nitrile, and ketone groups, tend to be meta-directors. Properties Although the specifics vary depending on the compound, in simple disubstituted arenes, the three isomers tend to have rather similar boiling points. However, the para isomer usually has the highest melting point, and the lowest solubility in a given solvent, of the three isomers. Separation of ortho and para isomers Because electron donating groups are both ortho and para directors, separation of these isomers is a common problem in synthetic chemistry. Several methods exist in order to separate these isomers: Column chromatography will often separate these isomers, as the ortho is more polar than the para in general. Fractional crystallisation can be used to obtain pure para product, relying on the principle that it is less soluble than the ortho and thus will crystallise first. Care must be taken to avoid cocrystallisation of the ortho isomer. Many nitro compounds' ortho and para isomers have quite different boiling points. These isomers can often be separated by distillation. These separated isomers can be converted to diazonium salts and used to prepare other pure ortho or para compounds. Ipso, meso, and peri substitution Ipso-substitution describes two substituents sharing the same ring position in an intermediate compound in an electrophilic aromatic substitution. Trimethylsilyl, tert-butyl, and isopropyl groups can form stable carbocations, hence are ipso directing groups. Meso-substitution refers to the substituents occupying a benzylic position. It is observed in compounds such as calixarenes and acridines. Peri-substitution occurs in naphthalenes for substituents at the 1 and 8 positions. Cine and tele substitution In cine-substitution, the entering group takes up a position adjacent to that occupied by the leaving group. For example, cine-substitution is observed in aryne chemistry. Tele-substitution occurs when the new position is more than one atom away on the ring. Origins The prefixes ortho, meta, and para are all derived from Greek, meaning correct, following, and beside, respectively. The relationship to the current meaning is perhaps not obvious. The ortho description was historically used to designate the original compound, and an isomer was often called the meta compound. For instance, the trivial names orthophosphoric acid and trimetaphosphoric acid have nothing to do with aromatics at all. Likewise, the description para was reserved for just closely related compounds. Thus Jöns Jakob Berzelius originally called the racemic form of tartaric acid "paratartaric acid" (another obsolete term: racemic acid) in 1830. 
The use of the prefixes ortho, meta and para to distinguish isomers of disubstituted aromatic rings starts with Wilhelm Körner in 1867, although he applied the ortho prefix to a 1,4-isomer and the meta prefix to a 1,2-isomer. It was the German chemist Karl Gräbe who, in 1869, first used the prefixes ortho-, meta-, para- to denote specific relative locations of the substituents on a disubstituted aromatic ring (namely naphthalene). In 1870, the German chemist Viktor Meyer first applied Gräbe's nomenclature to benzene. The current nomenclature was introduced by the Chemical Society in 1879. Examples Examples of the use of this nomenclature are given for isomers of cresol, C6H4(OH)(CH3): There are three arene substitution isomers of dihydroxybenzene (C6H4(OH)2) – the ortho isomer catechol, the meta isomer resorcinol, and the para isomer hydroquinone: There are three arene substitution isomers of benzenedicarboxylic acid (C6H4(COOH)2) – the ortho isomer phthalic acid, the meta isomer isophthalic acid, and the para isomer terephthalic acid: These terms can also be used in six-membered heterocyclic aromatic systems such as pyridine, where the nitrogen atom is considered one of the substituents. For example, nicotinamide and niacin, shown meta substitutions on a pyridine ring, while the cation of pralidoxime is an ortho isomer.
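The ortho/meta/para assignment reduces to a simple rule about positions on a six-membered ring: substituents at positions 1 and 2 are ortho, 1 and 3 are meta, and 1 and 4 are para. The short sketch below illustrates this positional rule; the function name and interface are hypothetical and chosen only for illustration.

```python
# Minimal sketch of the positional rule described above for a six-membered
# aromatic ring: substituents 1,2 are ortho, 1,3 are meta, and 1,4 are para.
# The function name and interface are made up for illustration only.

def substitution_pattern(pos_a: int, pos_b: int) -> str:
    """Classify the relationship of two substituents on a benzene ring,
    given their ring positions (1-6)."""
    if not (1 <= pos_a <= 6 and 1 <= pos_b <= 6) or pos_a == pos_b:
        raise ValueError("positions must be two distinct ring atoms numbered 1-6")
    separation = abs(pos_a - pos_b)
    separation = min(separation, 6 - separation)  # the ring wraps around
    return {1: "ortho", 2: "meta", 3: "para"}[separation]

if __name__ == "__main__":
    print(substitution_pattern(1, 2))  # ortho
    print(substitution_pattern(1, 3))  # meta
    print(substitution_pattern(2, 5))  # para (positions 2 and 5 lie opposite each other)
```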
Physical sciences
Aromatic hydrocarbons
Chemistry
2273604
https://en.wikipedia.org/wiki/Coordination%20geometry
Coordination geometry
The coordination geometry of an atom is the geometrical pattern defined by the atoms around the central atom. The term is commonly applied in the field of inorganic chemistry, where diverse structures are observed. The coordination geometry depends on the number, not the type, of ligands bonded to the metal centre, as well as on their locations. The number of atoms bonded is the coordination number. The geometrical pattern can be described as a polyhedron whose vertices are the centres of the coordinating atoms in the ligands. The coordination preference of a metal often varies with its oxidation state. The number of coordination bonds (coordination number) can vary from two in to as high as 20 in . One of the most common coordination geometries is octahedral, where six ligands are coordinated to the metal in a symmetrical distribution, leading to the formation of an octahedron if lines were drawn between the ligands. Other common coordination geometries are tetrahedral and square planar. Crystal field theory may be used to explain the relative stabilities of transition metal compounds of different coordination geometry, as well as the presence or absence of paramagnetism, whereas VSEPR theory may be used for complexes of main group elements to predict geometry.
Crystallography usage
In a crystal structure the coordination geometry of an atom is the geometrical pattern of coordinating atoms, where the definition of coordinating atoms depends on the bonding model used. For example, in the rock salt ionic structure each sodium atom has six near-neighbour chloride ions in an octahedral geometry, and each chloride similarly has six near-neighbour sodium ions in an octahedral geometry. In metals with the body-centred cubic (bcc) structure each atom has eight nearest neighbours in a cubic geometry. In metals with the face-centred cubic (fcc) structure each atom has twelve nearest neighbours in a cuboctahedral geometry.
Table of coordination geometries
A table of the coordination geometries encountered is shown below with examples of their occurrence in complexes found as discrete units in compounds and in coordination spheres around atoms in crystals (where there is no discrete complex).
Naming of inorganic compounds
IUPAC has introduced the polyhedral symbol as part of its 2005 recommendations on the nomenclature of inorganic chemistry to describe the geometry around an atom in a compound. The IUCr has proposed a symbol which is shown as a superscript in square brackets in the chemical formula. For example, CaF2 would be Ca[8cb]F2[4t], where [8cb] means cubic coordination and [4t] means tetrahedral. The equivalent symbols in IUPAC nomenclature are CU−8 and T−4 respectively. The IUPAC symbol is applicable to complexes and molecules, whereas the IUCr proposal applies to crystalline solids.
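As a rough illustration of how coordination numbers relate to common geometry names and IUPAC polyhedral symbols, the sketch below encodes a few well-known cases mentioned above. The dictionary layout and function name are assumptions made for illustration; coordination number alone does not fix the geometry, as the two entries for four-coordination show.

```python
# A small sketch relating common coordination numbers to geometry names and
# IUPAC polyhedral symbols. Only a few well-known cases are included, and the
# mapping is illustrative: a given coordination number can correspond to more
# than one geometry (e.g. tetrahedral or square planar for four-coordination).

COMMON_GEOMETRIES = {
    2: [("linear", "L-2")],
    4: [("tetrahedral", "T-4"), ("square planar", "SP-4")],
    6: [("octahedral", "OC-6")],
    8: [("cubic", "CU-8")],
}

def describe(coordination_number: int) -> str:
    """Return a readable summary of the common geometries for a coordination number."""
    entries = COMMON_GEOMETRIES.get(coordination_number)
    if entries is None:
        return f"no common geometry listed for coordination number {coordination_number}"
    return "; ".join(f"{name} ({symbol})" for name, symbol in entries)

if __name__ == "__main__":
    # The fluorite example in the text: Ca is 8-coordinate (CU-8), F is 4-coordinate (T-4).
    print(describe(8))  # cubic (CU-8)
    print(describe(4))  # tetrahedral (T-4); square planar (SP-4)
```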
Physical sciences
Bond structure
Chemistry
2274197
https://en.wikipedia.org/wiki/Stegoceras
Stegoceras
Stegoceras is a genus of pachycephalosaurid (dome-headed) dinosaur that lived in what is now North America during the Late Cretaceous period, about 77.5 to 74 million years ago (mya). The first specimens from Alberta, Canada, were described in 1902, and the type species Stegoceras validum was based on these remains. The generic name means "horn roof", and the specific name means "strong". Several other species have been placed in the genus over the years, but these have since been moved to other genera or deemed junior synonyms. Currently only S. validum and S. novomexicanum, named in 2011 from fossils found in New Mexico, remain. The validity of the latter species has also been debated, and it may not even belong to the genus Stegoceras. Stegoceras was a small, bipedal dinosaur about long, and weighed around . The skull was roughly triangular with a short snout, and had a thick, broad, and relatively smooth dome on the top. The back of the skull had a thick "shelf" over the occiput, and it had a thick ridge over the eyes. Much of the skull was ornamented by tubercles (or round "outgrowths") and nodes (or "knobs"), many in rows, and the largest formed small horns on the shelf. The teeth were small and serrated. The skull is thought to have been flat in juvenile animals and to have grown into a dome with age. It had a rigid vertebral column, and a stiffened tail. The pelvic region was broad, perhaps due to an extended gut. Originally known only from skull domes, Stegoceras was one of the first known pachycephalosaurs, and the incompleteness of these initial remains led to many theories about the affinities of this group. A complete Stegoceras skull with associated parts of the skeleton was described in 1924, which shed more light on these animals. Pachycephalosaurs are today grouped with the horned ceratopsians in the group Marginocephalia. Stegoceras itself has been considered basal (or "primitive") compared to other pachycephalosaurs. Stegoceras was most likely herbivorous, and it probably had a good sense of smell. The function of the dome has been debated, and competing theories include use in intra-specific combat (head or flank-butting), sexual display, or species recognition. S. validum is known from the Dinosaur Park Formation and the Oldman Formation, whereas the controversial S. novomexicanum is known from the Fruitland and Kirtland Formation. History of discovery The first known remains of Stegoceras were collected by Canadian palaeontologist Lawrence Lambe from the Belly River Group, in the Red Deer River district of Alberta, Canada. These remains consisted of two partial skull domes (specimens CMN 515 and CMN 1423 in the Canadian Museum of Nature) from two animals of different sizes collected in 1898, and a third partial dome (CMN 1594) collected in 1901. Based on these specimens, Lambe described and named the new monotypic genus and species Stegoceras validus in 1902. The generic name Stegoceras comes from the Greek stegè/στέγη, meaning "roof" and keras/κέρας meaning "horn". The specific name validus means "strong" in Latin, possibly in reference to the thick skull-roof. Because the species was based on multiple specimens (a syntype series), CMN 515 was designated as the lectotype specimen by John Bell Hatcher in 1907. 
As no similar remains had been found in the area before, Lambe was unsure of what kind of dinosaur they were, and whether they represented one species or more; he suggested the domes were "prenasals" situated before the nasal bones on the midline of the head, and noted their similarity to the nasal horn-core of a Triceratops specimen. In 1903, Hungarian palaeontologist Franz Nopcsa von Felső-Szilvás suggested that the fragmentary domes of Stegoceras were in fact frontal and nasal bones, and that the animal would therefore have had a single, unpaired horn. Lambe was sympathetic to this idea of a new type of "unicorn dinosaur" in a 1903 review of Nopscsa's paper. At this time, there was still uncertainty over which group of dinosaur Stegoceras belonged to, with both ceratopsians (horned dinosaurs) and stegosaurs (plated dinosaurs) as contenders. Hatcher doubted whether the Stegoceras specimens belonged to the same species and whether they were dinosaurs at all, and suggested the domes consisted of the frontal, occipital, and parietal bones of the skull. In 1918, Lambe referred another dome (CMN 138) to S. validus, and named a new species, S. brevis, based on specimen CMN 1423 (which he originally included in S. validus). By this time, he considered these animals as members of Stegosauria (then composed of both families of armoured dinosaurs, Stegosauridae and Ankylosauridae), in a new family he called Psalisauridae (named for the vaulted or dome-shaped skull roof). In 1924, the American palaeontologist Charles W. Gilmore described a complete skull of S. validus with associated postcranial remains, by then the most complete remains of a dome-headed dinosaur. It was discovered in the Belly River Group by the American palaeontologist George F. Sternberg in 1926, and catalogued as specimen UALVP 2 in the University of Alberta Laboratory for Vertebrate Palaeontology. This find confirmed Hatcher's interpretation of the domes as consisting of the frontoparietal area of the skull. UALVP 2 was found with small, disarticulated bony elements, then thought to be gastralia (abdominal ribs), which are not known in other ornithischian dinosaurs (one of the two main groups of dinosaurs). Gilmore pointed out that the teeth of S. validus were very similar to those of the species Troodon formosus (named in 1856 and by then only known from isolated teeth), and described a skull dome discovered close to the locality where Troodon was found. Therefore, Gilmore considered Stegoceras an invalid junior synonym of Troodon, thereby renaming S. validus into T. validus, and suggested that even the two species might be the same. Furthermore, he found S. brevis to be identical to S. validus, and therefore a junior synonym of the latter. He also placed these species in the new family Troodontidae (since Lambe had not selected a type genus for his Psalisauridae), which he considered closest to the ornithopod dinosaurs. Because the skull seemed so specialized compared to the rather "primitive"-looking skeleton, Nopcsa doubted whether these parts actually belonged together, and suggested the skull belonged to a nodosaur, the skeleton to an ornithopod, and the supposed gastralia (belly ribs) to a fish. This claim was rebutted by Gilmore and Loris S. Russell in the 1930s. Gilmore's classification was supported by the American palaeontologists Barnum Brown and Erich Maren Schlaikjer in their 1943 review of the dome-headed dinosaurs, by then known from 46 skulls. From these specimens, Brown and Schlaikjer named the new species T. 
sternbergi and T. edmontonensis (both from Alberta), as well as moving the large species T. wyomingensis (which was named in 1931) to the new genus Pachycephalosaurus, along with two other species. They found T. validus distinct from T. formosus, but considered S. brevis the female form of T. validus, and therefore a junior synonym. By this time, the dome-headed dinosaurs were either considered relatives of ornithopods or of ankylosaurs. In 1945, after examining casts of T. formosus and S. validus teeth, the American palaeontologist Charles M. Sternberg demonstrated differences between the two, and instead suggested that Troodon was a theropod dinosaur, and that the dome-headed dinosaurs should be placed in their own family. Though Stegoceras was the first member of this family to be named, Sternberg named the group Pachycephalosauridae after the second genus, as he found that name (meaning "thick head lizard") more descriptive. He also considered T. sternbergi and T. edmontonensis members of Stegoceras, found S. brevis valid, and named a new species, S. lambei, based on a specimen formerly referred to S. validus. The split from Troodon was supported by Russell in 1948, who described a theropod dentary with teeth almost identical to those of T. formosus. In 1953, Birger Bohlin named Troodon bexelli based on a parietal bone from China. In 1964, Oskar Kuhn considered this as an unequivocal species of Stegoceras; S. bexelli. In 1974, the Polish palaeontologists Teresa Maryańska and Halszka Osmólska concluded that the "gastralia" of Stegoceras were ossified tendons, after identifying such structures in the tail of the pachycephalosaur Homalocephale. In 1979, William Patrick Wall and Peter Galton named the new species Stegoceras browni, based on a flattened dome, formerly described as a female S. validus by Galton in 1971. The specific name honours Barnum Brown, who found the holotype specimen (specimen AMNH 5450 in the American Museum of Natural History) in Alberta. In 1983, Galton and Hans-Dieter Sues moved S. browni to its own genus, Ornatotholus (ornatus is Latin for "adorned" and tholus for "dome"), and considered it the first known American member of a group of "flat-headed" pachycephalosaurs, previously known from Asia. In a 1987 review of the pachycephalosaurs, Sues and Galton emended the specific name validus to validum, which has subsequently been used in the scientific literature. These authors synonymized S. brevis, S. sternbergi, and S. lambei with S. validum, found that S. bexelli differed from Stegoceras in several features, and considered it an indeterminate pachycephalosaur. In 1998, Goodwin and colleagues considered Ornatotholus a juvenile S. validum, therefore a junior synonym. 21st century developments In 2000, Robert M. Sullivan referred S. edmontonensis and S. brevis to the genus Prenocephale (until then only known from the Mongolian species P. prenes), and found it more likely that S. bexelli belonged to Prenocephale than to Stegoceras, but considered it a nomen dubium (dubious name, without distinguishing characters) due to its incompleteness, and noted its holotype specimen appeared to be lost. In 2003, Thomas E. Williamson and Thomas Carr considered Ornatotholus a nomen dubium, or perhaps a juvenile Stegoceras. In a 2003 revision of Stegoceras, Sullivan agreed that Ornatotholus was a junior synonym of Stegoceras, moved S. lambei to the new genus Colepiocephale, and S. sternbergi to Hanssuesia. 
He stated that the genus Stegoceras had become a wastebasket taxon for small to medium-sized North American pachycephalosaurs until that point. By this time, dozens of specimens had been referred to S. validum, including many domes too incomplete to be identified as Stegoceras with certainty. UALVP 2 is still the most complete specimen of Stegoceras, upon which most scientific understanding of the genus is based. S. brevis was moved to the new genus Foraminacephale in 2016 by Ryan K. Schott and David C. Evans, and S. bexelli to Sinocephale in 2021 by Evans and colleagues. In 2023, Aaron D. Dyer and colleagues analysed sutures and individual elements in the skulls of the pachycephalosaurs Gravitholus and Hanssuesia, and found no significant distinction between them and Stegoceras validum. They considered both as junior synonyms, with Gravitholus representing the end-stage in the growth of Stegoceras. In 2002, Williamson and Carr described a dome (specimen NMMNH P-33983 in the New Mexico Museum of Natural History and Science) from the San Juan Basin, New Mexico, which they considered a juvenile pachycephalosaur of uncertain species (though perhaps Sphaerotholus goodwini). In 2006, Sullivan and Spencer G. Lucas considered it a juvenile S. validum, which would expand the range of the species considerably. In 2011, Steven E. Jasinski and Sullivan considered the specimen an adult, and made it the holotype of the new species Stegoceras novomexicanum, with two other specimens (SMP VP-2555 and SMP VP-2790) as paratypes. A 2011 phylogenetic analysis by Watabe and colleagues did not place the two Stegoceras species close to each other. In 2016, Williamson and Stephen L. Brusatte restudied the holotype of S. novomexicanum and found that the paratypes did not belong to the same taxon as the holotype, and that all the involved specimens were juveniles. Furthermore, they were unable to determine whether the holotype specimen represented the distinct species S. novomexicanum, or whether it was a juvenile of either S. validum or Sphaerotholus goodwini, or of another previously known pachycephalosaur. In 2016, Jasinski and Sullivan defended the validity of S. novomexicanum; they agreed that some features used to diagnose the species were indicative of a sub-adult stage, but presented additional diagnostic features in the holotype that distinguish the species. They also pointed out some adult features, which may indicate heterochrony (a difference in the timing of ontogenetic changes between related taxa) in the species. They conceded that the paratypes and other assigned specimens differed from the holotype in having more highly domed skulls, instead referring to them as cf. S. novomexicanum (indicating uncertain identification), but found it likely they all belonged to the same taxon (with the assigned specimens being adults), due to the restricted stratigraphic interval and geographic range. Dyer and colleagues found that the S. novomexicanum holotype could be an immature Sphaerotholus goodwini, because the proposed unique traits of S. novomexicanum disappeared through ontogeny in S. validum. In 2024, a specimen of Stegoceras from the Aguja Formation was described and assigned to Stegoceras based on morphometric analyses. It was a juvenile, very comparable to juveniles of S. validum, but different in some aspects. The describers considered it a possible representative of a new southern species of Stegoceras, but not S. 
novomexicanum, since the study concluded the latter species was very dissimilar from other Stegoceras specimens and therefore probably not referable to Stegoceras. The description also included the holotype of the dubious species Texacephale langstoni in its morphometric analysis, where it was likewise found to be very similar to S. validum, though not similar enough for the authors of the study to refer it outright to that species. Nevertheless, the authors considered that the holotype of Texacephale was probably an adult specimen of the genus Stegoceras.
Description
Stegoceras is one of the most completely known North American pachycephalosaurs, and one of the few known from postcranial remains; S. validum specimen UALVP 2 is the most complete Stegoceras individual known to date. Its length is estimated to have been about , comparable to the size of a goat. The weight has been estimated to be about . Stegoceras was small to medium in size compared to other pachycephalosaurs. S. novomexicanum appears to have been smaller than S. validum, but it is disputed whether the known specimens (incomplete skulls) are adults or juveniles.
Skull and dentition
The skull of Stegoceras was roughly triangular in shape when viewed from the side, with a relatively short snout. The frontal and parietal bones were very thick and formed an elevated dome. The suture between these two elements was obliterated (only faintly visible in some specimens), and they are collectively termed the "frontoparietal". The frontoparietal dome was broad and had a relatively smooth surface, with only the sides being rugose (wrinkled). It was narrowed above and between the orbits (eye sockets). The frontoparietal narrowed at the back, was wedged between the squamosal bones, and ended in a depression above the at the back of the skull. The parietal and squamosal bones formed a thick shelf over the occiput termed the parietosquamosal shelf, whose extent varied between specimens. The squamosal was large, not part of the dome, and the back part was swollen. It was ornamented by irregularly spaced tubercles (or round outgrowths), and a row of nodes (knobs) extended along its upper edges, ending in a pointed tubercle (or small horn) on each side at the back of the skull. An inner row of smaller tubercles ran parallel with the larger one. Except for the upper surface of the dome, much of the skull was ornamented with nodes, many arranged in rows. The large orbit was shaped like an imperfect ellipse (with the longest axis from front to back), and faced to the side and slightly forward. The infratemporal fenestra (opening) behind the eye was narrow and sloped backwards, and the supratemporal fenestra on the top back of the skull was very reduced in size, due to the thickening of the frontoparietal. The basicranium (floor of the braincase) was shortened and distanced from the regions below the orbits and around the palate. The occiput sloped backwards and down, and the occipital condyle was deflected in the same direction. The lacrimal bone formed the lower front margin of the orbit, and its surface had rows of node-like ornamentation. The prefrontal and palpebral bones were fused and formed a thick ridge above the orbit. The relatively large jugal bone formed the lower margin of the orbit, extending far forwards and down towards the jaw joint. It was ornamented with ridges and nodes in a radiating arrangement. The nasal openings were large and faced frontwards. The nasal bone was thick, heavily sculpted, and had a convex profile.
It formed a boss (shield) on the middle top of the skull together with the frontal bone. The lower front of the premaxilla (front bone of the upper jaw) was rugose and thickened. A small foramen (hole) was present in the suture between the premaxillae, leading into the nasal cavity, and possibly connected to the Jacobson's organ (an olfactory sense organ). The maxilla was short and deep, and probably contained a sinus. The maxilla had a series of foramina that corresponded with each tooth position there, and these functioned as passages for erupting replacement teeth. The mandible articulated with the skull below the back of the orbit. The tooth-bearing part of the lower jaw was long, with the part behind being rather short. Though not preserved, the presence of a predentary bone is indicated by facets at the front of the lower jaw. Like other pachycephalosaurs, it would have had a small beak. Stegoceras had teeth that were heterodont (differentiated) and thecodont (placed in sockets). It had marginal rows of relatively small teeth, and the rows did not form a straight cutting edge. The teeth were set obliquely along the length of the jaws, and overlapped each other slightly from front to back. On each side, the most complete specimen (UALVP 2) had three teeth in the premaxilla, sixteen in the maxilla (both part of the upper jaw), and seventeen in the dentary of the lower jaw. The teeth in the premaxilla were separated from those behind in the maxilla by a short diastema (space), and the two rows in the premaxilla were separated by a toothless gap at the front. The teeth in the front part of the upper jaw (premaxilla) and front lower jaw were similar; these had taller, more pointed and recurved crowns, and a "heel" at the back. The front teeth in the lower jaw were larger than those of the upper jaw. The front edges of the crowns bore eight denticles (serrations), and the back edge bore nine to eleven. The teeth in the back of the upper (maxilla) and lower jaw were triangular in side view and compressed in front view. They had long roots that were oval in section, and the crowns had a marked cingulum at their bases. The denticles here were compressed and directed towards the top of the crowns. Both the outer and inner side of the tooth crowns bore enamel, and both sides were divided vertically by a ridge. Each edge had about seven or eight denticles, with the front edge usually having the most. The skull of Stegoceras can be distinguished from those of other pachycephalosaurs by features such as its pronounced parietosquamosal shelf (though this became smaller with age), the "incipient" doming of its frontoparietal (though the doming increased with age), its inflated nasal bones, its ornamentation of tubercles on the sides and back of the squamosal bones, rows of up to six tubercles on the upper side of each squamosal, and up to two nodes on the backwards projection of the parietal. It is also distinct in its lack of nasal ornamentation, and in having a reduced diastema. The skull of S. novomexicanum can be distinguished from that of S. validum in features such as the backwards extension of the parietal bone being more reduced and triangular, having larger supratemporal fenestrae (though this may be due to the possible juvenile status of the specimens), and having roughly parallel suture contacts between the squamosal and parietal. It also appears to have had a smaller frontal boss than S. validum, and seems to have been more gracile overall.
Postcranial skeleton
The vertebral column of Stegoceras is incompletely known. The articulation between the zygapophyses (articular processes) of successive dorsal (back) vertebrae appears to have prevented sideways movement of the vertebral column, which made it very rigid, and it was further strengthened by ossified tendons. Though the neck vertebrae are not known, the downturned occipital condyle (which articulates with the first neck vertebra) indicates that the neck was held in a curved posture, like the "S"- or "U"-shape of most dinosaur necks. Based on their position in Homalocephale, the ossified tendons found with UALVP 2 would have formed an intricate "basket" in the tail, consisting of parallel rows, with the extremities of each tendon contacting the next successively. Such structures are called myorhabdoi, and are otherwise only known in teleost fish; the feature is unique to pachycephalosaurs among tetrapod (four-limbed) animals, and may have functioned in stiffening the tail. The scapula (shoulder blade) was longer than the humerus (upper arm bone); its blade was slender and narrow, and slightly twisted, following the contour of the ribs. The scapula did not expand at the upper end but was strongly expanded at the base. The coracoid was mainly thin and plate-like. The humerus had a slender shaft, was slightly twisted along its length, and was slightly bowed. The deltopectoral crest (where the deltoid and pectoral muscles attached) was weakly developed. The ends of the ulna were expanded, and ridges extended along the shaft. The radius was more robust than the ulna, which is unusual. When seen from above, the pelvic girdle was very broad for a bipedal archosaur, and became wider towards the hind part. The broadness of the pelvic region may have accommodated a rear extension of the gut. The ilium was elongated and the ischium was long and slender. Though the pubis is not known, it was probably reduced in size like that of Homalocephale. The femur (thigh bone) was slender and inwards curved, the tibia was slender and twisted, and the fibula was slender and wide at the upper end. The metatarsus of the foot appears to have been narrow, and the single known ungual (claw bone) of a toe was slender and slightly curved. Though the limbs of Stegoceras are not completely known, they were most likely like other pachycephalosaurs in having five-fingered hands and four toes.
Classification
During the 1970s, more pachycephalosaur genera were described from Asian fossils, which provided more information about the group. In 1974, Maryańska and Osmólska concluded that pachycephalosaurs are distinct enough to warrant their own suborder within Ornithischia, Pachycephalosauria. In 1978, the Chinese palaeontologist Dong Zhiming split Pachycephalosauria into two families: the dome-headed Pachycephalosauridae (including Stegoceras) and the flat-headed Homalocephalidae (originally spelled Homalocephaleridae). Wall and Galton did not find suborder status for the pachycephalosaurs justified in 1979. By the 1980s, the affinities of the pachycephalosaurs within Ornithischia were unresolved. The main competing views were that the group was closest to either ornithopods or ceratopsians, the latter view due to similarities between the skeleton of Stegoceras and the "primitive" ceratopsian Protoceratops. In 1986, American palaeontologist Paul Sereno supported the relationship between pachycephalosaurs and ceratopsians, and united them in the group Marginocephalia, based on similar cranial features, such as the "shelf"-structure above the occiput.
He conceded that the evidence for this grouping was not overwhelming, but the validity of the group was supported by Sues and Galton in 1987. By the early 21st century, few pachycephalosaur genera were known from postcranial remains, and many taxa were only known from domes, which made classification within the group difficult. Pachycephalosaurs are thus mainly defined by cranial features, such as the flat to domed frontoparietal, the broad and flattened bar along the postorbital and squamosal bones, and the squamosal bones being deep plates on the occiput. In 1986, Sereno had divided the pachycephalosaurs into different groups based on the extent of the doming of their skulls (grouped in now invalid taxa such as "Tholocephalidae" and "Domocephalinae"), and in 2000 he considered the "partially" domed Stegoceras a transition between the supposedly "primitive" flat-headed and advanced "fully" domed genera (such as Pachycephalosaurus). The dome-headed/flat-headed division of the pachycephalosaurs was abandoned in the following years, as flat heads were considered paedomorphic (juvenile-like) or derived traits in most revisions, but not a sexually dimorphic trait. In 2006, Sullivan argued against the idea that the extent of doming was useful in determining taxonomic affinities between pachycephalosaurs. In 2003, Sullivan found Stegoceras itself to be more basal (or "primitive") than the "fully-domed" members of the subfamily Pachycephalosaurinae, elaborating on conclusions reached by Sereno in 1986. A 2013 phylogenetic analysis by Evans and colleagues found that some flat-headed pachycephalosaur genera were more closely related to "fully" domed taxa than to the "incompletely" domed Stegoceras, which suggests they represent juveniles of domed taxa, and that flat heads do not indicate taxonomic affinities. The cladogram below shows the placement of Stegoceras within Pachycephalosauridae according to Schott and colleagues, 2016: The biogeography and early evolutionary history of pachycephalosaurs is poorly understood, and can only be clarified by new discoveries. Pachycephalosaurs appear abruptly in the fossil record, and are present in both North America and Asia, so it is unknown when they first originated, and from which direction they dispersed. The oldest known members of the group (such as Acrotholus) are "fully domed" and known from the Santonian stage of the Late Cretaceous period (about 84 million years ago). This is before the supposedly more primitive Stegoceras from the Middle Campanian (77 million years ago) and Homalocephale from the Early Maastrichtian (70 million years ago), so the doming of the skull may be a homoplastic trait (a form of convergent evolution). The late occurrence of pachycephalosaurs compared to the related ceratopsians indicates a long ghost lineage (inferred, but missing from the fossil record) spanning 66 million years, from the Late Jurassic to the Cretaceous. Since pachycephalosaurs were mainly small, this may be due to taphonomic bias; smaller animals are less likely to be preserved through fossilisation. More delicate bones are also less likely to be preserved, which is why pachycephalosaurs are mainly known from their robust skulls. Palaeobiology Feeding mechanics It is uncertain what pachycephalosaurs ate; having very small, ridged teeth they could not have chewed tough, fibrous plants as effectively as other dinosaurs of the same period. 
It is assumed that their sharp, serrated teeth were ideally suited for a mixed diet of leaves, seeds, fruit and insects. Stegoceras may have had an entirely herbivorous diet, as the tooth crowns were similar to those of iguanid lizards. The premaxillary teeth show wear facets from contact with the predentary bone, and the maxillary teeth have double wear facets similar to those seen in other ornithischian dinosaurs. Every third maxillary tooth of UALVP 2 is an erupting replacement tooth, and tooth replacement happened in backwards progression in sequential threes. The occipital region of Stegoceras was well demarcated for muscle attachment, and it is believed that the jaw movement of Stegoceras and other pachycephalosaurs was mostly limited to up-and-down motions, with only a slight capability for jaw rotation. This is based on the structure of the jaw; dental microwear and wear facets of the teeth indicate that the bite force was used more for shearing than for crushing. In 2021, the Canadian palaeontologist Michael N. Hudgins and colleagues examined the teeth of Stegoceras and Thescelosaurus and found that while both had heterodont teeth, they could be statistically distinguished from each other. Due to its broad rostrum and more uniform teeth, Stegoceras was an indiscriminate bulk-feeder that cropped large amounts of vegetation, while the teeth and narrow rostrum of Thescelosaurus indicate it was a selective feeder. Pachycephalosaurs and thescelosaurids occur in the same North American formations, and it appears that their coexistence was made possible by their occupying different ecomorphospaces (though Stegoceras and Thescelosaurus themselves were not contemporaries).
Nasal passages
In 1989, Emily B. Giffin found that Stegoceras and other pachycephalosaurs had a good sense of smell (olfaction), based on the study of cranial endocasts that showed large olfactory bulbs in the brain. In 2014, Jason M. Bourke and colleagues found that Stegoceras would have needed cartilaginous nasal turbinates in the front of the nasal passages for airflow to reach the olfactory region. Evidence for the presence of this structure is a bony ridge to which it could have attached. The size of the olfactory region also indicates that Stegoceras had a keen sense of smell. The researchers found that the dinosaur could have had either a scroll-shaped turbinate (like in a turkey) or a branched one (as in an ostrich), as both could have directed air to the olfactory region. The blood vessel system in the passages also suggests that the turbinates served to cool down warm arterial blood from the body that was heading to the brain. The skull of S. validum specimen UALVP 2 was suited for a study of this kind due to its exceptional preservation; it has ossified soft tissue in the nasal cavity, which would otherwise be cartilaginous and therefore not preserved through mineralization.
Ontogenetic changes
Several explanations have historically been proposed for the variation seen in the skulls of Stegoceras and other pachycephalosaurs. Brown and Schlaikjer suggested that there was sexual dimorphism in the degree of doming, and hypothesized that flat-headed specimens such as AMNH 5450 (Ornatotholus) represented the female morph of Stegoceras. This idea was supported by a 1981 morphometric study by Chapman and colleagues, which found that males had larger and thicker domes.
After other flat-headed pachycephalosaurs were discovered, the degree of doming was proposed to be a feature with taxonomic importance, and AMNH 5450 was therefore considered a distinct taxon from 1979 onwards. In 1998, Goodwin and colleagues instead proposed that the inflation of the dome was an ontogenetic feature that changed with age, based on a histological study of an S. validum skull that showed the dome consisted of vascular, fast-growing bone, consistent with an increase in doming through age. These authors found that the supposedly distinct features of Ornatotholus could easily be the results of ontogeny. In 2003, Williamson and Carr published a hypothetical growth series of S. validum, showing Ornatotholus as the juvenile stage. They suggested that juveniles were characterized by a flat, thickened frontoparietal roof, with larger supratemporal fenestrae, and studded with closely spaced tubercles and nodes. The parietosquamosal shelf was not reduced in size, and the frontoparietal suture was open. Sub-adults had mound-like domes, with the back part of the parietal and skull-roof being flat. The supratemporal fenestrae showed asymmetry in size, and the closure of the frontoparietal suture was variable. The nodes were stretched or almost obliterated as the dome expanded during growth, with a tesserated surface remaining. The pattern was often obliterated at the highest point (apex) of the dome, the area where maximum expansion occurred. The tubercles on the skull were stretched in different directions, and those at the margin of the parietosquamosal shelf may have been hypertrophied (enlarged) tubercles. The back and sides of sub-adult and adult skulls were ornamented by less modified tubercles. Before being incorporated into the enlarging dome, the skull bones expanded, resulting in junctions between these bones. The adult dome was broad and convex, and incorporated most of the shelf, which was reduced in size and overhung the occiput as a thick "lip". The supratempooral fenestrae were closed, but the suture between the frontoparietal and connected skull bones was not always closed in adults and subadults. In 2011, Schott and colleagues made a more comprehensive analysis of cranial dome ontogeny in S. validum. The study found that the parietosquamosal shelf conserved the arrangement of ornamentation throughout growth, and that vascularity of the frontoparietal domes decreased with size. It also found that dome shape and size was strongly correlated with growth, and that growth was allometric (in contrast to isometric) from flat to domed, supporting Ornatotholus as a juvenile Stegoceras. They also hypothesized that this model of dome growth, with dramatic changes from juvenile to adult, was the common developmental trajectory of pachycephalosaurs. These researchers noted that though Williamson and Carr's observation that the supratemporal fenestrae closed with age was generally correct, there was still a high degree of individual variation in the size of these fenestrae, regardless of the size of the frontoparietal, and this feature may therefore have been independent of ontogeny. A 2012 study by Schott and Evans found that the number and shape of the individual nodes on the squamosal shelf of the examined S. validum skulls varied considerably, and that this variability does not seem to correlate with ontogenic changes, but was due to individual variation. These researchers found no correlation between the width of supratemporal fenestrae and the size of the squamosal. 
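The contrast between allometric and isometric growth invoked in these studies can be summarised by the standard power law used in morphometrics. The following is a generic sketch of that relationship, with placeholder symbols rather than the actual regression values reported by Schott and colleagues:
y = b x^k, or equivalently log y = log b + k log x,
where y is a dome dimension (such as frontoparietal thickness), x is a measure of overall skull size, b is a constant, and k is the allometric coefficient estimated by regression on logarithmic axes. Isometric growth corresponds to k = 1, meaning proportions stay constant as size increases, whereas the positive allometry inferred for the dome implies k > 1, so the dome thickens and inflates proportionally faster than the rest of the skull grows.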
Dome function The function of pachycephalosaur domes has been debated, and Stegoceras has been used as a model for experimentation in various studies. The dome has mainly been interpreted as a weapon used in intra-specific combat, a sexual display structure, or a means for species recognition. Combat The hypothesis that the domed skulls of Stegoceras and other pachycephalosaurs were used for butting heads was first suggested by American palaeontologist Edwin Colbert in 1955. In 1970 and 1971, Galton elaborated on this idea, and argued that if the dome was simply ornamental, it would have been less dense, and that the structure was ideal for resisting force. Galton suggested that when Stegoceras held its skull vertically, perpendicular to the neck, force would be transmitted from the skull, with little chance of it being dislocated, and the dome could therefore be used as a battering-ram. He believed it was unlikely to have been used mainly as defence against predators, because the dome itself lacked spikes, and those of the parietosquamosal shelf were in an "ineffective" position, but found it compatible with intra-specific competition. Galton imagined the domes were bashed together, while the vertebral column was held in a horizontal position. This could either be done while facing each other while dealing blows, or while charging each other with lowered heads (analogous to modern sheep and goats). He also noted that the rigidity of the back would have been useful when using the head for this purpose. In 1978, Sues agreed with Galton that the anatomy of pachycephalosaurs was consistent with transmitting dome-to-dome impact stress, based on tests with plexi-glass models. The impact would be absorbed through the neck and body, and neck ligaments and muscles would prevent injuries by glancing blows (as in modern bighorn sheep). Sues also suggested that the animals could have butted each other's flanks. In 1997, the American palaeontologist Kenneth Carpenter pointed out that the dorsal vertebrae from the back of the pachycephalosaur Homalocephale show that the back curved downwards just before the neck (which was not preserved), and unless the neck curved upwards, the head would point to the ground. He therefore inferred that the necks of Stegoceras and other pachycephalosaurs were held in a curved posture (as is the norm in dinosaurs), and that they would therefore not have been able to align their head, neck, and body horizontally straight, which would be needed to transmit stress. Their necks would have to be held below the level of the back, which would have risked damaging the spinal cord on impact. Modern bighorn sheep and bison overcome this problem by having strong ligaments from the neck to the tall neural spines over the shoulders (which absorb the force of impact), but such features are not known in pachycephalosaurs. These animals also absorb the force of impact through sinus chambers at the base of their horns, and their foreheads and horns form a broad contact surface, unlike the narrow surface of pachycephalosaur domes. Because the dome of Stegoceras was rounded, it would have given a very small area for potential impact, and the domes would have glanced off each other (unless the impact was perfectly centred). Combating pachycephalosaurs would have had difficulty seeing each other while their heads were lowered, due to the bony ridges above the eyes. 
Because of the problems he found with the head-butting hypothesis, Carpenter instead suggested the domes were adaptations for flank-butting (as seen in some large African mammals); he imagined that two animals would stand parallel, facing each other or the same direction, and direct blows to the side of the opponent. The relatively large body width of pachycephalosaurs may consequently have served to protect vital organs from harm during flank-butting. It is possible that Stegoceras and similar pachycephalosaurs would have delivered the blows with a movement of the neck from the side and a rotation of the head. The upper sides of the dome have the greatest surface area, and may have been the point of impact. The thickness of the dome would have increased the power behind a blow to the sides, and this would ensure that the opponent felt the force of the impact, without being seriously injured. The bone rim above the orbit may have protected the aggressor's eye when making a blow. Carpenter suggested that the pachycephalosaurs would have first engaged in threat display by bobbing and presenting their heads to show the size of their domes (intimidation), and thereafter delivered blows to each other, until one opponent signalled submission. In 2008, Eric Snively and Adam Cox tested the performance of 2D and 3D pachycephalosaur skulls through finite element analysis, and found that they could withstand considerable impact; greater vaulting of the domes allowed for higher forces of impact. They also considered it likely that pachycephalosaur domes were covered in keratin, a strong material that can withstand much energy without being permanently damaged (like the osteoderms of crocodilians), and therefore incorporated keratin into their test formula. In 2011, Snively and Jessica M. Theodor conducted a finite element analysis by simulating head-impacts with CT scanned skulls of S. validum (UALVP 2), Prenocephale prenes and several extant head-butting artiodactyls. They found that the correlations between head-striking and skull morphologies found in the living animals also existed in the studied pachycephalosaurs. Stegoceras and Prenocephale both had skull shapes similar to the bighorn sheep with cancellous bone protecting the brain. They also shared similarities in the distribution of compact and cancellous regions with the bighorn sheep, white-bellied duiker and the giraffe. The white-bellied duiker was found to be the closest morphological analogue to Stegoceras; this head-butting species has a dome which is smaller but similarly rounded. Stegoceras was better capable of dissipating force than artiodactyls that butt heads at high forces, but the less vascularized domes of older pachycephalosaurs, and possibly diminished ability to heal from injuries, argued against such combat in older individuals. The study also tested the effects of a keratinous covering of the dome, and found it to aid in performance. Though Stegoceras lacked the pneumatic sinuses that are found below the point of impact in the skulls of head-striking artiodactyls, it instead had vascular struts which could have similarly acted as braces, as well as conduits to feed the development of a keratin covering. In 2012, Caleb M. Brown and Anthony P. Russell suggested that the stiffened tails were probably not used as defence against flank-butting, but may have enabled the animals to take a tripodal stance during intra-specific combat, with the tail as support. 
Brown and Russell found that the tail could thereby help in resisting compressive, tensile, and torsional loading when the animal delivered or received blows with the dome. A 2013 study by Joseph E. Peterson and colleagues identified lesions in skulls of Stegoceras and other pachycephalosaurs, which were interpreted as infections caused by trauma. Lesions were found on 22% of sampled pachycephalosaur skulls (a frequency consistent across genera), but were absent from flat-headed specimens (which have been interpreted as juveniles or females), which is consistent with use in intra-specific combat (for territory or mates). The distribution of lesions in these animals tended to concentrate at the top of the dome, which supports head-butting behaviour. Flank-butting would probably result in fewer injuries, which would instead be concentrated on the sides of the dome. These observations were made while comparing the lesions with those on the skulls and flanks of modern sheep skeletons. The researchers noted that modern head-butting animals use their weapons for both combat and display, and that pachycephalosaurs could therefore also have used their domes for both. Displaying a weapon and willingness to use it can be enough to settle disputes in some animals. Bryan R. S. Moore and colleagues examined and reconstructed the limb musculature of Stegoceras in 3D in 2022, using the very complete UALVP 2 specimen as basis. They found that the musculature of the forelimbs was conservative, particularly compared to those of early bipedal saurischian dinosaurs, but the pelvic and hindlimb musculature was instead more derived (or "advanced"), due to peculiarities of the skeleton. These areas had large muscles, and combined with the wide pelvis and stout hind limbs (and possibly enlarged ligaments), this resulted in a strong, stable pelvic structure that would have helped during head-butting between individuals. Since the skull domes of pachycephalosaurs grew with positive allometry, and may have been used in combat, these researchers suggested it may have been the case for the hindlimb muscles as well, if they were used to propel the body forwards during head-butting. They cautioned that while UALVP 2 is very complete for a pachycephalosaur, their study was limited by it missing large portions of its vertebral column and outer limb elements. Other suggested functions In 1987, J. Keith Rigby and colleagues suggested that pachycephalosaur domes were heat-exchange organs used for thermoregulation, based on their internal "radiating structures" (trabeculae). This idea was supported by a few other writers in the mid-1990s. In 1998, Goodwin and colleagues considered the lack of sinuses in the skull of Stegoceras and the "honeycomb"-like network of vascular bone in the dome ill-suited for head-butting, and pointed out that the bones adjacent to the dome risked fracture during such contact. Building on the idea that the ossified tendons that stiffened the tails of Stegoceras and other pachycephalosaurs enabled them to take a tripodal stance (first suggested by Maryańska and Osmólska in 1974), Goodwin et al. suggested these structures could have protected the tail against flank-butting, or that the tail itself could have been used as a weapon. In 2004, Goodwin and colleagues studied the cranial histology of pachycephalosaurs, and found that the vascularity (including the trabeculae) of the domes decreased with age, which they found inconsistent with a function in either head-butting or heat-exchange. 
They also suggested that a dense layer of Sharpey's fibers near the surface of the dome indicated that it had an external covering in life, which makes it impossible to know the shape of the dome in a living animal. These researchers instead concluded that the domes were mainly for species recognition and communication (as in some African bovids) and that use in sexual display was only secondary. They further speculated that the external covering of the domes was brightly coloured in life, or may have changed colour seasonally. In 2011, American palaeontologists Kevin Padian and John R. Horner proposed that "bizarre structures" in dinosaurs in general (including domes, frills, horns, and crests) were primarily used for species recognition, and dismissed other explanations as unsupported by evidence. Among other studies, these authors cited Goodwin et al.'s 2004 paper on pachycephalosaur domes as support of this idea, and they pointed out that such structures did not appear to be sexually dimorphic. In a response to Padian and Horner the same year, Rob J. Knell and Scott D. Sampson argued that species recognition was not unlikely as a secondary function for "bizarre structures" in dinosaurs, but that sexual selection (used in display or combat to compete for mates) was a more likely explanation, due to the high cost of developing them, and because such structures appear to be highly variable within species. In 2013, the British palaeontologists David E. Hone and Darren Naish criticized the "species recognition hypothesis", and argued that no extant animals use such structures primarily for species recognition, and that Padian and Horner had ignored the possibility of mutual sexual selection (where both sexes are ornamented). In 2012, Schott and Evans suggested that the regularity in squamosal ornamentation throughout the ontogeny of Stegoceras was consistent with species recognition, but the change from flat to domed frontoparietals in late age suggests that the function of this feature changed through ontogeny, and was perhaps sexually selected, possibly for intra-specific combat. Dyer and colleagues found in 2023 that Stegoceras specimens differed in the thickness of the frontonasal boss, and that skulls with the most bone pathologies were those with the tallest bosses, which they considered indication that variation in boss thickness represents intersexual variation. In 2023, Horner and colleagues stated that since the dome and associated ornamentation of Stegoceras and the ornamentation of Pachycephalosaurus developed early in life, this indicates they were used for visual communication, so that juveniles could recognise other juveniles and adults other adults. They did not rule out that these features could have been used for other purposes, including head-butting, but did not consider trauma seen in specimens as evidence for this. They also suggested that features in some pachycephalosaurid skulls indicate the dome would have supported a greater, keratinous structure than just a cap. Palaeoenvironment S. validum is known from the late Late Cretaceous Belly River Group (the Canadian equivalent to the Judith River Group in the US), and specimens have been recovered from the Dinosaur Park Formation (late Campanian, 76.5 to 75 mya) in Dinosaur Provincial Park (including the lectotype specimen), and the Oldman Formation (middle Campanian, 77.5 to 76.5 mya) of Alberta, Canada. 
The pachycephalosaurs Hanssuesia (if not a synonym of Stegoceras) and Foraminacephale are also known from both formations. S. novomexicanum is known from the Fruitland (late Campanian, about 75 mya) and lower Kirtland Formation (late Campanian, about 74 mya) of New Mexico, and if this species correctly belongs in Stegoceras, the genus would have had a broad geographic distribution. The presence of similar pachycephalosaurs in both the west and north of North America during the latest Cretaceous shows that they were an important part of the dinosaur faunas there. It has traditionally been suggested that pachycephalosaurs inhabited mountain environments; wear of their skulls was supposedly a result of them having been rolled by water from upland areas, and comparisons with bighorn sheep reinforced the theory. In 2014, Jordan C. Mallon and Evans disputed this idea, as the wear and original locations of the skulls are not consistent with having been transported in such a way, and they instead proposed that North American pachycephalosaurs inhabited alluvial (associated with water) and coastal plain environments. The Dinosaur Park Formation is interpreted as a low-relief setting of rivers and floodplains that became more swampy and influenced by marine conditions over time as the Western Interior Seaway transgressed westward. The climate was warmer than present-day Alberta, without frost, but with wetter and drier seasons. Conifers were apparently the dominant canopy plants, with an understory of ferns, tree ferns, and angiosperms. Dinosaur Park is known for its diverse community of herbivores. As well as Stegoceras, the formation has yielded fossils of the ceratopsians Centrosaurus, Styracosaurus and Chasmosaurus, the hadrosaurids Prosaurolophus, Lambeosaurus, Gryposaurus, Corythosaurus, and Parasaurolophus, and the ankylosaurs Edmontonia and Euoplocephalus. Theropods present include the tyrannosaurids Gorgosaurus and Daspletosaurus. Other dinosaurs known from the Oldman Formation include the hadrosaur Brachylophosaurus, the ceratopsians Coronosaurus and Albertaceratops, ornithomimids, therizinosaurs and possibly ankylosaurs. Theropods included troodontids, oviraptorosaurs, the dromaeosaurid Saurornitholestes and possibly an albertosaurine tyrannosaur.
Psittacosaurus
Psittacosaurus ( ; "parrot lizard") is a genus of extinct ceratopsian dinosaur from the Early Cretaceous of what is now Asia, existing between 125 and 105 million years ago. It is notable for being the most species-rich non-avian dinosaur genus. Up to 12 species are known, from across China, Mongolia, Russia, and Thailand. The species of Psittacosaurus were obligate bipeds at adulthood, with a high skull and a robust beak. One individual was found preserved with long filaments on the tail, similar to those of Tianyulong. Psittacosaurus probably had complex behaviours, based on the proportions and relative size of the brain. It may have been active for short periods of time during the day and night, and had well-developed senses of smell and vision. Psittacosaurus was one of the earliest ceratopsians, but closer to Triceratops than Yinlong. Once in its own family, Psittacosauridae, with other genera like Hongshanosaurus, it is now considered to be senior synonym of the latter and an early offshoot of the branch that led to more derived forms. The genera closely related to Psittacosaurus are all from Asia, with the exception of Aquilops, from North America. The first species was either P. lujiatunensis or closely related, and it may have given rise to later forms of Psittacosaurus. Psittacosaurus is one of the most completely known dinosaur genera. Fossils of hundreds of individuals have been collected so far, including many complete skeletons. Most age classes are represented, from hatchling through to adult, which has allowed several detailed studies of Psittacosaurus growth rates and reproductive biology. The abundance of this dinosaur in the fossil record has led to the labelling of Lower Cretaceous sediments of east Asia the Psittacosaurus biochron. History of discovery In 1922, American paleontologist Henry Fairfield Osborn took part in the Third Asiatic Expedition of the American Museum of Natural History to discover fossils and geologic formations from the Cretaceous and Tertiary of Mongolia. In the Oshih Formation of the Artsa Bogdo Basin, Wong, the Mongolian chauffeur, discovered a nearly complete skull, jaws, and skeleton of a dinosaur, which was given the nickname of "Red Mesa skeleton". The location of discovery is also known as the Oshih locality of the Khukhtek Formation, of Early Cretaceous Aptian to Albian age. The specimen, catalogued as AMNH 6254, was described in 1924 by Osborn, only partially prepared, who gave it the name Psittacosaurus mongoliensis, describing its parrot-like beak on the suggestion of fellow American paleontologist William King Gregory. Osborn demonstrated the taxon was unique based on the short and deep snout, and the broad rear skull, as well as by lacking teeth in the premaxilla. In the same paper, Osborn also described another new taxon he considered similar to Psittacosaurus, Protiguanodon mongoliense, which was found in the same expedition but from the Ondai Sair Formation. The holotype of Protiguanodon, AMNH 6253, included a nearly complete skeleton found articulated, and partial remains of the skull. While Osborn considered Protiguanodon and Psittacosaurus separate based on the lack of horns on the jugal bones in Protiguanodon, a general dissimilarity in the skeletons, and wide geographic separation of the two specimens, Gregory suggested in correspondence that the Protiguanodon specimen could represent a juvenile of Psittacosaurus, based on similarities in size, the parietal bones, and the quadrate bones. 
Osborn created the new family Psittacosauridae for Psittacosaurus, which he considered possibly related to Ankylosauria, while he placed Protiguanodon within the family Iguanodontidae as the only member of the new subfamily Protiguanodontinae. Osborn published an additional description of the specimens of Protiguanodon and Psittacosaurus in 1924, citing his previous study as naming both to be members of Psittacosauridae, and considering the separate status of Protiguanodontinae as uncertain. Further preparation of the skeleton of AMNH 6254 showed significant similarities in the skeletons of Psittacosaurus and Protiguanodon, including the number of teeth, the number of pre-caudal vertebrae, and other details of the skull and skeleton. Osborn also referred the specimen AMNH 6261 from the Oshih Formation to Psittacosaurus, so the teeth of the two taxa could be compared. It was mentioned in 1932 by American paleontologist Roy Chapman Andrews that AMHN 6254 was the only good specimen that could be found at Oshih, with only one additional skull and jaws of an adult, and two hatchling skulls, having been found in a later revisit to the locality in 1923. Following the discovery of material of psittacosaurids in Haratologay in Inner Mongolia, Yang Zhongjian described two additional species in 1932. Known from a crushed skull and fragmentary lower jaw, Young named Psittacosaurus osborni, distinguished by its small size and lack of a sagittal crest on the parietal. The second species, P. tingi, was named for partial lower jaws and teeth, which Young only tentatively referred to Psittacosaurus instead of Protiguanodon. Both specimens, stored in the Institute of Vertebrate Paleontology and Paleoanthropology as IVPP RV31039 and IVPP RV31040 respectively, come from the Xinpongnaobao Formation. An additional tooth, partial hand, and fragments of vertebrae and limbs were found in the same locality, with the tooth being referred to Protiguanodon and the remainder of the material being uncertain. Additional Psittacosaurus material from possibly the same locality was described later in 1953 by Birger Bohlin, who considered the remains to likely belong to P. mongoliensis. The Soviet Expeditions into Mongolia from 1946 to 1949 uncovered more material of Psittacosaurus. In 1946 they discovered a new locality, Ulan Osh, where a disarticulated specimen of Psittacosaurus mongoliensis was found, and in 1948 they revisited the sites of the American expeditions and excavated fragmentary postcrania from Oshih and Ondai Sair. The material from these expeditions was taken to the Paleontological Institute of Moscow. Soviet excavations near Kemerovo in Siberia also discovered a partial skull and skeleton of multiple individuals referrable to Psittacosaurus. This material was described by Soviet paleontologist Anatoly Rozhdestvensky in 1955, who also proposed that Protiguanodon mongoliense, Psittacosaurus osborni, and Psittacosaurus tingi were junior synonyms of Psittacosaurus mongoliensis. In 1958, Yang published a paper on the dinosaurs of Laiyang, in which he described multiple discoveries of Psittacosaurus from a collection of localities of the Qingshan Formation. Of this material, the nearly complete skeleton and skull IVPP V738 was described as the type of the new species Psittacosaurus sinensis, which was found in a red layer northwest of Rongyang City in Shandong. Yang also assigned 11 other specimens to the taxon, considering it to be the most diverse Psittacosaurus species known at the time. 
It was distinguished from the other known species by a shorter and wider snout, and an overall smaller size at . Yang also revised the classifications of the other species of Psittacosaurus. Following similar conclusions to Rozhdestvensky, Yang considered Protiguanodon to be a junior synonym of Psittacosaurus, but retained the species as separate giving former Protiguanodon mongoliense the new species name Psittacosaurus protiguanodonensis, as otherwise both it and Psittacosaurus mongoliensis would have the same species name. Contrasting Rozhdestvensky, Yang retained the earlier Chinese species P. osborni and P. tingi as separate from P. mongoliensis, but not separate from each other, making P. tingi a junior synonym of P. osborni. Following his new breakdown of species, Yang described the distribution of the genus Psittacosaurus: P. sinensis was the only species known from Shandong; P. osborni and possibly P. mongoliensis were both known from Haratologay (also known as Tebch); P. mongoliensis and P. protiguanodonensis were both known form Oshih; and P. mongoliensis was possibly known from Kemerovo. Further discoveries in the Qingshan Formation of Laiyang in 1958 were described by Zhao Xijin in 1962, giving the new name Psittacosaurus youngi for the specimen BPV.149 in the Beijing Museum of Natural History. Known for a complete skull, partial vertebral series and partial pelvis, P. youngi was distinguished by Zhao by having the shortest skull of all species, vertebral and tooth counts, and various features of the skull and skeleton. P. youngi was considered to be most similar to P. sinensis, but separated them to bring the count of members of Psittacosauridae to one genus and five species. Many later expeditions by various combinations of Mongolian, Russian, Chinese, American, Polish, Japanese, and Canadian paleontologists also recovered specimens from throughout Mongolia and northern China. In these areas, Psittacosaurus mongoliensis fossils are found in most sedimentary strata dating to the Aptian to Albian stages of the Early Cretaceous Period, or approximately 125 to 100 mya. Fossil remains of over 75 individuals have been recovered, including nearly 20 complete skeletons with skulls. Individuals of all ages are known, from hatchlings less than long, to very old adults reaching nearly in length. In a 2010 review, Sereno again regarded P. osborni as a synonym of P. mongoliensis, but noted it was tentative because of the presence of multiple valid psittacosaur species in Inner Mongolia. Young also described the species P. tingi in the same 1931 report which contained P. osborni. It is based on several skull fragments. He later synonymised the two species under the name P. osborni. You and Dodson (2004) followed this in a table, but Sereno regarded both species as synonyms of P. mongoliensis; a table in the latter reported P. tingi as a nomen dubium, however. The front half of a skull from Guyang County in Inner Mongolia was described as Psittacosaurus guyangensis in 1983. Disarticulated postcranial remains representing multiple individuals were found at the same locality and were assigned to the species. While it differs from the type specimen of P. mongoliensis, it falls within the range of individual variation seen in other specimens of that species and is no longer recognised as a valid species. You and Dodson (2004) included P. guyangensis in a table of valid taxa, but did not include it as such in their text. 
Assigned species Seventeen species have been referred to the genus Psittacosaurus, although only nine to eleven are considered valid today. This is the highest number of valid species currently assigned to any single non-avian dinosaur. In contrast, most other dinosaur genera are monospecific, containing only a single known species. The difference is most likely due to artifacts of the fossilisation process. While Psittacosaurus is known from hundreds of fossil specimens, most other dinosaur species are known from far fewer, and many are represented by only a single specimen. With a very high sample size, the diversity of Psittacosaurus can be analysed more completely than that of most dinosaur genera, resulting in the recognition of more species. Most extant animal genera are represented by multiple species, suggesting that this may have been the case for extinct dinosaur genera as well, although most of these species may not have been preserved. In addition, most dinosaurs are known solely from bones and can only be evaluated from a morphological standpoint, whereas extant species often have very similar skeletal morphology but differ in other ways which would not normally be preserved in the fossil record, such as behaviour, or colouration. Therefore, actual species diversity may be much higher than currently recognised in this and other dinosaur genera. As some species are known only from skull material, species of Psittacosaurus are primarily distinguished by features of the skull and teeth. Several species can be recognised by features of the pelvis as well. P. sinensis In the 1950s, a new Chinese species of Psittacosaurus was found in the Aptian-Albian Qingshan Formation of Shandong Province, southeast of Beijing. C. C. Young called it P. sinensis to differentiate it from P. mongoliensis, which had originally been found in Mongolia. Fossils of more than twenty individuals have since been recovered, including several complete skulls and skeletons, making this the most well-known species after P. mongoliensis. Chinese paleontologist Zhao Xijin named a new species after his mentor, C. C. Young, in 1962. However, the type specimen of P. youngi (a partial skeleton and skull) was discovered in the same rocks as P. sinensis and appears to be very similar, so P. youngi is generally considered a junior synonym of that better-known species. As with P. guyangensis and P. osborni, You and Dodson (2004) listed it as valid in a table, but not in their text. P. xinjiangensis In 1988, Zhao and American paleontologist Paul Sereno described P. xinjiangensis, named after the Xinjiang Autonomous Region in which it was discovered. Several individuals of different ages were discovered in the early 1970s by Chinese paleontologists and described by Sereno and Zhao, although the holotype and most complete skeleton belonged to a juvenile. An adult skeleton was later discovered at a different locality in Xinjiang. These specimens come from the upper part of the Tugulu Group, which is regarded as Aptian-Albian in age. P. meileyingensis A second species described in 1988 by Sereno and Zhao, along with two Chinese colleagues, was P. meileyingensis from the Jiufotang Formation, near the town of Meileyingzi, Liaoning Province, northeastern China. This species is known from four fossil skulls, one associated with some skeletal material, found in 1973 by Chinese scientists. 
The age of the Jiufotang in Liaoning is unknown, but in the neighbouring province of Inner Mongolia, it has been dated to about 110 Ma, in the Albian stage of the Early Cretaceous. P. sattayaraki French paleontologist Eric Buffetaut and a Thai colleague, Varavudh Suteethorn, described a partial upper and lower jaw from the Aptian-Albian Khok Kruat Formation of Thailand in 1992, giving it the name P. sattayaraki. In 2000, Sereno questioned the validity of this species, citing its eroded and fragmentary nature, and noted an absence of features characteristic of the genus Psittacosaurus. However, in 2002 the original authors published new images of the fossil which seem to show teeth in the lower jaw that exhibit the bulbous vertical ridge characteristic of psittacosaurs. Other authors have also defended its validity, while some continue to regard it as dubious. Sereno (2010) proposed that the best assignment for the type material may be Ceratopsia incertae sedis. P. neimongoliensis and P. ordosensis? Two new species of Psittacosaurus were described by Canadian Dale Russell and Zhao in 1996. The first was named P. neimongoliensis, after the Mandarin Chinese name for Inner Mongolia. It is based on a nearly complete fossil skeleton, including most of the skull, found in the Early Cretaceous Ejinhoro Formation with seven other individuals. Russell and Zhao also named P. ordosensis in 1996, after the Ordos prefecture of the Inner Mongolia Autonomous Region. The type specimen is a nearly complete skeleton, including part of the skull. However, only the skull, lower jaw, and foot have been described. Three other specimens were referred to this species but remain undescribed. Like P. neimongoliensis, this species was discovered in the Eijnhoro Formation. Sereno (2010) found the species as described to be indistinguishable from P. sinensis, another small species, but suggested that additional study of P. ordosensis might reveal diagnostic features. He provisionally designated P. ordosensis a nomen dubium. P. mazongshanensis? Xu Xing, another Chinese paleontologist, named a new species of Psittacosaurus in 1997, based on a complete skull with associated vertebrae and a forelimb. This material was recovered in Gansu Province, near the border with Inner Mongolia. This species is named P. mazongshanensis after the nearby mountain called Mazongshan (Horse Mane Mountain) and has been described in a preliminary manner. Unfortunately, the skull was damaged while in the care of the Chinese Institute of Vertebrate Paleontology and Paleoanthropology (IVPP), and several fragments have been lost, including all of the teeth. The remains were found in the Lower Xinminbao Formation, which have not been precisely dated, although there is some evidence that they were deposited in the late Barremian through Aptian stages. Sereno suggested in 2000 that P. mazongshanensis was a nomen dubium, with no unique features that separate it from any other species of Psittacosaurus. However, more recent authors have noted that it can be distinguished by its proportionally long snout compared to other species of Psittacosaurus, as well as a prominent bony protuberance, pointing outwards and downwards, on the maxilla of the upper jaw. The maxillary protuberance is also now missing. Other features originally used to distinguish the species have been recognised as the results of the deformation of the skull after fossilisation. Sereno (2010) remained unconvinced of its validity. P. 
sibiricus Beginning in the 1950s, Russian paleontologists began excavating Psittacosaurus remains at a locality near Shestakovo village in Kemerovo Oblast in Western Siberia. Two other nearby localities were explored in the 1990s, one of which produced several complete skeletons. This species was named P. sibiricus in 2000 in a scientific paper written by five Russian paleontologists, but credit for the name is officially given to two of those authors, Alexei Voronkevich and Alexander Averianov. The remains were not completely described until 2006. Two nearly complete, articulated skeletons and a variety of disarticulated material from other individuals of all ages are known from the Ilek Formation of Siberia, which ranges from the Barremian to Aptian stages of the Early Cretaceous. Individuals of this species could grow up to 2.5 meters in length, making it one of the largest members of the genus. P. lujiatunensis P. lujiatunensis, named in 2006 by Chinese paleontologist Zhou Chang-Fu and three Chinese and Canadian colleagues, is one of the oldest-known species, based on four skulls from the lower beds of the Yixian Formation, near the village of Lujiatun. While this bed has been dated differently by different authors, from 128 Ma in the Barremian stage, to 125 Ma in the earliest Aptian, revised dating methods have shown them to be about 123 million years old. P. lujiatunensis was contemporaneous with another psittacosaurid species, Hongshanosaurus houi, which was found in the same beds. It is potentially synonymous with H. houi; Sereno (2010), who proposed that Hongshanosaurus is a synonym of Psittacosaurus, opted to leave P. lujiatunensis and H. houi separate species due to the inadequacies of the latter's type specimen. P. major One nearly complete skeleton of P. lujiatunensis from the same lower beds of the Yixian Formation had previously been classified in its own species, Psittacosaurus major, named for the large size of its skull by Sereno, Zhao and two colleagues in 2007. You and colleagues described an additional specimen and concurred that it was distinct from P. lujiatunensis. P. major was originally characterised by a proportionately large skull, which was 39% of the length of its torso, compared to 30% in P. mongoliensis, and other features. However, a 2013 study utilising morphometric analysis showed that the supposed differences between P. lujiatunensis and P. major were due to differences in preservation and crushing. The study concluded that both represented a single species. P. houi? A third species of Lujiatun psittacosaur, the first to be named, was described as Hongshanosaurus houi in 2003. The generic name Hongshanosaurus was derived from the Mandarin Chinese words 紅 (hóng: "red") and 山 (shān: "hill"), as well as the Greek word sauros ("lizard"). This name refers to the ancient Hongshan culture of northeastern China, who lived in the same general area in which the fossil skull of Hongshanosaurus was found. The type and only named species, H. houi, honours Hou Lianhai, a professor at the IVPP in Beijing, who curated the specimen. Genus and species were both named by Chinese paleontologists You Hailu, Xu Xing, and Wang Xiaolin in 2003. Sereno (2010) regarded its distinct proportions as due to crushing and compression of the Hongshanosaurus skulls. He regarded Hongshanosaurus as a junior synonym of Psittacosaurus, and potentially the same as P. lujiatunensis. He did not synonymise the two species because of difficulties with the holotype skull of H. 
houi, instead considering new combination P. houi a nomen dubium within Psittacosaurus. Sereno's hypothesis was supported by a morphometric study in 2013, which found P. houi and P. lujiatunensis to be synonymous. While P. houi is the oldest available name, the researchers argued that because the type specimen of P. lujiatunensis was better preserved, the correct name for this species should be P. lujiatunensis rather than P. houi, which would normally have priority. P. gobiensis P. gobiensis is named for the region it was found in 2001, and first described by Sereno, Zhao and Lin in 2010. It is known from a skull and partial articulated skeleton with gastroliths. Many other specimens either cannot be determined to belong to any particular species, or have not yet been assigned to one. These specimens are generally all referred to as Psittacosaurus sp., although it is not assumed that they belong to the same species. More than 200 specimens of Psittacosaurus have been found in the Yixian Formation, which is famous for its fossils of feathered dinosaurs. The vast majority of these have not been assigned to any published species, although many are very well preserved and some have already been partially described. Nearly 100 Psittacosaurus skeletons were excavated in Mongolia during the summers of 2005 and 2006 by a team led by Mongolian paleontologist Bolortsetseg Minjin and American Jack Horner from the Museum of the Rockies in Montana. Although only P. mongoliensis has been described from Mongolia so far, these specimens are still in preparation and have not yet been assigned to a species. P. amitabha P. amitabha was named by Napoli et al. in 2019 from a complete skull and partial skeleton. recovered in the Barremian Andakhuduk Formation of Mongolia. It is named after Amitabha Buddha. Description The species of Psittacosaurus vary in size and specific features of the skull and skeleton, but share the same overall body shape. The best-known—P. mongoliensis—can reach 2 metres (6.5 ft) in length. The maximum adult body weight was most likely over 20 kilogrammes (44 lb) in P. mongoliensis. Several species approach P. mongoliensis in size (P. lujiatunensis, P. neimongoliensis, P. xinjiangensis), while others are somewhat smaller (P. sinensis, P. meileyingensis). The smallest known species, P. ordosensis, is 30% smaller than P. mongoliensis. The largest are P. lujiatunensis and P. sibiricus, although neither is significantly larger than P. mongoliensis. Psittacosaurus postcranial skeletons are more typical of a 'generic' bipedal ornithischian. There are only four digits on the manus ('hand'), as opposed to the five found in most other ornithischians (including all other ceratopsians), while the four-toed hindfoot is very similar to many other small ornithischians. The skull of Psittacosaurus is highly modified compared to other ornithischian dinosaurs of its time. Extremely tall in height and short in length, the skull has an almost round profile in some species. The portion in front of the orbit (eye socket) is only 40% of total skull length, shorter than any other known ornithischian. The lower jaws of psittacosaurs are characterised by a bulbous vertical ridge down the centre of each tooth. Both upper and lower jaws sport a pronounced beak, formed from the rostral and predentary bones, respectively. The bony core of the beak may have been sheathed in keratin to provide a sharp cutting surface for cropping plant material. 
As the generic name suggests, the short skull and beak superficially resemble those of modern parrots. Psittacosaurus skulls share several adaptations with more derived ceratopsians, such as the unique rostral bone at the tip of the upper jaw, and the flared jugal (cheek) bones. There is still no sign of the bony neck frill or prominent facial horns which would develop in later ceratopsians. Bony horns protrude from the skull of P. sibiricus, but these are thought to be an example of convergent evolution. Soft tissue and coloration The integument, or body covering, of Psittacosaurus is known from a Chinese specimen, SMF R 4970, which most likely comes from the Yixian Formation of Liaoning Province, China. The specimen, which is not yet assigned to any particular species, was likely illegally exported from China and was purchased in 2001 by the Senckenberg Museum in Germany. It was described while awaiting repatriation; previous repatriation attempts were unsuccessful. Most of the body was covered in scales. Larger scales were arranged in irregular patterns, with numerous smaller scales occupying the spaces between them, similarly to skin impressions known from other ceratopsians, such as Chasmosaurus. A series of what appear to be hollow, tubular bristle-like structures, approximately long, were also preserved, arranged in a row down the dorsal (upper) surface of the tail. These were confirmed by the authors, as well as an independent scientist, to not represent plant material. The bristle-like integumentary structures extend into the skin nearly to the vertebrae, and were likely circular or tubular before being preserved. Under ultraviolet light, they gave off the same fluorescence as scales, providing the possibility they were keratinized. The study stated that, "at present, there is no convincing evidence which shows these structures to be homologous to the structurally different integumentary filaments of theropod dinosaurs". However, they found that all other feather-like integument from the Yixian Formation could be identified as feathers. In 2008, another study was published describing the integument and dermis of Psittacosaurus sp., from a different specimen. The skin remains could be observed by a natural cross-section to compare them to modern animals, showing that dinosaurian dermal layers evolved in parallel to those in many other large vertebrates. The collagen tissue fibres in Psittacosaurus are complex, virtually identical to all other vertebrates in structure but having an exceptional thickness of about forty layers. As the sections of dermis were collected from the abdomen, where the scales were eroded, the tissue may have assisted with the musculature of the stomach and intestines and offered protection against predators. As described in a 2016 study, examination of melanosomes preserved in the specimen of Psittacosaurus preserved with integument indicated that the animal was countershaded, likely related to living in a dense forest habitat with little light, much like many modern species of forest-dwelling deer and antelope; stripes and spots on the limbs may represent disruptive coloration. The specimen also had dense clusters of pigment on its shoulders, face (possibly for display), and cloaca (which may have had an antimicrobial function, though this has been disputed), as well as large patagia on its hind legs that connected to the base of the tail. 
Its large eyes indicate that it also likely had good vision, which would have been useful in finding food or avoiding predators. The authors pointed out that there might have been variation in coloration across the range of the animal, depending on differences in the light environment. The authors were unable to determine which species of Jehol Formation Psittacosaurus the specimen belonged to due to the way the skull is preserved, but ruled out P. mongoliensis, based on hip features. Another 2016 study used laser-stimulated fluorescence imaging to analyze the internal structure of the bristles. The highly cornified bristles were arranged in tight clusters of three to six individual bristles, with each bristle being filled with pulp. The authors considered the bristles to be most similar to the quills of Tianyulong, and the sparsely distributed elongated broad filamentous feathers (EBFFs) of Beipiaosaurus. Similar, non-feather-derived bristles are found in a few extant birds such as the "horn" on the horned screamer and the "beards" of turkeys; these structures differ from feathers in that they are unbranched, heavily cornified and do not develop from a follicle, but instead arise from discrete cell populations that exhibit continuous growth. A 2016 study by Ji Qiang and colleagues, published in the Journal of Geology, concluded that these structures were actually highly modified scales, because their morphology and anatomy did not resemble those of feathers. A darkened soft-tissue structure was also found near the jugal horn; this may represent a keratinous sheath or a skin flap. A 2021 study of SMF R 4970 examined its cloaca, the first one known from a non-avian dinosaur. The individual is preserved lying obliquely, so the structure is seen more clearly on its right side. The cloaca of Psittacosaurus is comparable to those of crocodilians, with discrete lateral lips that converge anteriorly, giving the cloaca a V-shaped anatomy. It also shows resemblance to that of birds, with the dorsal lobe being homologous to the birds' cloacal protuberance. A 2022 study of SMF R 4970 identified it as an approximately 6–7 year old subadult by comparing its femoral length to that of similarly-aged specimens of P. lujiatunensis, and found that it preserves the first umbilicus (belly button) known from a non-avian dinosaur (the oldest known from an amniote). Because the specimen is close to sexual maturity, the umbilicus was probably retained throughout this individual's life, indicating that Psittacosaurus kept its umbilicus at least until sexual maturity. It is uncertain whether the umbilicus is present in mature or nearly mature individuals of all non-avian dinosaurs.
Species characteristics
Skulls of P. mongoliensis are flat on top, especially over the back of the skull, with a triangular depression, the antorbital fossa, on the outside surface of the maxilla (an upper jaw bone). A flange is present on the lower edge of the dentary (the tooth-bearing bone of the lower jaw), although it is not as prominent as in P. meileyingensis or P. major (=P. lujiatunensis). P. mongoliensis is among the largest known species. The skull of the type specimen, which is probably a juvenile, is 15.2 centimetres (6 in) long, and the associated femur is 16.2 centimetres (6.4 in) in length. Other specimens are larger, with the largest documented femur measuring about 21 centimetres (8.25 in) long.
P. sinensis is readily distinguished from all other species by numerous features of the skull.
Adult skulls are smaller than those of P. mongoliensis and have fewer teeth. Uniquely, the premaxillary bone contacts the jugal (cheek) bone on the outside of the skull. The jugals flare out sideways, forming 'horns' proportionally wider than in any other known Psittacosaurus species except P. sibiricus and P. lujiatunensis. Because of the flared cheeks, the skull is actually wider than it is long. A smaller 'horn' is present behind the eye, at the contact of the jugal and postorbital bones, a feature also seen in P. sibiricus. The mandible (lower jaw) lacks the hollow opening, or fenestra, seen in other species, and the entire lower jaw is bowed outwards, giving the animal the appearance of an underbite. The skull of an adult P. sinensis can reach 11.5 centimeters (4.5 in) in length.
P. sibiricus is the largest-known species of Psittacosaurus. The skull of the type specimen is 20.7 centimetres (8.25 in) long, and the femur is 22.3 cm (8.75 in) in length. It is also distinguished by its neck frill, which is longer than that of any other species, at 15 to 18% of skull length. A very striking feature of P. sibiricus is the number of 'horns' around the eyes, with three prominences on each postorbital, and one in front of each eye, on the palpebral bones. Similar horns found on the postorbital of P. sinensis are not as pronounced but may be homologous. The jugal has extremely prominent 'horns' and may contact the premaxilla, both features also seen in the possibly related P. sinensis. There is a flange on the dentary of the lower jaw, similar to P. mongoliensis, P. meileyingensis, and P. sattayaraki. It can be told apart from the other species of Psittacosaurus by a combination of 32 anatomical features, including six that are unique to the species. Most of these are skull details, but one unusual feature is the presence of 23 vertebrae between the skull and pelvis, unlike the 21 or 22 in the other species where the vertebrae are known.
P. xinjiangensis is distinguished by a prominent jugal 'horn' that is flattened on the front end, as well as some features of the teeth. The ilium, one of the three bones of the pelvis, also bears a characteristically long bony process behind the acetabulum (hip socket). An adult femur has a published length of about 16 centimetres (6.3 in). P. meileyingensis has the shortest snout and neck frill of any species, making the skull nearly circular in profile. The orbit (eye socket) is roughly triangular, and there is a prominent flange on the lower edge of the dentary, a feature also seen in specimens of P. lujiatunensis, and to a lesser degree in P. mongoliensis, P. sattayaraki, and P. sibiricus. The complete type skull, probably adult, is 13.7 centimetres (5.5 in) long. The dentary of P. sattayaraki has a flange similar to that found in P. mongoliensis, P. sibiricus, P. lujiatunensis and P. meileyingensis, although it is less pronounced than in those species. The material appears to be roughly the same size as P. sinensis. The frontal bone of P. neimongoliensis is distinctly narrow compared to that of other species, resulting in a narrower skull overall. The ischium bone of the pelvis is also longer than the femur, which differs from other species in which these bones are known. The type specimen has a skull length of 13.2 centimetres (5.2 in) and a femoral length of 13 centimetres (5.1 in), but is not fully grown. An adult P. neimongoliensis was probably smaller than P. mongoliensis, with a proportionately longer skull and tail. P.
ordosensis can be distinguished by numerous features of the jugals, which have very prominent 'horns'. It is also the smallest known species. One adult skull measures only 9.5 centimeters (3.75 in) in length. The type skull of P. lujiatunensis measures 19 cm (7.5 in) in length, while the largest-known skull is 20.5 centimetres (8 in) long, so this species was similar in size to P. mongoliensis and P. sibiricus. There is a fossa in front of the eye, as in P. mongoliensis. The jugal bones flare outwards widely, making the skull wider than it is long, as seen in P. sinensis. Widely flared jugals are also found in P. sibiricus. Overall, this species is thought to exhibit several primitive characteristics compared to other species of Psittacosaurus, which is consistent with its greater geological age. P. gobiensis was small-bodied ( long) and differs from other species of Psittacosaurus by "significant, but structurally minor, details." These include the presence of a pyramidal horn on the postorbital, a depression on the postorbital-jugal contact, and enamel thickness. P. mongoliensis was a contemporary. Classification Psittacosaurus is the type genus of the family Psittacosauridae, which was also named by Osborn in 1923. Psittacosaurids were basal to almost all known ceratopsians except Yinlong and perhaps the Chaoyangsauridae. While Psittacosauridae was an early branch of the ceratopsian family tree, Psittacosaurus itself was probably not directly ancestral to any other groups of ceratopsians. All other ceratopsians retained the fifth digit of the hand, a plesiomorphy or primitive trait, whereas all species of Psittacosaurus had only four digits on the hand. In addition, the antorbital fenestra, an opening in the skull between the eye socket and nostril, was lost during the evolution of Psittacosauridae, but is still found in most other ceratopsians and in fact most other archosaurs. It is considered highly unlikely that the fifth digit or antorbital fenestra would evolve a second time. In 2014, the describers of a new taxon of basal ceratopsian published a phylogenetic analysis encompassing Psittacosaurus. The below cladogram is from their analysis, placing the genus as one of the most primitive ceratopsians. The authors (Farke et al.) noted that all taxa outside of Leptoceratopsidae and Coronosauria with the exception of their genus Aquilops are from Asia, meaning the group likely originated there. Although many species of Psittacosaurus have been named, their relationships to each other have not yet been fully explored and no scientific consensus exists on the subject. Several phylogenetic analyses have been published, with the most detailed being those by Alexander Averianov and colleagues in 2006, Hai-Lu You and colleagues in 2008, and Paul Sereno in 2010. The middle one is shown below. In 2005, Zhou and colleagues suggested that P. lujiatunensis is basal to all other species. This would be consistent with its earlier appearance in the fossil record. Paleobiology The brain of P. lujiatunensis is well known; a study on the anatomy and functionality of three specimens was published in 2007. Until the study, it was generally thought the brain of Psittacosaurus would have been similar to other ceratopsians with low encephalization quotients. Russell and Zhao (1996) believed "the small brain size of psittacosaurs implies a very restrictive behavioural repertoire relative to that of modern mammals of similar body size". 
However, the 2007 study dispelled this theory when it found the brain to be more advanced. There is generally negative allometry for brain size with development in vertebrates, but this was shown not to be true in Psittacosaurus. The EQ score for P. lujiatunensis is 0.31, significantly higher than genera such as Triceratops. A higher EQ correlates with more complex behaviour, and various dinosaurs have high EQs, similar to birds, which range from 0.36 to 2.98. Thus, Psittacosaurus behaviour could have been as complex as that in Tyrannosaurus, whose EQ ranges from 0.30 to 0.38. Behaviours influenced by high EQs include nest-building, parental care, and bird-like sleeping, some of which have been shown to be present in Psittacosaurus. The senses of Psittacosaurus can be inferred from the endocast. Large olfactory bulbs are present, indicating the genus had an acute sense of smell. The size of these bulbs are comparable to large predatory theropods, although they likely evolved to avoid predators instead of to seek out prey. The sclerotic rings in reptiles directly show the size of the eyeball. The rings are not well preserved in Psittacosaurus, with one individual preserving them likely contracted postmortem, but if they are similar to those of Protoceratops, Psittacosaurus would have had large eyes and acute vision. The curvature of the semicircular canals is related to the agility of reptiles, and the large curved canals in Psittacosaurus show that the genus was much more agile than later ceratopsians. Comparisons between the scleral rings of Psittacosaurus and modern birds and reptiles suggest that it may have been cathemeral, active throughout the day and for short intervals at night. Ford and Martin (2010) proposed that Psittacosaurus was semi-aquatic, swimming with its tail like a crocodile, and paddling and kicking. They based their interpretation on evidence including: the lacustrine (lake) depositional setting of many specimens; the position of the nostrils and eyes; interpretations of the motions of the arms and legs; tails with long chevrons (and with the bristles on the tail interpreted as possibly skin-covered, forming a fin), providing a propulsive surface; and the presence of gastroliths, interpreted as ballast. They further suggested that some species of Psittacosaurus were more terrestrial than others. Diet Psittacosaurs had self-sharpening teeth that would have been useful for cropping and slicing tough plant material. Unlike later ceratopsians, they did not have teeth suitable for grinding or chewing their food. Instead, they used gastroliths—stones swallowed to wear down food as it passed through the digestive system. Sometimes numbering more than fifty, these stones are occasionally found in the abdominal cavities of psittacosaurs, and may have been stored in a gizzard, as in modern birds. Unlike many other dinosaurs, psittacosaurs had akinetic skulls: that is to say, the upper and lower jaws each behaved as a single unit, without internal joints. The only joint was the jaw joint itself, and psittacosaurs could slide their lower jaws forward and backward on the joint, permitting a shearing action. Unlike most ceratopsians, their beaks did not form curved tips, but were instead rounded and flattened. If the jaws were aligned, the beaks could be used to crop objects, but if the lower jaw was retracted so that the lower beak was inside the upper beak, the jaws may have served a nutcracking function. 
A nut- or seed-rich diet would also match well with the gastroliths often seen in well-preserved psittacosaur skeletons. Limb function Studies by Phil Senter in 2007 conducted on P. neimongoliensis and P. mongoliensis concluded that the forelimbs of these taxa (and likely those of other Psittacosaurus species) were too short (only about 58% as long as the hindlimbs) to reach the ground, and their range of motion indicates they could neither be pronated nor generate propulsive force for locomotion, suggesting that Psittacosaurus was entirely bipedal. The forelimbs were also too short to be used in digging or bringing food to the mouth, and Senter suggested that if Psittacosaurus found it necessary to dig depressions in the ground it may have used its hindlimbs instead. The forelimbs could be used for two-handed grasping of objects or scratching the body, but due to their extremely limited flexibility and reach, they could have only been used to grasp objects very close to the belly or sides of the animal and could have scratched only the belly, flank and knees. Even though the hands could not reach the mouth, Psittacosaurus could have still used them to carry nesting material or food to a desired location. However, Psittacosaurus may not have been entirely bipedal for its entire lifespan. Taking sections from the limb bones of 16 specimens of Psittacosaurus, ranging in age from less than a year old to ten-year-old adults, Qi Zhao from the University of Bristol found that Psittacosaurus was probably secondarily bipedal. The infants' front limbs grew at faster rates than the hind limbs at between hatching and three years of age. At the age of between four and six years, arm growth slowed and leg growth accelerated as the animal became mature. At this stage, Psittacosaurs would switch to a bipedal stance. These findings further reveal that the ancestor of Psittacosaurus was likely quadrupedal and eventually gained the ability to become bipedal as it evolved, with the young retaining the quadrupedal gait of the ancestor in question. These findings also lead to the hypothesis that many such dinosaur families may have evolved along this path at some point in their evolution. Growth rate Several juvenile Psittacosaurus have been found. The smallest is a P. mongoliensis hatchling conserved in the American Museum of Natural History (AMNH), which is only 11 to 13 centimetres (4–5 inches) long, with a skull in length. Another hatchling skull at the AMNH is only long. Both specimens are from Mongolia. Juveniles discovered in the Yixian Formation are approximately the same age as the larger AMNH specimen. A histological examination of P. mongoliensis has determined the growth rate of these animals. The smallest specimens in the study were estimated at three years old and less than , while the largest were nine years old and weighed almost . This indicates relatively rapid growth compared to most reptiles and marsupial mammals, but slower than modern birds and placental mammals. An age determination study performed on the fossilized remains of P. mongoliensis by using growth ring counts suggest that the longevity of the basal ceratopsian was 10 to 11 years. Gregarious juveniles The find of a herd of six Psittacosaurus individuals killed and buried by a volcanic mudflow indicates the presence of at least two age groups from two distinct clutches gathered together. 
This find has been taken as evidence for group fidelity and gregariousness extending beyond the nest; the earliest such evidence for any ceratopsian. Even very young psittacosaur teeth appear worn, indicating they chewed their own food and may have been precocial. Another juvenile-only cluster shows that specimens of different ages grouped together. These juveniles may have associated together as a close knit, mixed-age herd either for protection, to enhance their foraging, or as putative helpers at the parental nest. There is no evidence for parental care. In 2004, a specimen found in the Yixian Formation was claimed as evidence for parental care in dinosaurs. The specimen DNHM D2156 consists of 34 articulated juvenile Psittacosaurus skeletons, closely associated with the skull of an adult. The juveniles, all approximately the same age, are intertwined in a group underneath the adult, although all 34 skulls are positioned above the mass of bodies, as they would have been in life. This suggests that the animals were alive at the time of burial, which must have been extremely rapid, perhaps due to the collapse of a burrow. However, a 2013 paper pointed out that the adult specimen did not belong with the nest, its skull having no sedimentary connection to the main slab where the juveniles occurred, but had been glued onto it. This artificial association led to the inference that the skull belonged to an individual, possibly a "mother", that was providing parental care for the 34 juveniles—a claim that is unfounded. Furthermore, the adult was also shown to be six years old, whereas histological studies have shown P. mongoliensis was unable to breed until it reached ten years of age. It is also unlikely that a single female would have so many offspring at one time. A 2014 analysis of the same specimen supported the association and concluded that the proximity of the six-year-old specimen to the post-hatchlings may indicate post-hatchling cooperation, making the six-year-old specimen a possible caretaker. Pathology Out of the hundreds of known Psittacosaurus specimens, only one has been described to possess any sort of pathology. The specimen in question, consisting of a complete adult skeleton and tentatively assigned to P. mongoliensis, was found in the lower beds of the Yixian Formation. There is no sign of a bone fracture, but very clear signs of an infection can be seen near the midpoint of the right fibula. The bone exhibits a large round pit, evidence of necrosis due to a lack of blood supply to the region. The pit is surrounded by a massive amount of swelling along the lower third of the bone. This large amount of bone deposited around the injury indicates that the animal survived for quite a while despite the injury and subsequent infection. As psittacosaurids were bipedal animals, a similar injury to a weight bearing bone in the leg would most likely have been fatal. Unlike the femur and tibia, the fibula is not a weight-bearing bone, so this animal would still have been able to walk to some extent. The source of the injury remains unknown. Predation Another fossil from the Yixian Formation provides direct evidence of Psittacosaurus as a prey animal. One skeleton of Repenomamus robustus, a large triconodont mammal, is preserved with the remains of a juvenile Psittacosaurus in its abdominal cavity. Several of the juvenile's bones are still articulated, indicating that the carnivorous mammal swallowed its prey in large chunks. 
This specimen is notable in that it is the first-known example of Mesozoic mammals preying on live dinosaurs. Heavy predation on juvenile Psittacosaurus may have resulted in R-selection, the production of more numerous offspring to counteract this loss.
Paleochronology
Psittacosaurus is known from hundreds of individual specimens, of which over 75 have been assigned to the type species, P. mongoliensis. All Psittacosaurus fossils discovered so far have been found in Early Cretaceous sediments in Asia, from southern Siberia to northern China, and possibly as far south as Thailand. The most common age of geologic formations bearing Psittacosaurus fossils is from the late Barremian through Albian stages of the Early Cretaceous, or approximately 125 to 105 mya (million years ago). Many terrestrial sedimentary formations of this age in Mongolia and northern China have produced fossils of Psittacosaurus, leading to the definition of this time period in the region as the Psittacosaurus biochron. The earliest known species is P. lujiatunensis, found in the lowest beds of the Yixian Formation. Over 200 specimens attributed to this genus have been recovered from these and other beds of the Yixian, the age of which is the subject of much debate. Although many early studies using radiometric dating put the Yixian in the Jurassic Period, tens of millions of years outside of the expected temporal range of Psittacosaurus, most recent work dates it to the Early Cretaceous. Using argon–argon dating, a team of Chinese scientists dated the lowest beds in the formation to about 128 mya, and the highest to approximately 122 mya. A more recent Chinese study, using uranium–lead dating, suggests that the lower beds are younger, approximately 123.2 mya, while agreeing with an age of 122 mya for the upper beds.
Biology and health sciences
Ornithischians
Animals
2274270
https://en.wikipedia.org/wiki/Notoungulata
Notoungulata
Notoungulata is an extinct order of ungulates that inhabited South America from the early Paleocene to the end of the Pleistocene, living from approximately 61 million to 11,000 years ago. Notoungulates were morphologically diverse, with forms resembling animals as disparate as rabbits and rhinoceroses. Notoungulata are the largest group of South American native ungulates, with over 150 genera in 14 families having been described, divided into two major subgroupings, Typotheria and Toxodontia. Notoungulates first diversified during the Eocene. Their diversity declined from the late Neogene onwards, with only the large toxodontids persisting until the end of the Pleistocene (with Mixotoxodon expanding into Central America and southern North America), perishing as part of the Late Pleistocene megafauna extinctions along with most other large mammals across the Americas. Collagen sequence analysis suggests that notoungulates are closely related to litopterns, another group of South American ungulates, and their closest living relatives being perissodactyls (odd-toed ungulates), including rhinoceroses, tapirs and equines as part of the clade Panperissodactyla. However their relationships to other South American ungulates are uncertain. Several groups of notoungulates separately evolved ever-growing cheek teeth. Taxonomy Notoungulata is divided into two major suborders, Typotheria and Toxodontia, alongside some basal groups (Notostylopidae and Henricosborniidae) which are potentially paraphyletic. Notoungulates make up over half the described diversity of indigenous South American ungulates, with over 150 genera in 14 different families. This order is proposed to be united with other South American native ungulates in the super-order Meridiungulata. The notoungulate and litoptern native ungulates of South America have been shown by studies of collagen and mitochondrial DNA sequences to be a sister group to the perissodactyls, making them true ungulates. The estimated divergence date is 66 million years ago. This conflicts with the results of some morphological analyses which posited them as afrotherians. It is in line with some more recent morphological analyses which suggested they were basal euungulates. Panperissodactyla has been proposed as the name of an unranked clade to include perissodactyls and their extinct South American ungulate relatives. Cifelli has argued that Notioprogonia is paraphyletic, as it would include the ancestors of the remaining suborders. Similarly, Cifelli indicated that Typotheria would be paraphyletic if it excluded Hegetotheria and he advocated inclusion of Archaeohyracidae and Hegetotheriidae in Typotheria. Notoungulata were for many years taken to include the order Arctostylopida, whose fossils are found mainly in China. Recent studies, however, have concluded that Arctostylopida are more properly classified as gliriforms, and that the notoungulates were therefore never found outside South and Central America. Notoungulates are united by a number of morphological characters of the skull, particularly the inner ear and teeth. Based on an analysis of 133 morphological characters in 50 notoungulate genera, Billet in 2011 concluded that Homalodotheriidae, Leontiniidae, Toxodontidae, Interatheriidae, Mesotheriidae, and Hegetotheriidae are the only monophyletic families of notoungulates. Some studies have suggested that Pyrotheria, often ranked as an independent order, should also be included within Notoungulata. 
Phylogeny
Classification
Suborder Notioprogonia (probably paraphyletic)
 Family Henricosborniidae
 Family Notostylopidae
Suborder Toxodontia
 Family Isotemnidae
 Family Leontiniidae
 Family Notohippidae (paraphyletic)
 Family Toxodontidae
 Family Homalodotheriidae
Suborder Typotheria
 Family Archaeohyracidae
 Family Archaeopithecidae
 Family Campanorcidae
 Family Hegetotheriidae
 Family Interatheriidae
 Family Mesotheriidae
 Family Oldfieldthomasiidae
Ecology
Notoungulates varied widely in body size, with early diverging notoungulates like Simpsonotus, and some hegetotheriid and interatheriid typotherians having a body mass of approximately , while the toxodontid Toxodon is suggested to have had a body mass exceeding . Typotheres generally occupied small to medium body size niches, while toxodontians were generally medium to large sized animals. The families Interatheriidae, Hegetotheriidae, Mesotheriidae and Toxodontidae separately evolved high-crowned (hypsodont), ever-growing (hypselodont) cheek teeth, with high-crowned species constituting the majority of notoungulates from the Late Oligocene onward. This adaptation was historically suggested to be the result of a diet increasingly incorporating grass, but this has been questioned, with other authors suggesting that it may have been due to the increasing intake of abrasive particles from volcanic sources. Many typotheres have bodyforms convergent on rodents, hyraxes and rabbits, with some hegetotheriids suggested to have developed a rabbit-like bounding locomotion. The basal notoungulate Notostylops and the mesotheriids are suggested to have engaged in digging, with mesotheriids suggested to have had an ecology similar to wombats. Toxodontids have sometimes been compared to rhinoceroses and hippopotamuses in overall bodyform and tooth morphology. The Miocene toxodontian Homalodotherium had claws on its forelimbs and is thought to have had an ecology similar to the extinct chalicotheres, rearing on its hindlegs to feed. Like perissodactyls, notoungulates were likely primitively hindgut fermenters, but it has also been proposed that some of them may have had fermentation more similar to ruminants based on their skeletal anatomy, though this is uncertain.
Evolutionary history
The oldest notoungulates appeared during the Paleocene, probably originating from "condylarth" ancestors that had migrated from North America. Notoungulates and other South American native ungulates reached their apex of diversity during the Eocene and Oligocene. Notoungulate species diversity was stable during the Miocene, though 45% of the family diversity of the group became extinct during the interval, including Homalodotheriidae, Leontiniidae, and Interatheriidae. The diversity of the group declined during the Pliocene and Pleistocene, which is coeval with the Great American Interchange, which allowed ungulates and other mammals from North America to enter South America. This decline has historically been attributed to competition with the new North American arrivals, though earlier views had probably overstated the importance of this, with climatic change also likely being an important factor. As part of the Great American Interchange, the toxodontid Mixotoxodon migrated into Central and North America, with its furthest northern record being in Texas. The last hegetotheriids are known from the Early Pleistocene (with a supposed Middle Pleistocene record being considered questionable).
The youngest known member of Typotheria, the mesotheriid Mesotherium, has its last records in the late Middle Pleistocene, around 220,000 years ago. The last notoungulates, the toxodontids Toxodon, Mixotoxodon and Piauhytherium, became extinct at the end of the Late Pleistocene, around 12,000 years ago, as part of the Late Pleistocene megafauna extinctions, along with most other large mammals in the Americas. The extinction coincides with the arrival of the first humans in the Americas, and humans are suggested to have been a causal factor in it.
Biology and health sciences
Mammals: General
Animals
2274399
https://en.wikipedia.org/wiki/Pterodaustro
Pterodaustro
Pterodaustro (from Greek pteron, "wing", and Latin auster, "south wind") is a genus of ctenochasmatid pterodactyloid pterosaur from South America. Its fossil remains date back to the Early Cretaceous period, about 105 million years ago.
Discovery and naming
The first fossils, among them the holotype PVL 2571, a thigh bone, were discovered during the late 1960s by José Bonaparte in the Lagarcito Formation, situated in the San Luis Province of Argentina, and dating from the Albian. The genus was subsequently reported in Chile from the Quebrada La Carreta locality, in the Sierra de Candeleros, Segunda Región de Antofagasta, but this turned out to be erroneous; the fossils belong to another pterosaur, the dsungaripterid Domeykodactylus ceciliae. At the Argentine site, the just large "Loma del Pterodaustro", over 750 Pterodaustro specimens have since been collected during several expeditions, 288 of them having been catalogued by 2008. This makes the species one of the best known pterosaurs, with examples from all growth stages, from egg to adult. The genus was named in 1969 by José Bonaparte as a then-undescribed nomen nudum. The first description followed in 1970, making the name valid, the type species being Pterodaustro guiñazui. The generic name is derived from Greek pteron, "wing" and Latin auster, "south (wind)". The elements are combined as a condensed pteron de austro, "wing from the south". The specific name honors paleontologist Román Guiñazú. It was amended in 1978 by Peter Wellnhofer into guinazui, because diacritical signs such as the tilde are not allowed in specific names.
Description
Pterodaustro has a very elongated skull, up to long. The portion in front of the eye sockets comprises 85 percent of skull length. The long snout and lower jaws curve strongly upwards; the tangent at the point of the snout is perpendicular to that of the jaw joint. Pterodaustro has about a thousand bristle-like modified teeth in its lower jaws that might have been used to strain crustaceans, plankton, algae, and other small creatures from the water. These teeth stand for the most part not in separate alveoli but in two long grooves parallel to the edges of the jaw. They have a length of and are oval in cross-section, with a width of just . At first it was suspected these structures were not true teeth at all, but later research established they were built like normal teeth, including enamel, dentine and a pulp. Despite being made of very hard material, they might still have been flexible to some extent due to their extreme length-width ratio, a bend of up to 45 degrees being possible. The upper jaws also carried teeth, but these were very small, with a flat conical base and a spatula-shaped crown. These teeth also do not have separate tooth sockets but were apparently held by ligaments in a special tooth pad that was also covered with small ossicles, or bone plates. It appears that they were not replaced, unlike the teeth of most other reptiles. The back of the skull was also rather elongated and in a low position; there are some indications for a low parietal crest. Pterodaustro had a maximum adult wingspan of approximately and a maximum body mass of approximately . Its hindlimbs are rather robust and its feet large. Its tail is uniquely elongated for a pterodactyloid, containing twenty-two caudal vertebrae, whereas other members of this group have at most sixteen.
Paleobiology
Pterodaustro probably strained food with its tooth comb, a method called "filter feeding", also practised by modern flamingos.
Once it caught its food, Pterodaustro probably mashed it with the small, globular teeth present in its upper jaw. Like other ctenochasmatoids, Pterodaustro has a long torso and proportionally massive and splayed hindfeet, adaptations for swimming. A recent study suggested that its ankle facilitated movements required for wading behavior. Robert Bakker suggested that, like flamingos, this pterosaur's diet may have resulted in a pink hue. At least two specimens of Pterodaustro have been found, MIC V263 and MIC V243, with gizzard stones in the stomach cavity, the first ever reported for any pterosaur. These clusters of small stones with angled edges support the idea that Pterodaustro ate mainly small, hard-shelled aquatic crustaceans using filter-feeding. Such invertebrates are abundant in the sediment of the fossil site. A study of the growth stages of Pterodaustro concluded that juveniles grew relatively fast in their first two years, attaining about half of the adult size. Then they reached sexual maturity, growing at a slower rate for four to five years until there was a determinate growth stop. In 2004 a Pterodaustro embryo in an egg was reported, specimen MHIN-UNSL-GEO-V246. The egg was elongated, long and across, and its mainly flexible shell was covered with a thin layer of calcite, 0.3 millimeters thick. Three-dimensionally preserved eggs were reported in 2014. Comparisons between the scleral rings of Pterodaustro and modern birds and reptiles suggest that it may have been nocturnal and similar in activity patterns to modern anseriform birds that feed at night, although the methodology of this research has been questioned by some researchers. Because of its long torso and neck and comparatively short legs, Pterodaustro was unique among pterosaurs in having difficulty launching. Even with the pterosaurian quadrupedal launching mechanism, it would have required frantic and fairly low-angled take-offs possible only in open areas, much like modern geese and swans.
Phylogeny
Bonaparte in 1970 assigned Pterodaustro to the Pterodactylidae; in 1971, to a Pterodaustriidae. However, from 1996 cladistic studies by Alexander Kellner and David Unwin have shown a position within the family Ctenochasmatidae, together with other filter feeders. In 2018, a topology by Longrich, Martill and Andres recovered Pterodaustro within the family Ctenochasmatidae, more precisely within the tribe called Pterodaustrini, in a more basal position than Beipiaopterus and Gegepterus.
Biology and health sciences
Pterosaurs
Animals
17598541
https://en.wikipedia.org/wiki/Blueberry
Blueberry
Blueberries are a widely distributed and widespread group of perennial flowering plants with blue or purple berries. They are classified in the section Cyanococcus within the genus Vaccinium. Commercial blueberries—both wild (lowbush) and cultivated (highbush)—are all native to North America. The highbush varieties were introduced into Europe during the 1930s. Blueberries are usually prostrate shrubs that can vary in size from to in height. In commercial production of blueberries, the species with small, pea-size berries growing on low-level bushes are known as "lowbush blueberries" (synonymous with "wild"), while the species with larger berries growing on taller, cultivated bushes are known as "highbush blueberries". Canada is the leading producer of lowbush blueberries, while the United States produces some 40% of the world's supply of highbush blueberries. Description Many species of blueberries grow wild in North America, including Vaccinium myrtilloides, V. angustifolium and V. corymbosum, which grow on forest floors or near swamps. Wild blueberries reproduce by cross pollination, with each seed producing a plant with a different genetic composition, causing within the same species differences in growth, productivity, color, leaf characteristics, disease resistance, flavor, and other fruit characteristics. The mother plant develops underground stems called rhizomes, allowing the plant to form a network of rhizomes creating a large patch (called a clone) which is genetically distinct. Floral and leaf buds develop intermittently along the stems of the plant, with each floral bud giving rise to 5–6 flowers and the eventual fruit. Wild blueberries prefer an acidic soil between 4.2 and 5.2 pH and only moderate amounts of moisture. They have a hardy cold tolerance in their range in Canada and northern United States. Fruit productivity of lowbush blueberries varies by the degree of pollination, genetics of the clone, soil fertility, water availability, insect infestation, plant diseases and local growing conditions. Wild (lowbush) blueberries have an average mature weight of . Lowbush blueberries, sometimes called "wild blueberries", are generally not planted by farmers, but rather are managed on berry fields called "barrens". Cultivated highbush blueberries prefer sandy or loam soils, having shallow root systems that benefit from mulch and fertilizer. The leaves of highbush blueberries can be either deciduous or evergreen, ovate to lanceolate, and long and broad. The flowers are bell-shaped, white, pale pink or red, sometimes tinged greenish. The fruit is a berry in diameter with a flared crown at the end; they are pale greenish at first, then reddish-purple, and finally uniformly blue when ripe. They are covered in a protective coating of powdery epicuticular wax, colloquially known as the "bloom". They generally have a sweet taste when mature, with variable acidity. Blueberry bushes typically bear fruit in the middle of the growing season: fruiting times are affected by local conditions such as climate, altitude and latitude, so the time of harvest in the northern hemisphere can vary from May to August. Identification Commercially offered blueberries are usually from species that naturally occur only in eastern and north-central North America. Other sections in the genus are native to other parts of the world, including the Pacific Northwest and southern United States, South America, Europe and Asia. 
Other wild shrubs in many of these regions produce similar-looking edible berries, such as huckleberries and whortleberries (North America) and bilberries (Europe). These species are sometimes called "blueberries" and are sold as blueberry jam or other products. The names of blueberries in languages other than English often translate as "blueberry", e.g. Scots blaeberry and Norwegian blåbær. Blaeberry, blåbær and French myrtilles usually refer to the European native V. myrtillus (bilberry), while bleuets refers to the North American blueberry. Cyanococcus blueberries can be distinguished from the nearly identical-looking bilberries by their flesh color when cut in half. Ripe blueberries have light green flesh, while bilberries, whortleberries and huckleberries are red or purple throughout.
Species
Note: habitat and range summaries are from the Flora of New Brunswick, published in 1986 by Harold R. Hinds, and Plants of the Pacific Northwest coast, published in 1994 by Pojar and MacKinnon.
Vaccinium angustifolium (lowbush blueberry): acidic barrens, bogs and clearings, Manitoba to Labrador, south to Nova Scotia; and in the United States, from Maine westward to Iowa and southward to Virginia
Vaccinium boreale (northern blueberry): peaty barrens, Quebec and Labrador (rare in New Brunswick), south to New York and Massachusetts
Vaccinium caesariense (New Jersey blueberry)
Vaccinium corymbosum (northern highbush blueberry)
Vaccinium darrowii (evergreen blueberry)
Vaccinium elliottii (Elliott blueberry)
Vaccinium formosum (southern blueberry)
Vaccinium fuscatum (black highbush blueberry; syn. V. atrococcum)
Vaccinium hirsutum (hairy-fruited blueberry)
Vaccinium myrsinites (shiny blueberry)
Vaccinium myrtilloides (sour top, velvet leaf, or Canadian blueberry)
Vaccinium pallidum (dryland blueberry)
Vaccinium simulatum (upland highbush blueberry)
Vaccinium tenellum (southern blueberry)
Vaccinium virgatum (rabbiteye blueberry; syn. V. ashei)
Some other blue-fruited species of Vaccinium:
Vaccinium koreanum (Korean blueberry)
Vaccinium myrtillus (bilberry or European blueberry)
Vaccinium uliginosum (bog bilberry/blueberry, northern bilberry or western blueberry)
The lowbush varieties are V. angustifolium, V. boreale, V. myrtilloides, V. pallidum, and V. angustifolium × V. corymbosum. They are still grown in a similar manner to pre-Columbian semi-wild cultivation, i.e. slash and burn. The highbush varieties are V. darrowii and V. corymbosum. Rabbiteye (V. ashei/V. virgatum) is considered different from both high- and lowbush.
Distribution
Vaccinium has a mostly circumpolar distribution, with species mainly present in North America, Europe, and Asia. Many commercially available species with English common names including "blueberry" are from North America, particularly Atlantic Canada and the northeastern United States for wild (lowbush) blueberries, and several US states and British Columbia for cultivated (highbush) blueberries. North American native species of blueberries are grown commercially in the Southern Hemisphere in Australia, New Zealand and South American nations. Vaccinium meridionale (the Andean blueberry) is wild-harvested and commonly available locally. Several other wild shrubs of the genus Vaccinium also produce commonly eaten blue berries, such as the predominantly European V. myrtillus and other bilberries, which in many languages have a name that translates to "blueberry" in English.
Cultivation
Blueberries may be cultivated, or they may be picked from semiwild or wild bushes.
In North America, the most common cultivated species is V. corymbosum, the northern highbush blueberry. Hybrids of this with other Vaccinium species adapted to southern U.S. climates are known collectively as southern highbush blueberries. Highbush blueberries were first cultivated in New Jersey around the beginning of the 20th century. So-called "wild" (lowbush) blueberries, smaller than cultivated highbush ones, have intense color. V. angustifolium (lowbush blueberry) is found from the Atlantic provinces westward to Quebec and southward to Michigan and West Virginia. In some areas, it produces natural "blueberry barrens", where it is the dominant species covering large areas. Several First Nations communities in Ontario are involved in harvesting wild blueberries. "Wild" has been adopted as a marketing term for harvests of managed native stands of lowbush blueberries. The bushes are not planted or selectively bred, but they are pruned or burned over every two years, and pests are "managed". Numerous highbush cultivars of blueberries are available, with diversity among them, each having individual qualities. A blueberry breeding program has been established by the USDA-ARS breeding program at Beltsville, Maryland, and Chatsworth, New Jersey. This program began when Frederick Vernon Coville of the USDA-ARS collaborated with Elizabeth Coleman White of New Jersey. In the early part of the 20th century, White offered pineland residents cash for wild blueberry plants with unusually large fruit. After 1910 Coville began to work on blueberry, and was the first to discover the importance of soil acidity (blueberries need highly acidic soil), that blueberries do not self-pollinate, and the effects of cold on blueberries and other plants. In 1911, he began a program of research in conjunction with White, daughter of the owner of the extensive cranberry bogs at Whitesbog in the New Jersey Pine Barrens. His work doubled the size of some strains' fruit, and by 1916, he had succeeded in cultivating blueberries, making them a valuable crop in the Northeastern United States. For this work he received the George Roberts White Medal of Honor from the Massachusetts Horticultural Society. The rabbiteye blueberry (Vaccinium virgatum syn. V. ashei) is a southern type of blueberry produced from the Carolinas to the Gulf Coast states. Production of rabbiteye blueberries was a focus in Texas in the early 21st century. Other important species in North America include V. pallidum, the hillside or dryland blueberry. It is native to the eastern U.S., and common in the Appalachians and the Piedmont of the Southeast. Sparkleberry, V. arboreum, is a common wild species on sandy soils in the Southeast. Successful blueberry cultivation requires attention to soil pH (acidity) measurements in the acidic range. Blueberry bushes often require supplemental fertilization, but over-fertilization with nitrogen can damage plant health, as evidenced by nitrogen-burn visible on the leaves. Growing regions Significant production of highbush blueberries occurs in British Columbia, Maryland, Western Oregon, Michigan, New Jersey, North Carolina, and Washington. The production of southern highbush varieties occurs in California, as varieties originating from University of Florida, Connecticut, New Hampshire, North Carolina State University and Maine have been introduced. Peru, Spain, and Mexico also have significant production, as of 2018 (see Production). 
United States In 2018, Oregon produced the most cultivated blueberries, recording , an amount slightly exceeding the production by Washington. In descending order of production volume for 2017, other major producers were Georgia, Michigan, New Jersey, California, and North Carolina. Hammonton, New Jersey, claims to be the "Blueberry Capital of the World", with over 80% of New Jersey's cultivated blueberries coming from this town. Every year the town hosts a large festival, which draws thousands of people to celebrate the fruit. Maine is known for its wild blueberries, but the state's lowbush (wild) and highbush blueberries combined account for 10% of all blueberries grown in North America. Some are farmed, but only half of this acreage is harvested each year due to variations in pruning practices. The wild blueberry is the official fruit of Maine. Canada Canadian production of wild and cultivated blueberries in 2015 was 166,000 tonnes valued at $262 million, the largest fruit crop produced nationally accounting for 29% of all fruit value. British Columbia was the largest Canadian producer of cultivated blueberries, yielding 70,000 tonnes in 2015, the world's largest production of blueberries by region. Atlantic Canada contributes approximately half of the total North American wild/lowbush annual production with New Brunswick having the largest in 2015, an amount expanding in 2016. Nova Scotia, Prince Edward Island and Québec are also major producers. Nova Scotia recognizes the wild blueberry as its official provincial berry, with the town of Oxford, Nova Scotia known as the Wild Blueberry Capital of Canada. Québec is a major producer of wild blueberries, especially in the regions of Saguenay-Lac-Saint-Jean (where a popular name for inhabitants of the regions is bleuets, or "blueberries") and Côte-Nord, which together provide 40% of Québec's total provincial production. This wild blueberry commerce benefits from vertical integration of growing, processing, frozen storage, marketing and transportation within relatively small regions of the province. On average, 80% of Québec wild blueberries are harvested on farms (), the remaining 20% being harvested from public forests (). Some 95% of the wild blueberry crop in Québec is frozen for export out of the province. Europe Highbush blueberries were first introduced to Germany, Sweden and the Netherlands in the 1930s, and have since been spread to numerous other countries of Europe. V. corymbosum only began to be cultivated in Romania in a few years leading up to 2018 and rapidly increased in production and sales in that time (as with berries in general). it remains relatively unmolested by pests and diseases (see Diseases below). Southern Hemisphere In the Southern Hemisphere, Brazil, Chile, Argentina, Peru, Uruguay, New Zealand, Australia, South Africa, and Zimbabwe grow blueberries commercially. In Brazil, blueberries are produced in the states of Rio Grande do Sul, Santa Catarina, Paraná, São Paulo and Minas Gerais. Blueberries were first introduced to Australia in the 1950s, but the effort was unsuccessful. In the early 1970s, the Victorian Department of Agriculture imported seed from the U.S. and a selection trial was started. This work was continued into the mid-1970s when the Australian Blueberry Growers' Association was formed. In the 21st century, the industry grew in Argentina: "Argentine blueberry production has increased over the last three years with planted area up to 400 percent," according to a 2005 report by the U.S. 
Department of Agriculture. "Argentine blueberry production has thrived in four different regions: the province of Entre Rios in northeastern Argentina, the province of Tucuman, the province of Buenos Aires and the southern Patagonian valleys", according to the report. In the Bureau of International Labor Affairs report of 2014 on child labor and forced labor, blueberries were listed among the goods produced in such working conditions in Argentina. Pests and diseases Diseases V. corymbosum remains relatively unmolested by pests and diseases in Romania, with Phytophthora cinnamomi, Monilinia vaccinii-corymbosi, Botryosphaeria corticis, Godronia cassandrae, Phomopsis sp., Botrytis cinerea, Naohidemyces vaccinii, Microsphaera penicillata var. vaccinii, and various viruses being the most common. Pest management Pesticides DDT began to be used on blueberries soon after its discovery in 1939, and a few years later, in the mid-1940s, research began into its use in North America. Because "wild" is a marketing term generally used for all lowbush blueberries, it is not an indication that such blueberries are free from pesticides. Insecticide modes of action must be varied to avoid encouraging resistance in the invasive pest Drosophila suzukii. Some insecticides can be counterproductive, harming natural enemies of pests as well. For example, treatment for Illinoia pepperi can reduce populations of its predators. Kaolin clay for Rhagoletis mendax also reduced the effectiveness of Diachasma alloeum, its parasitoid. The pest predator Harpalus erraticus maintains greater abundance with selective insecticides than with broad-spectrum modes of action. Integrated pest management Blueberries are naturally relatively unmolested by arthropod pests. Nonetheless, there are 24 insect taxa known to be pests in North America, the worst in New Jersey, Michigan, Maine, and Eastern Canada being Rhagoletis mendax. Secondary but still important are Acrobasis vaccinii, Grapholita packardi, and Conotrachelus nenuphar. These four are the most common targets for development of IPM practices. IPM research has also taken an interest in Drosophila suzukii and arthropods such as aphids (which vector diseases such as scorch virus and shoestring virus) and cicadellids (which vector the phytoplasma that causes blueberry stunt). Managing pests down to the cosmetic level is necessary in this fruit because blueberries are a premium product. Changes in locale and environment – to new geographies, and into greenhouses – have required new pest management regimes, including innovative IPM. Conversely, importing potential natural enemies from abroad into North America may yield good results: Operophtera brumata is a pest of blueberries and birches which is successfully parasitized by Cyzenis albicans despite the lack of historical, natural contact between the two. The same results were obtained with Scirtothrips citri and Beauveria bassiana. Results are also available for Choristoneura rosaceana controlled by overwhelming numbers of Trichogramma minutum, and for Cyclocephala longula overwhelmed by Steinernema scarabaei. This has also been attempted with flower thrips and potential predators but with inconclusive results. International quarantine Rhagoletis mendax is a quarantine pest in phytosanitary regimes of some countries around the world. Resistant cultivars Insect resistance was not a priority in breeding programs until about the year 2000, and is still not a high priority. However, it may become more common as it becomes easier, especially using marker-assisted breeding. V.
ashei is naturally more resistant than V. corymbosum to Scaphytopius magdalensis. V. ashei is less resistant than V. darrowii to Prodiplosis vaccinia. There is variation between cultivars of V. ashei in resistance to Oberea myops. There is variation in resistance among cultivars of V. corymbosum to Acrobasis vaccinii and Popillia japonica. Wild V. spp. have greater resistance than highbush cultivars to I. pepperi. There is significant variation between highbush cultivars in abundance of various Tephritidae, thrips, and Homalodisca vitripennis. Production In 2021, world production of blueberries (lowbush and highbush combined) was 1.1 million tonnes, led by the United States with 32% of global production, Peru with 20%, and Canada with 13%. In 2019, Canada was the largest producer of wild blueberries, mainly in Quebec and the Atlantic provinces, but Canadian production of wild blueberries has decreased since 2017 as growers transition to the more profitable cultivated highbush blueberries. British Columbia produced 93% of the Canadian highbush blueberry crop in 2019. Regulations Canada No. 1 blueberries are all similar in size, shape, weight, and color—the total product can be no more than ten percent off-color and three percent otherwise defective. Uses First Nations peoples of Canada have consumed wild blueberries for millennia. Blueberries are sold fresh or are processed as individually quick frozen fruit, purée, juice, or dried or infused berries. These may then be used in a variety of consumer goods, such as jellies, jams, pies, muffins, snack foods, pancakes, or as an additive to breakfast cereals. Blueberry jam is made from blueberries, sugar, water, and fruit pectin. Blueberry sauce is a sweet sauce prepared using blueberries as a primary ingredient. Blueberry wine is made from the flesh and skin of the berries, which is fermented and then matured; usually the lowbush variety is used. Nutrients Blueberries consist of 14% carbohydrates, 0.7% protein, 0.3% fat and 84% water. They contain only negligible amounts of micronutrients, with moderate levels (relative to their respective Daily Values, DV) of the essential dietary mineral manganese, vitamin C, vitamin K and dietary fiber. Generally, nutrient contents of blueberries are a low percentage of the DV. A 100-gram serving provides a relatively low amount of food energy, with a glycemic load of 6. Phytochemicals and research Blueberries contain anthocyanins, other polyphenols and various phytochemicals under preliminary research for their potential biological effects. Most polyphenol studies have been conducted using highbush blueberry cultivars (V. corymbosum), while the content of polyphenols and anthocyanins in lowbush (wild) blueberries (V. angustifolium) exceeds the values found in highbush cultivars.
Biology and health sciences
Ericales
null
17599355
https://en.wikipedia.org/wiki/White
White
White is the lightest color and is achromatic (having no chroma). It is the color of objects such as snow, chalk, and milk, and is the opposite of black. White objects fully reflect and scatter all the visible wavelengths of light. White on television and computer screens is created by a mixture of red, blue, and green light. The color white can be produced with white pigments, especially titanium dioxide. In ancient Egypt and ancient Rome, priestesses wore white as a symbol of purity, and Romans wore white togas as symbols of citizenship. In the Middle Ages and Renaissance, a white unicorn symbolized chastity, and a white lamb sacrifice and purity. It was the royal color of the kings of France as well as the flag of monarchist France from 1815 to 1830, and of the monarchist movement that opposed the Bolsheviks during the Russian Civil War (1917–1922). Greek temples and Roman temples were faced with white marble, and beginning in the 18th century, with the advent of neoclassical architecture, white became the most common color of new churches, capitols, and other government buildings, especially in the United States. It was also widely used in 20th century modern architecture as a symbol of modernity and simplicity. According to surveys in Europe and the United States, white is the color most often associated with perfection, the good, honesty, cleanliness, the beginning, the new, neutrality, and exactitude. White is an important color for almost all world religions. The pope, the head of the Roman Catholic Church, has worn white since 1566, as a symbol of purity and sacrifice. In Islam, and in the Shinto religion of Japan, it is worn by pilgrims. In Western cultures and in Japan, white is the most common color for wedding dresses, symbolizing purity and virginity. In many Asian cultures, white is also the color of mourning. Etymology The word white continues Old English , ultimately from a Common Germanic also reflected in OHG , ON , Goth. . The root is ultimately from Proto-Indo-European language , surviving also in Sanskrit "to be white or bright" and Slavonic "light". The Icelandic word for white, , is directly derived from the Old Norse form of the word . Common Germanic also had the word *blankaz ("white, bright, blinding"), borrowed into Late Latin as *blancus, which provided the source for Romance words for "white" (Catalan, Occitan and French blanc, Spanish blanco, Italian bianco, Galician-Portuguese branco, etc.). The antonym of white is black. Some non-European languages have a wide variety of terms for white. The Inuit language has seven different words for seven different nuances of white. Sanskrit has specific words for bright white, the white of teeth, the white of sandalwood, the white of the autumn moon, the white of silver, the white of cow's milk, the white of pearls, the white of a ray of sunlight, and the white of stars. Japanese has six different words, depending upon brilliance or dullness, or if the color is inert or dynamic. History and art Prehistoric and ancient history White was one of the first colors used in art. The Lascaux Cave in France contains drawings of bulls and other animals drawn by paleolithic artists between 18,000 and 17,000 years ago. Paleolithic artists used calcite or chalk, sometimes as a background, sometimes as a highlight, along with charcoal and red and yellow ochre in their vivid cave paintings. In ancient Egypt, white was connected with the goddess Isis.
The priests and priestesses of Isis dressed only in white linen, and it was used to wrap mummies. In Greece and other ancient civilizations, white was often associated with mother's milk. In Greek mythology, the chief god Zeus was nourished at the breast of the nymph Amalthea. In the Talmud, milk was one of four sacred substances, along with wine, honey, and the rose. The ancient Greeks saw the world in terms of darkness and light, so white was considered a fundamental color. According to Pliny the Elder in his Natural History, Apelles (4th century BC) and the other famous painters of ancient Greece used only four colors in their paintings: white, red, yellow and black. For painting, the Greeks used the highly toxic pigment lead white, made by a long and laborious process. A plain white toga, known as a toga virilis, was worn for ceremonial occasions by all Roman citizens over the age of 14–18. Magistrates and certain priests wore a toga praetexta, with a broad purple stripe. In the time of the Emperor Augustus, no Roman man was allowed to appear in the Roman forum without a toga. The ancient Romans had two words for white: albus, a plain white (the source of the word albino); and candidus, a brighter white. A man who wanted public office in Rome wore a white toga brightened with chalk, called a toga candida, the origin of the word candidate. The Latin word candere meant to shine, to be bright. It was the origin of the words candle and candid. In ancient Rome, the priestesses of the goddess Vesta dressed in white linen robes, a white palla or shawl, and a white veil. They protected the sacred fire and the penates of Rome. White symbolized their purity, loyalty, and chastity. Postclassical history The early Christian church adopted the Roman symbolism of white as the color of purity, sacrifice and virtue. It became the color worn by priests during Mass, the color worn by monks of the Cistercian Order, and, under Pope Pius V, a former monk of the Dominican Order, it became the official color worn by the pope himself. Monks of the Order of Saint Benedict dressed in the white or gray of natural undyed wool, but later changed to black, the color of humility and penitence. In postclassical art, the white lamb became the symbol of the sacrifice of Christ on behalf of mankind. John the Baptist described Christ as the lamb of God, who took the sins of the world upon himself. The white lamb was the center of one of the most famous paintings of the Medieval period, the Ghent Altarpiece by Jan van Eyck. White was also the symbolic color of the transfiguration. The Gospel of Saint Mark describes Jesus' clothing in this event as "shining, exceeding white as snow." Artists such as Fra Angelico used their skill to capture the whiteness of his garments. In his painting of the transfiguration at the Convent of Saint Mark in Florence, Fra Angelico emphasized the white garment by using a light gold background, placed in an almond-shaped halo. The white unicorn was a common subject of postclassical manuscripts, paintings and tapestries. It was a symbol of purity, chastity and grace, which could only be captured by a virgin. It was often portrayed in the lap of the Virgin Mary. During the postclassical period, painters rarely mixed colors; but in the Renaissance, the influential humanist and scholar Leon Battista Alberti encouraged artists to add white to their colors to make them lighter, brighter, and to add hilaritas, or gaiety.
Many painters followed his advice, and the palette of the Renaissance was considerably brighter. Modern history Until the 16th century, white was commonly worn by widows as a color of mourning. The widows of the kings of France wore white until Anne of Brittany in the 16th century. A white tunic was also worn by many knights, along with a red cloak, which showed the knights were willing to give their blood for the king or Church. 18th and 19th centuries White was the dominant color of architectural interiors in the Baroque period and especially the Rococo style that followed it in the 18th century. Church interiors were designed to show the power, glory and wealth of the church. They seemed to be alive, filled with curves, asymmetry, mirrors, gilding, statuary and reliefs, unified by white. White was also a fashionable color for both men and women in the 18th century. Men in the aristocracy and upper classes wore powdered white wigs and white stockings, and women wore elaborate embroidered white and pastel gowns. After the French Revolution, a more austere white (blanc cassé) became the most fashionable color in women's costumes which were modeled after the outfits of Ancient Greece and Republican Rome. Because of the rather revealing design of these dresses, the women wearing them were called les merveilleuses (the marvellous) by French men of that era. The Empire style under Emperor Napoléon I was modeled after the more conservative outfits of Ancient Imperial Rome. The dresses were high in fashion but low in warmth considering the more severe weather conditions of northern France; in 1814 the former wife of Napoleon, Joséphine de Beauharnais, caught pneumonia and died after taking a walk in the cold night air with Tsar Alexander I of Russia. White was the universal color of both men and women's underwear and of sheets in the 18th and 19th centuries. It was unthinkable to have sheets or underwear of any other color. The reason was simple; the manner of washing linen in boiling water caused colors to fade. When linen was worn out, it was collected and turned into high-quality paper. The 19th-century American painter James McNeill Whistler (1834–1903), working at the same time as the French impressionists, created a series of paintings with musical titles where he used color to create moods, the way composers used music. His painting Symphony in White No. 1 – The White Girl, which used his mistress Joanna Hiffernan as a model, used delicate colors to portray innocence and fragility, and a moment of uncertainty. 20th and 21st centuries The White movement was the opposition that formed against the Bolsheviks during the Russian Civil War, which followed the Russian Revolution in 1917. It was finally defeated by the Bolsheviks in 1921–22, and many of its members emigrated to Europe. At the end of the 19th century, lead white was still the most popular pigment; but between 1916 and 1918, chemical companies in Norway and the United States began to produce titanium white, made from titanium oxide. It had first been identified in the 18th century by the German chemist Martin Klaproth, who also discovered uranium. It had twice the covering power of lead white, and was the brightest white pigment known. By 1945, 80 percent of the white pigments sold were titanium white. The absoluteness of white appealed to modernist painters. It was used in its simplest form by the Russian suprematist painter Kazimir Malevich in his 1917 painting 'the white square,' the companion to his earlier 'black square.' 
It was also used by the Dutch modernist painter Piet Mondrian. His most famous paintings consisted of a pure white canvas with a grid of vertical and horizontal black lines and rectangles of primary colors. Black and white also appealed to modernist architects, such as Le Corbusier (1887–1965). He said a house was "a machine for living in" and called for a "calm and powerful architecture" built of reinforced concrete and steel, without any ornament or frills. Almost all the buildings of contemporary architect Richard Meier, such as his museum in Rome to house the ancient Roman Ara Pacis, or Altar of Peace, are stark white, in the tradition of Le Corbusier. Scientific understanding (color science) Light is perceived by the human visual system as white when the light entering the eye stimulates all three types of color-sensitive cone cells in roughly equal amounts. Materials that do not emit light themselves appear white if their surfaces reflect back most of the light that strikes them in a diffuse way. White light In 1666, Isaac Newton demonstrated that white light was composed of multiple colors by passing it through a prism to break it up into its components and then using a second prism to reassemble them. Before Newton, most scientists believed that white was the fundamental color of light. White light can be generated by the sun, by stars, or by earthbound sources such as fluorescent lamps, white LEDs and incandescent bulbs. On the screen of a color television or computer, white is produced by mixing the primary colors of light: red, green and blue (RGB) at full intensity, a process called additive mixing. White light can be fabricated using light with only two wavelengths, for instance by mixing light from a red and cyan laser or yellow and blue lasers. This light will, however, have very few practical applications, since the color rendering of objects will be greatly distorted. The fact that light sources with vastly different spectral power distributions can result in a similar sensory experience is due to the way the light is processed by the visual system. A single perceived color that arises from two different spectral power distributions is called a metameric match, and the phenomenon is known as metamerism. Many of the light sources that emit white light emit light at almost all visible wavelengths (sunlight, incandescent lamps of various color temperatures). This has led to the notion that white light can be defined as a mixture of "all colors" or "all visible wavelengths". A range of spectral distributions of light sources can be perceived as white—there is no single, unique specification of "white light". For example, when buying a "white" light bulb, one might buy one labeled 2700K, 6000K, etc., which produce light having very different spectral distributions, and yet this will not prevent the user from identifying the color of objects that those light bulbs illuminate. White objects Color vision allows us to distinguish different objects by their color. To do so reliably, color constancy keeps the perceived color of an object relatively unchanged when the illumination changes among various broad (whitish) spectral distributions of light. The same principle is used in photography and cinematography, where the choice of white point determines a transformation of all other color stimuli. Changes in or manipulation of the white point can be used to explain some optical illusions such as The dress.
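As an informal illustration of how choosing a white point transforms every other color in an image, the Python sketch below applies a simple von Kries-style per-channel scaling: the pixel chosen as "white" is mapped to equal channel values, and all other pixels are rescaled by the same factors. The numeric values and the function name are made-up examples for this sketch, not data or methods taken from this article.

```python
# Minimal white-balance sketch (illustrative assumptions, not a color-science library).

def white_balance(pixel, white_point):
    """Scale an (r, g, b) pixel so that `white_point` becomes neutral white.

    Both arguments are tuples of floats in the range 0.0-1.0.
    """
    balanced = []
    for channel, white in zip(pixel, white_point):
        # Guard against a degenerate (zero) channel in the chosen white point.
        scale = 1.0 / white if white > 0 else 0.0
        balanced.append(min(channel * scale, 1.0))  # clip to the displayable range
    return tuple(balanced)

# A gray card photographed under warm light might read as a slightly orange color...
measured_white = (1.00, 0.83, 0.62)

# ...after adaptation it maps back to neutral white, and every other color in the
# scene is transformed by the same per-channel factors.
print(white_balance(measured_white, measured_white))          # -> (1.0, 1.0, 1.0)
print(white_balance((0.50, 0.40, 0.30), measured_white))      # a rescaled scene color
```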
While there is no single, unique specification of "white light", there is indeed a unique specification of "white object", or, more specifically, "white surface". A perfectly white surface diffusely reflects (scatters) all visible light that strikes it, without absorbing any, irrespective of the light's wavelength or spectral distribution. Since it does not absorb any of the incident light, white is the lightest possible color. If the reflection is not diffuse but rather specular, this describes a mirror rather than a white surface. Reflection of 100% of incident light at all wavelengths is a form of uniform reflectance, so white is an achromatic color, meaning a color without hue. The color stimulus produced by the perfect diffuser is usually considered to be an achromatic stimulus for all illuminants, except for those whose light sources appear to be highly chromatic. Color constancy is achieved by chromatic adaptation. The International Commission on Illumination defines white (adapted) as "a color stimulus that an observer who is [chromatically] adapted to the viewing environment would judge to be perfectly achromatic and to have a luminance factor of unity". The color stimulus that is considered to be the adapted white may be different at different locations within a scene. White features in nature Beaches with sand containing high amounts of quartz or eroded limestone also appear white, since quartz and limestone reflect or scatter sunlight, rather than absorbing it. Tropical white sand beaches may also have a high quantity of white calcium carbonate from tiny bits of seashells ground to fine sand by the action of the waves. The White Cliffs of Dover take their white color from the large amount of chalk they contain, a form of limestone that reflects the sunlight. Snow is a mixture of air and tiny ice crystals. When white sunlight enters snow, very little of the spectrum is absorbed; almost all of the light is reflected or scattered by the air and water molecules, so the snow appears to be the color of sunlight, white. Sometimes the light bounces around inside the ice crystals before being scattered, making the snow seem to sparkle. In the case of glaciers, the ice is more tightly pressed together and contains little air. As sunlight enters the ice, more light of the red spectrum is absorbed, so the light scattered will be bluish. Clouds are white for the same reason as ice. They are composed of water droplets or ice crystals mixed with air; very little light that strikes them is absorbed, and most of the light is scattered, appearing to the eye as white. Shadows of other clouds above can make clouds look gray, and some clouds have their own shadow on the bottom of the cloud. Many mountains with winter or year-round snow cover are named accordingly: Mauna Kea means white mountain in Hawaiian, and Mont Blanc means white mountain in French. The Changbai Mountains, whose name literally means perpetually white mountains, mark the border between China and Korea. White materials Chalk is a type of limestone, made of the mineral calcite, or calcium carbonate. It was originally deposited under the sea as the scales or plates of tiny micro-organisms called coccolithophores. It was the first white pigment used by prehistoric artists in cave paintings. The chalk used on blackboards today is usually made of gypsum (calcium sulphate), a powder pressed into sticks. Bianco di San Giovanni is a pigment used in the Renaissance, which was described by the painter Cennino Cennini in the 15th century.
It is similar to chalk, made of calcium carbonate with calcium hydroxide. It was made of dried lime, which was made into a powder, then soaked in water for eight days, with the water changed each day. It was then made into cakes and dried in the sun. Lead white was being produced during the 4th century BC; the process is described by Pliny the Elder, Vitruvius and the ancient Greek author Theophrastus. Pieces of lead were put into clay pots which had a separate compartment filled with vinegar. The pots in turn were piled on shelves close to cow dung. The combined fumes of the vinegar and the cow dung caused the lead to corrode into lead carbonate. It was a slow process which could take a month or more. It made an excellent white and was used by artists for centuries, but it was also toxic. It was replaced in the 19th and 20th centuries by zinc white and titanium white. Titanium white is the most popular white for artists today; it is the brightest available white pigment, and has twice the coverage of lead white. It first became commercially available in 1921. It is made out of titanium dioxide, from the minerals brookite, anatase, rutile, or ilmenite, currently the major source. Because of its brilliant whiteness, it is used as a colorant for most toothpaste and sunscreen. Zinc white is made from zinc oxide. It is similar to but not as opaque as titanium white. It is added to some foods to enrich them with zinc, an important nutrient. Chinese white is a variety of zinc white made for artists. Some materials can be made to look "whiter than white"; this is achieved using optical brightening agents (OBAs). These are chemical compounds that absorb light in the ultraviolet and violet region (usually 340–370 nm) of the electromagnetic spectrum, and re-emit light in the blue region (typically 420–470 nm). OBAs are often used in paper and clothing to create an impression of very bright white. This is due to the fact that the materials actually send out more visible light than they receive. Bleach and bleaching Bleaching is a process for whitening fabrics which has been practiced for thousands of years. Sometimes it was simply a matter of leaving the fabric in the sun, to be faded by the bright light. In the 18th century several scientists developed varieties of chlorine bleach, including sodium hypochlorite and calcium hypochlorite (bleaching powder). Bleaching agents that do not contain chlorine most often are based on peroxides, such as hydrogen peroxide, sodium percarbonate and sodium perborate. While most bleaches are oxidizing agents, a smaller number are reducing agents, such as sodium dithionite. Bleaches attack the chromophores, the part of a molecule which absorbs light and causes fabrics to have different colors. An oxidizing bleach works by breaking the chemical bonds that make up the chromophore. This changes the molecule into a different substance that either does not contain a chromophore, or contains a chromophore that does not absorb visible light. A reducing bleach works by converting double bonds in the chromophore into single bonds. This eliminates the ability of the chromophore to absorb visible light. Sunlight acts as a bleach through a similar process. High-energy photons of light, often in the violet or ultraviolet range, can disrupt the bonds in the chromophore, rendering the resulting substance colorless. Some detergents go one step further; they contain fluorescent chemicals which glow, making the fabric look literally whiter than white.
In the natural world Astronomy A white dwarf is a stellar remnant composed mostly of electron-degenerate matter. White dwarfs are very dense; a white dwarf's mass is comparable to that of the Sun and its volume is comparable to that of the Earth. Its faint luminosity comes from the emission of stored thermal energy. A white dwarf is very hot when it is formed, but since it has no source of energy, it will gradually radiate away its energy and cool down. This means that its radiation, which initially has a high color temperature, will lessen and redden with time. Over a very long time, a white dwarf will cool to temperatures at which it will no longer emit significant heat or light, and it will become a cold black dwarf. However, since no white dwarf can be older than the age of the universe (approximately 13.8 billion years), even the oldest white dwarfs still radiate at temperatures of a few thousand kelvins, and no black dwarfs are thought to exist yet. An A-type main-sequence star (A V) or A dwarf star is a main-sequence (hydrogen-burning) star of spectral type A and luminosity class V. These stars have spectra which are defined by strong hydrogen Balmer absorption lines. They have masses from 1.4 to 2.1 times the mass of the Sun and surface temperatures between 7,600 and 11,500 K. Biology White animals use their color as a form of camouflage in winter. Animals such as penguins are countershaded with white bellies, again as camouflage. Religion and culture White is an important symbolic color in most religions and cultures, usually because of its association with purity. In the Roman Catholic Church, white is associated with Jesus Christ, innocence and sacrifice. Since the Middle Ages, priests have worn a white cassock in many of the most important ceremonies and religious services connected with events in the life of Christ. White is worn by priests at Christmas, during Easter, and during celebrations connected with the other events of the life of Christ, such as Corpus Christi Sunday and Trinity Sunday. It is also worn at the services dedicated to the Virgin Mary, and to those Saints who were not martyred, as well as other special occasions, such as the ordination of priests and the installation of new bishops. Within the hierarchy of the church, lighter colors indicated higher rank; ordinary priests wore black, bishops wore violet, cardinals wore red, and, outside a church, only the pope would wear white. (Popes occasionally wore white in the Middle Ages, but usually wore red. Popes have worn white regularly since 1566, when Pope Pius V, a member of the Dominican Order, began the practice.) White is the color of the Dominican Order. In the Church of Jesus Christ of Latter-day Saints, the color white is used as a symbol of purity, innocence, and cleanliness, particularly in religious ceremonies such as baptism and temple ceremonies. In temple ceremonies, white clothing is also worn by all participants, both men and women, to symbolize unity and equality before God. In Islam, white clothing is worn during the required pilgrimage to Mecca (Hajj). Called Ihram clothing, men's garments often consist of two white un-hemmed sheets (usually towelling material). The top (the riḍā) is draped over the torso and the bottom (the izār) is secured by a belt, plus a pair of sandals. Women's clothing varies considerably and reflects regional as well as religious influences. Ihram is typically worn during Dhu al-Hijjah, the last month in the Islamic calendar.
White also has a long history of use as a religious and political symbol in Islam, beginning with the white banner that tradition ascribes to the Quraysh, the tribe to which Muhammad belonged. The Umayyad dynasty also used white as its dynastic color, following the personal banner of its founder, Mu'awiya I, while the Shi'ite Fatimids also chose white to highlight their opposition to the Sunni Abbasid Caliphate, whose color was black. In Judaism, during the rituals of Yom Kippur, the ceremony of atonement, the rabbi dresses in white, as do the members of the congregation, to restore the bonds between God and his followers. In the traditional Japanese religion of Shinto, an area of white gravel or stones marks a sacred place, called a niwa. These places were dedicated to the kami, spirits which had descended from the heavens or had come across the sea. Later, temples of Zen Buddhism in Japan often featured a Zen garden, where white sand or gravel was carefully raked to resemble rivers or streams, designed as objects of meditation. Many religions symbolize heaven by using a sky with white clouds. This phenomenon is not limited to western culture; in Yoruba religion, the orisha Obatala in the Ifá tradition is represented by white. Obatala is associated with calmness, morality, old age, and purity. In Theosophy and similar religions, the deities called the Great White Brotherhood are said to have white auras. In some Asian and Slavic cultures, white is considered to be a color that represents death. White also represented death in ancient Egypt, representing the lifeless desert that covered much of the country; black was held to be the color of life, representing the mud-covered fertile lands created by the flooding of the Nile and giving the country its name (Kemet, or "black land"). In China, Korea, and some other Asian countries, white, or more precisely, the whitish color of undyed linen, is the color of mourning and funerals. In traditional China, undyed linen clothing is worn at funerals. As time passes, the bereaved can gradually wear clothing dyed with colors, then with darker colors. Small sacks of quicklime, one for each year of the life of the deceased are placed around the body to protect it against impurity in the next world, and white paper flowers are placed around the body. In China and other Asian countries, white is the color of reincarnation, showing that death is not a permanent separation from the world. In China, white is associated with the masculine (the yang of the yin and yang); with the unicorn and tiger; with the fur of an animal; with the direction of west; with the element metal; and with the autumn season. In Japan, undyed linen white robes are worn by pilgrims for rituals of purification, and bathing in sacred rivers. In the mountains, pilgrims wear costumes of undyed jute to symbolize purity. A white kimono is often placed in the casket with the deceased for the journey to the other world, as white represents death sometimes. Condolence gifts, or kooden, are tied with black and white ribbons and wrapped in white paper, protecting the contents from the impurities of the other world. In India, it is the color of purity, divinity, detachment and serenity. In Hindi, the name Sweta means white. In Tibetan Buddhism, white robes were reserved for the lama of a monastery. In the Bedouin and some other pastoral cultures, there is a strong connection between milk and white, which is considered the color of gratitude, esteem, joy, good fortune and fertility. 
In Paganism, it is used for peace, innocence, illumination, and purity. It can also be used to stand for any color. White is also associated with cleansing, a Pagan practice that cleans something using the elements. In Wicca, a white-handled knife called the boline is used in rituals. Political movements White is often associated with monarchism. The association originally came from the white flag of the Bourbon dynasty of France. White became the banner of the royalist rebellions against the French Revolution (see Revolt in the Vendée). During the Civil War which followed the Russian Revolution of 1917, the White Army, a coalition of monarchists, nationalists and liberals, fought unsuccessfully against the Red Army of the Bolsheviks. A similar battle between reds and whites took place during the Civil War in Finland in the same period. The Ku Klux Klan is a racist and anti-immigrant organization which flourished in the Southern United States after the American Civil War. They wore white robes and hoods, burned crosses and violently attacked and murdered black Americans. In Iran, the White Revolution was a series of social and political reforms launched in 1963 by the last Shah of Iran before his downfall. White is also associated with peace and passive resistance. The white ribbon is worn by movements denouncing violence against women and the White Rose was a non-violent resistance group in Nazi Germany. Selected national flags featuring white White is a common color in national flags, though its symbolism varies widely. The white in the flag of the United States and flag of the United Kingdom comes from traditional red St George's Cross on a white background of the historic flag of England. The white in the flag of France represents either the monarchy or "white, the ancient French color" according to the Marquis de Lafayette. Many flags in the Arab world use the colors of the flag of the Arab Revolt of 1916; red, white, green and black. These include the flags of Egypt, Palestine, Jordan, Syria, Kuwait and Iraq. The Philippines also use white as their symbol for unity in their flag. Idioms and expressions To whitewash something is to conceal an unpleasant reality. A white lie is an innocent lie told out of politeness. White noise is the noise of all the frequencies of sound combined. It is used to cover up unwanted noise. A white knight in finance is a friendly investor who steps in to rescue a company from a hostile takeover. White-collar workers are those who work in offices, as opposed to blue-collar workers, who work with their hands in factories or workshops. A white paper is an authoritative report on a major issue by a team of experts; a government report outlining policy; or a short treatise whose purpose is to educate industry customers. Associating a paper with white may signify clean facts and unbiased information. The white feather is a symbol of cowardice, particularly in Britain. It supposedly comes from cockfighting and the belief that a cockerel sporting a white feather in its tail is likely to be a poor fighter. At the beginning of the First World War, women in England were encouraged to give white feathers to men who had not enlisted in the British Armed Forces. In the US, a white shoe firm is an older, conservative firm, usually in a field such as banking or law. The phrase derives from the "white bucks", laced suede or buckskin shoes with red soles, long popular in the Ivy League colleges. 
In Russia, the nobility are sometimes described as white bone (белая кость, bélaya kost'), commoners as black bone. Associations and symbolism Innocence and sacrifice In Western culture, white is the color most often associated with innocence, or purity. In the Bible and in Temple Judaism, white animals such as lambs were sacrificed to expiate sins. The white lily is considered the flower of purity and innocence, and is often associated with the Virgin Mary. Beginnings White is the color in Western culture most often associated with beginnings. In Christianity, children are baptized and first take communion wearing white. Christ after the Resurrection is traditionally portrayed dressed in white. Queen Elizabeth II wore white when she opened each session of British Parliament. In high society, debutantes traditionally wear white for their first ball. Weddings White has long been the traditional color worn by brides at royal weddings, but the white wedding gown for ordinary people appeared in the 19th century. Before that time, most brides wore their best Sunday clothing, of whatever color. The white lace wedding gown of Queen Victoria in 1840 had a large impact on the color and fashion of wedding dresses in both Europe and America down to the present day. Cleanliness White is the color most associated with cleanliness. Objects which are expected to be clean, such as refrigerators and dishes, toilets and sinks, bed linen and towels, are traditionally white. White was the traditional color of the coats of doctors, nurses, scientists and laboratory technicians, though nowadays a pale blue or green is often used. White is also the color most often worn by chefs, bakers, and butchers, and the color of the aprons of waiters in French restaurants. Ghosts, phantoms and two of the Four Horsemen of the Apocalypse White is the color associated with ghosts and phantoms. In the past, the dead were traditionally buried in a white shroud. Ghosts are said to be the spirits of the dead who, for various reasons, are unable to rest or enter heaven, and so walk the earth in their white shrouds. White is also connected with the paleness of death. A common expression in English is "pale as a ghost." The White Lady, Weiße Frau, or dame blanche is a familiar figure in English, German and French ghost stories. She is a spectral apparition of a female clad in white, in most cases the ghost of an ancestor, sometimes giving warning about death and disaster. The most notable Weiße Frau is the legendary ghost of the German Hohenzollern dynasty. Seeing a white horse in a dream is said to be a presentiment of death. In the Book of Revelation, the last book in the New Testament of the Bible, the Four Horsemen of the Apocalypse are supposed to announce the Apocalypse before the Last Judgement. The man on a white horse with a bow and arrow, according to different interpretations, represents either War and Conquest, the Antichrist, or Christ himself, cleansing the world of sin. Death rides a horse whose color is described as khlōros (χλωρός) in the original Koine Greek, which can mean either green/greenish-yellow or pale/pallid. Opposite of black Black and white often represent the contrast between light and darkness, day and night, male and female, good and evil. In Taoism, the two complementary natures of the universe, yin and yang, are often symbolized in black and white. Ancient games of strategy, such as go and chess, use black and white to represent the two sides.
In the French monarchy, white symbolized the King and his power par la grâce de Dieu ("by the grace of God") and in contrast black was the color of the queen who according to the Salic Law which excluded women from the throne (and thus from power) could never become the ruling monarch. Black and white also often represent formality and seriousness, as in the costumes of judges and priests, business suits, of formal evening dress. Monks of the Dominican Order wear a black cloak over a white habit. Until 1972, agents of the Federal Bureau of Investigation were informally required by FBI Director J. Edgar Hoover to wear white shirts with their suits, to project the correct image of the FBI. Names taken from white White is the source of more names for women in western countries than any other color. Names taken from white include Alba, Albine (Latin). Blandine, Blanche and Blanchette (French); Bianca (Italian); Jennifer (Celt); Genevieve, Candice (from Latin Candida); Fenela, Fiona and Finola (Irish); Gwendoline, Gwenael, Nol(g)wen (white woman) (Celt), Nives (Spanish) and Zuria (Basque). In addition many names come from white flowers: Camille, Daisy, Lily, Lili, Magnolie, Jasmine, Yasemine, Leila, Marguerite, Rosalba, and others. Other names come from the white pearl; Pearl, Margarita (Latin), Margaret, Margarethe, Marga, Grete, Rita, Gitta, Marjorie, Margot. Temples, churches and government buildings Since ancient times, temples, churches, and many government buildings in many countries have traditionally been white, the color associated with religious and civic virtue. The Parthenon and other ancient temples of Greece, and the buildings of the Roman Forum were mostly made of or clad in white marble, though it is now known that some of these ancient buildings were actually brightly painted. The Roman tradition of using white stone for government buildings and churches was revived in the Renaissance and especially in the neoclassic style of the 18th and 19th centuries. White stone became the material of choice for government buildings in Washington, D.C., and other American cities. European cathedrals were also usually built of white or light-colored stone, though many darkened over the centuries from smoke and soot. The Renaissance architect and scholar Leon Battista Alberti wrote in 1452 that churches should be plastered white on the inside, since white was the only appropriate color for reflection and meditation. Traditional Cistercian architecture also places a high emphasis on white for similar reasons. After the Reformation, Calvinist churches in the Netherlands were whitewashed and sober inside, a tradition that was also followed in the Protestant churches of New England, such as Old North Church in Boston. Ethnography People of the Caucasian race are often referred to simply as white. The United States Census Bureau defines white people as those "having origins in any of the original peoples of Europe, the Middle East, or North Africa. It includes people who reported "white" or wrote in entries such as Irish, German, Italian, Lebanese, Near Easterner, Arab, or Polish." White people constitute the majority of the U.S. population, with a total of 204,277,273 or 61.6% of the population in the 2020 United States Census. White flag A white flag has long been used to represent either surrender or a request for a truce. 
It is believed to have originated in the 15th century, during the Hundred Years' War between France and England, when multicolored flags, as well as firearms, came into common use by European armies. The white flag was officially recognized as a request to cease hostilities by the Geneva Convention of 1949. Vexillology and heraldry In English heraldry, white or silver signified brightness, purity, virtue, and innocence.
Physical sciences
Color terms
null
1052910
https://en.wikipedia.org/wiki/Miniaturization
Miniaturization
Miniaturization (Br.Eng.: miniaturisation) is the trend to manufacture ever-smaller mechanical, optical, and electronic products and devices. Examples include the miniaturization of mobile phones and computers, and vehicle engine downsizing. In electronics, the exponential scaling and miniaturization of silicon MOSFETs (MOS transistors) leads to the number of transistors on an integrated circuit chip doubling every two years, an observation known as Moore's law. This leads to MOS integrated circuits such as microprocessors and memory chips being built with increasing transistor density, faster performance, and lower power consumption, enabling the miniaturization of electronic devices. Electronic circuits The history of miniaturization is associated with the history of information technology based on the succession of switching devices, each smaller, faster, and cheaper than its predecessor. During the period referred to as the Second Industrial Revolution, miniaturization was confined to two-dimensional electronic circuits used for the manipulation of information. This orientation is demonstrated in the use of vacuum tubes in the first general-purpose computers. The technology gave way to the development of transistors in the 1950s and then to the integrated circuit (IC) approach which followed. The MOSFET was invented at Bell Labs between 1955 and 1960. It was the first truly compact transistor that could be miniaturized and mass-produced for a wide range of uses, due to its high scalability and low power consumption, leading to increasing transistor density. This made it possible to build high-density IC chips, with reduced cost-per-transistor as transistor density increased. In the early 1960s, Gordon Moore, who later co-founded Intel, recognized that the ideal electrical and scaling characteristics of MOSFET devices would lead to rapidly increasing integration levels and unparalleled growth in electronic applications. Moore's law, which he described in 1965, and which was later named after him, predicted that the number of transistors on an IC for minimum component cost would double every 18 months. In 1974, Robert H. Dennard at IBM recognized the rapid MOSFET scaling technology and formulated the related Dennard scaling rule. Moore described the development of miniaturization during the 1975 International Electron Devices Meeting, confirming his earlier predictions. By 2004, electronics companies were producing silicon IC chips with switching MOSFETs that had feature sizes as small as 130 nanometers (nm), and development was also underway for chips a few nanometers in size through the nanotechnology initiative. The focus has been to make components smaller to increase the number that can be integrated onto a single wafer, and this has required critical innovations, which include increasing wafer size, the development of sophisticated metal connections between the chip's circuits, and improvements in the polymers used for masks (photoresists) in the photolithography processes. These last two are the areas where miniaturization has moved into the nanometer range. Other fields Miniaturization has been a trend over the last fifty years and has come to cover not just electronic but also mechanical devices. The process for miniaturizing mechanical devices is more complex due to the way the structural properties of mechanical parts change as they are reduced in scale. It has been said that the so-called Third Industrial Revolution (1969 – c.
2015) is based on economically viable technologies that can shrink three-dimensional objects. In medical technology, engineers and designers have been exploring miniaturization to shrink components to the micrometer and nanometer range. Smaller devices can have lower cost, be made more portable (e.g., for use in ambulances), and allow simpler and less invasive medical procedures.
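To make the doubling behavior described above concrete, the short Python sketch below projects transistor counts under an assumed fixed doubling period. The 1971 starting figure (roughly the scale of the earliest microprocessors) and the two-year period are illustrative assumptions for the example, not figures taken from this article.

```python
# Rough illustration of exponential growth under a Moore's-law-style doubling rule.

def projected_transistors(start_count, start_year, year, doubling_period_years=2.0):
    """Project a transistor count assuming a fixed doubling period."""
    elapsed = year - start_year
    return start_count * 2 ** (elapsed / doubling_period_years)

# Starting from a hypothetical 2,300-transistor chip in 1971 and doubling every
# two years yields on the order of a billion transistors by the 2010s.
for y in (1971, 1981, 1991, 2001, 2011):
    print(y, f"{projected_transistors(2300, 1971, y):,.0f}")
```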
Technology
General
null
1053424
https://en.wikipedia.org/wiki/African%20palm%20civet
African palm civet
The African palm civet (Nandinia binotata), also known as the two-spotted palm civet, is a small feliform mammal widely distributed in sub-Saharan Africa. It is listed as least concern on the IUCN Red List. Characteristics The African palm civet is grey to dark brown with dark spots on the back. It has short legs, small ears, a lean body, and a long, ringed tail. It has two sets of scent glands on the lower abdomen and between the third and fourth toes on each foot, which secrete a strong-smelling substance used to mark territory and in mating. Adult females reach a body length of with a long tail and weigh . Adult males reach in body length with a long tail and weigh . The African palm civet's ear canal is not divided and cartilaginous at the end. Distribution and habitat The African palm civet ranges throughout much of sub-Saharan Africa from Guinea to South Sudan, south to Angola, and into eastern Zimbabwe. It has been recorded in deciduous forests, lowland rainforests, gallery and riverine forests, savanna woodlands, and logged forests up to an elevation of . In the 1950s, one individual was wild-caught on Bioko Island. However, it was not recorded on the island during subsequent surveys between 1986 and 2015. In Guinea's National Park of Upper Niger, it was recorded during surveys conducted in 1996 to 1997. In Senegal, it was observed in 2000 in Niokolo-Koba National Park, which encompasses mainly open habitat dominated by grasses. In Gabon's Moukalaba-Doudou National Park, it was recorded in forested areas during a camera-trapping survey in 2012. In Batéké Plateau National Park, it was recorded only west of the Mpassa River during surveys carried out between June 2014 and May 2015. In Liberian Upper Guinean forests, it was sighted in Gbarpolu County and Bong County during surveys in 2013. In Zanzibar, it was recorded in groundwater forest on Unguja Island in 2003. Behaviour and ecology The African palm civet is a nocturnal, largely arboreal mammal that spends most of the time on large branches, among lianas in the canopy of trees. It eats fruits such as those of the African corkwood tree (Musanga cecropioides), Uapaca, persimmon (Diospyros hoyleana), fig trees (Ficus), papayas (Carica papaya), and bananas (Musa). Males have home ranges of and females of . The home range of a dominant male includes home ranges of several females. Reproduction In Gabon, females were recorded to give birth in the long wet season and at the onset of the dry season between September and January. The female usually gives birth after a gestation period of 2–3 months. A litter consists of up to four young that are suckled for around three months. While she has suckling young, the female's mammary glands produce an orange-yellow liquid, which discolours her abdomen and the young civets' fur. This probably discourages males from mating with nursing females. Its generation length is 7.8 years. Taxonomy and evolution In 1830, John Edward Gray first described an African palm civet using the name Viverra binotata based on a zoological specimen obtained from a museum in Leiden. In 1843, Gray proposed the genus Nandinia and subordinated Viverra binotata to this genus. In 1929, Reginald Innes Pocock proposed the family Nandiniidae, with the genus Nandinia as sole member. He argued that it differs from the Aeluroidea by the structure and shape of its ear canal and mastoid part of the temporal bone. 
Results of morphological and molecular genetic analyses indicate that it differs from viverrids and diverged from the Feliformia about . It is the most genetically isolated carnivoran, being the only species within its superfamily. Phylogenetic tree The phylogenetic relationships of the African palm civet are shown in the following cladogram: Threats The African palm civet is threatened by habitat loss and hunting for bushmeat. In 2006, it was estimated that more than 4,300 African palm civets were hunted yearly in the Nigerian part and around 3,300 in the Cameroonian part of the Cross–Sanaga–Bioko coastal forests. In Guinea, dead African palm civets were recorded in spring 1997 on bushmeat markets in villages located in the vicinity of the National Park of Upper Niger. Dried heads of African palm civets were found in 2007 at the Bohicon and Dantokpa Markets in southern Benin, suggesting that they are used as fetishes in animal rituals. The attitude of rural people in Ghana towards African palm civets is hostile; they consider them a menace to their food resources and to the safety of children. In Gabon, it is among the most frequently found small carnivores for sale in bushmeat markets. Upper Guinean forests in Liberia are considered a biodiversity hotspot. They have already been fragmented into two blocks. Large tracts are threatened by commercial logging and mining activities, and are converted for agricultural use including large-scale oil palm plantations in concessions obtained by a foreign company.
Biology and health sciences
Other carnivora
Animals
1054508
https://en.wikipedia.org/wiki/M134%20Minigun
M134 Minigun
The M134 Minigun is an American 7.62×51mm NATO six-barrel rotary machine gun with a high rate of fire (2,000 to 6,000 rounds per minute). It features a Gatling-style rotating barrel assembly with an external power source, normally an electric motor. The "Mini" in the name is in comparison to larger-caliber designs that use a rotary barrel design, such as General Electric's earlier 20 mm M61 Vulcan, and "gun" for the use of rifle ammunition as opposed to autocannon shells. "Minigun" refers to a specific model of weapon that General Electric originally produced, but the term "minigun" has popularly come to refer to any externally powered rotary gun of rifle caliber. The term is sometimes used loosely to refer to guns of similar rates of fire and configuration, regardless of power source and caliber. The Minigun is used by several branches of the U.S. military. Versions are designated M134 and XM196 by the United States Army, and GAU-2/A and GAU-17/A by the U.S. Air Force and U.S. Navy. History Background: electrically driven Gatling gun The ancestor to the modern minigun was a hand cranked mechanical device invented in the 1860s by Richard Jordan Gatling. He later replaced the hand-cranked mechanism of a rifle-caliber Gatling gun with an electric motor, a relatively new invention at the time. Even after Gatling slowed the mechanism, the new electrically powered Gatling gun had a theoretical rate of fire of 3,000 rounds per minute, roughly three times the rate of a typical modern, single-barreled machine gun. Gatling's design received U.S. Patent #502,185 on July 25, 1893. Despite his improvements, the Gatling gun fell into disuse after cheaper, lighter-weight, recoil and gas operated machine guns were invented; Gatling himself went bankrupt for a period. During World War I, several German companies were working on externally powered guns for use in aircraft. One of these designs was the Fokker-Leimberger, an externally powered 12-barrel rotary gun using the 7.92×57mm Mauser round; it was claimed to be capable of firing over 7,000 rpm, but suffered from frequent cartridge-case ruptures due to its "nutcracker" rotary split-breech design, which is different to that of conventional rotary gun designs. None of these German guns went into production during the war, although a competing Siemens prototype (possibly using a different action), which was tried on the Western Front, scored a victory in aerial combat. The British also experimented with this type of split-breech during the 1950s, but they were also unsuccessful. Minigun: 1960s–Vietnam In the 1960s, the United States Armed Forces began exploring modern variants of the electrically powered, rotating barrel Gatling-style weapons for use in the Vietnam War. American forces in the Vietnam War, which used helicopters as one of the primary means of transporting soldiers and equipment through the dense jungle, found that their helicopters were vulnerable to small arms fire and rocket-propelled grenade (RPG) attacks when they slowed to land. Although helicopters had mounted single-barrel machine guns, using them to repel attackers hidden in the dense jungle foliage often led to overheated barrels or cartridge jams. To develop a more reliable weapon with a higher rate of fire, General Electric designers scaled down the rotary-barrel 20 mm M61 Vulcan cannon for 7.62×51mm NATO ammunition. The resulting weapon, designated M134 and known as the "Minigun", could fire up to 6,000 rounds per minute without overheating. The gun has a variable (i.e. 
selectable) rate of fire, specified to fire at rates of up to 6,000 rpm, with most applications set at rates between 3,000 and 4,000 rounds per minute. The Minigun was mounted on Hughes OH-6 Cayuse and Bell OH-58 Kiowa side pods; in the turret and on pylon pods of Bell AH-1 Cobra attack helicopters; and on door, pylon and pod mounts on Bell UH-1 Iroquois transport helicopters. Several larger aircraft were outfitted with miniguns specifically for close air support: the Cessna A-37 Dragonfly with an internal gun and with pods on wing hardpoints; and the Douglas A-1 Skyraider, also with pods on wing hardpoints. Other well-known gunship aircraft include the Douglas AC-47 Spooky, the Fairchild AC-119, and the Lockheed AC-130. Dillon Aero minigun The U.S. government had procured some 10,000 Miniguns during the Vietnam War. Around 1990, Dillon Aero acquired a large number of Miniguns and spares from "a foreign user". The guns repeatedly failed to fire continuously, revealing that they were in fact worn-out weapons. The company decided to fix the problems encountered rather than simply putting the guns into storage, and fixing these failures ended up improving the Minigun's overall design. Word of Dillon's efforts to improve the Minigun reached the 160th SOAR, and the company was invited to Fort Campbell, Kentucky, to demonstrate its products. A delinker, used to separate cartridges from ammunition belts and feed them into the gun housing, and other parts were tested on Campbell's ranges. The 160th SOAR was impressed by the delinker's performance and began ordering them by 1997. This prompted Dillon to improve other design aspects, including the bolt, housing and barrel. Between 1997 and 2001, Dillon Aero was producing 25–30 products a year. In 2001, it was working on a new bolt design that increased performance and service life. By 2002, virtually every component of the minigun had been improved, so Dillon began producing complete weapons with improved components. The guns were quickly purchased by the 160th SOAR as its standardized weapon system. The gun then went through the Army's formal procurement system approval process, and in 2003 the Dillon Aero minigun was certified and designated M134D. Once the Dillon Aero system was approved for general military service, Dillon Aero GAU-17s entered Marine Corps service and were well received as replacements for the GE GAU-17s serving on Marine UH-1s. The core of the M134D was a steel housing and rotor. To reduce weight, a titanium housing and rotor were introduced, creating the lighter M134D-T. The titanium housing had a 500,000-round lifespan before it wore out, which was far higher than a conventional machine gun's 40,000-round lifespan but lower than that of other rotary guns. A hybrid of the two weapons resulted in the M134D-H, which had a steel housing and titanium rotor. The steel component made it cheaper and only slightly heavier than the M134D-T, and restored the housing's lifespan to 1.5 million rounds. The M134D-H is currently in use on various 160th SOAR platforms. Dillon also created specialized mounts and ammunition-handling systems. Initially, mounts were made only for aviation systems. Then, from 2003 to 2005, the Navy began mounting Dillon miniguns on specialized small boats. In 2005, the Naval Surface Warfare Center Crane Division procured guns to mount on Humvees. 
In Iraq, US Army Special Forces units on the ground were frequently engaged by opposition forces, so they mounted M134D miniguns on their vehicles for additional firepower. After several engagements the attackers seemed to avoid vehicles with miniguns. Later, the Special Forces units began concealing their weapons so opposition troops would not know they were facing the weapon; the regular Army units did the opposite, creating minigun mock-ups out of painted PVC pipes tied together to resemble barrels to intimidate enemies. Garwood Industries minigun Garwood Industries created the M134G version with several modifications to the original GE system. The optimum rate of fire was determined by Garwood to be around 3,200 rounds per minute (rpm). The M134G is being produced with this firing rate as well as 4,000 rpm and the previous standard 3,000 rpm rate. Garwood Industries made several other modifications to the 1960s Minigun design in order to meet modern-day military and ISO standards. This includes modifications to the drive motor, feeder and barrel clutch assembly. From 2015 to 2017 Garwood Industries CEO Tracy Garwood collaborated with firearms dealer Michael Fox and weapons smuggler Tyler Carlson to supply miniguns to Mexican drug cartels. Garwood submitted false paperwork to the ATF claiming that some M134G rotor housings had been destroyed when they were actually sold to the gun-running ring. In 2017 federal agents raided Fox's home and recovered two of the rotor housings that Garwood had reported destroyed. A number of the rotor housings were shipped to Mexico and a completed M134G using a reportedly destroyed rotor housing was recovered from a cartel by Mexican law enforcement. Garwood claimed he did not know that the intended buyers were Mexican cartels although he was aware that they were to be used for illegal activity. Design and variants The basic minigun is a six-barrel, air-cooled, and electrically driven rotary machine gun. The electric drive rotates the weapon within its housing, with a rotating firing pin assembly and rotary chamber. The minigun's multi-barrel design helps prevent overheating, but also serves other functions. Multiple barrels allow for a greater capacity for a high firing rate, since the serial process of firing, extraction, and loading is taking place in all barrels simultaneously. Thus, as one barrel fires, two others are in different stages of shell extraction and another three are being loaded. The minigun is composed of multiple closed-bolt rifle barrels arranged in a circular housing. The barrels are rotated by an external power source, usually electric, pneumatic, or hydraulic. Other rotating-barrel cannons are powered by the gas pressure or recoil energy of fired cartridges. A gas-operated variant, designated XM133, was also developed. While the weapon can feed from linked ammunition, it requires a delinking feeder to strip the links as the rounds are fed into the chambers. The original feeder unit was designated MAU-56/A, but has since been replaced by an improved MAU-201/A unit. The General Electric minigun is used in several branches of the U.S. military, under a number of designations. The basic fixed armament version was given the designation M134 by the United States Army, while the same weapon was designated GAU-2/A (on a fixed mount) and GAU-17/A (flexible mount) by the United States Air Force (USAF) and United States Navy (USN). 
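To make the load-sharing benefit of the rotary firing cycle described above concrete, a minimal arithmetic sketch is given below. The firing rates are the figures quoted in this article; the helper function name is an illustrative choice, and no thermal data are implied.

```python
# Illustrative sketch: how a rotary design spreads the firing load across barrels.
# Rates of fire are the figures quoted in the article; everything else is arithmetic.

def per_barrel_rate(total_rpm: float, barrels: int = 6) -> float:
    """Rounds fired per minute by each individual barrel."""
    return total_rpm / barrels

for total_rpm in (2000, 3000, 4000, 6000):
    print(f"{total_rpm} rpm total -> {per_barrel_rate(total_rpm):.0f} rounds per minute per barrel")

# Even at the maximum 6,000 rpm, each of the six barrels fires at roughly
# 1,000 rounds per minute, a rate comparable to a conventional single-barrel
# machine gun, which is one reason the design resists overheating.
```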
The USAF minigun variant has three versions, while the US Army weapon appears to have incorporated several improvements without a change in designation. The M134D is an improved version of the M134 designed and manufactured by Dillon Aero, while Garwood Industries manufactures the M134G variant. Available sources show a relation between both M134 and GAU-2/A and M134 and GAU-2B/A. A separate variant, designated XM196, with an added ejection sprocket was developed specifically for the XM53 Armament Subsystem on the Lockheed AH-56 Cheyenne helicopter. Another variant was developed by the USAF specifically for flexible installations, beginning primarily with the Bell UH-1N Twin Huey helicopter, as the GAU-17/A. Produced by General Dynamics, this version has a slotted flash hider. The primary end users of the GAU-17/A have been the USN and the United States Marine Corps (USMC), which mount the gun as defensive armament on a number of helicopters and surface ships. GAU-17/As from helicopters were rushed into service for ships on pintle mountings taken from Mk16 20 mm guns for anti-swarm protection in the Gulf ahead of the 2003 Iraq War - 59 systems were installed in 30 days. The GAU-17/A is designated Mk 44 in the machine gun series and is generally known as the Mk 44 when installed on British warships. The weapon is part of both the A/A49E-11 armament system on the UH-1N; and of the A/A49E-13 armament subsystem on the USAF Sikorsky HH-60H Pave Hawk helicopter. The weapons on these systems feature a selectable fire rate of either 2,000 or 4,000 rpm. There is mention of a possible GAUSE-17 designation (GAU-Shipboard Equipment-17), in reference to the system when mounted on surface ships, though this would not follow the official ASETDS designation system's format. Gun pods and other mounting systems One of the first applications of the weapon was in aircraft armament pods. These gun pods were used by a wide variety of fixed- and rotary-wing aircraft mainly during the Vietnam War, remaining in inventory for a period afterward. The standard pod, designated SUU-11/A by the Air Force and M18 by the U.S. Army, was a relatively simple unit, completely self-contained, with a 1,500-round magazine directly feeding delinked ammunition into the weapon. This means the Minigun fitted to the pod does not require the standard MAU-56/A delinking feeder unit. A number of variants of this pod exist. Initially on fixed-wing gunships such as the Douglas AC-47 Spooky and Fairchild AC-119, the side-firing armament was fitted by combining SUU-11/A aircraft pods, often with their aerodynamic front fairings removed, with a locally fabricated mount. These pods were essentially unmodified, required no external power, and were linked to the aircraft's fire controls. The need for those pods for other missions led to the development and fielding of a purpose-built "Minigun module" for gunship use, designated the MXU-470/A. These units first arrived in January 1967 with features such as an improved 2,000-round drum and electric feeder allowing simplified reloading in flight. The initial units were unreliable and were withdrawn almost immediately. 
By the end of the year, the difficulties had been worked out and the units were again being fitted to AC-47s, AC-119s, and AC-130s, with a specific ammunition load that replaced every fifth 'ball' round with a tracer round to enable better accuracy by the gunners. These airborne gunships also earned the nickname 'Puff the Magic Dragon' among the Viet Cong for their apparent ability to spit fire and make everything they hit disappear or die. The AC-47 had three side-mounted MXU-470/As (four were mounted on its replacement, the AC-119), which, when all fired at once, created a devastating impression in the eyes of the enemy. The first AC-130A Gunship IIs did away with the MXU-470/A mounts and instead carried four 7.62 mm GAU-2/A minigun mounts alongside four 20 mm M61 Vulcan six-barrel rotary cannons. This configuration was upgraded two years later, in 1969, by removing two each of the GAU-2/As and M61s and adding two 40 mm (1.58 in) L/60 Bofors cannons in the aptly named AC-130A 'Surprise Package'. That arrangement lasted two more years until, in late 1971, the AC-130E Pave Aegis arrived, which deleted the miniguns altogether, along with one of the 40 mm Bofors, and instead adopted a configuration of two 20 mm M61 Vulcans, one 40 mm L/60 Bofors, and one 105 mm (4.13 in) M102 howitzer. This configuration lasted until the early 2000s, when the AC-130Hs (the AC-130Es had received an avionics upgrade and been redesignated as H models) underwent a refit in which the two M61 Vulcans were removed and replaced with a single General Dynamics 25 mm (0.984 in) GAU-12/U Equalizer five-barrel rotary cannon (while still retaining the H suffix). The improved MXU-470/As were even proposed for lighter aircraft such as the Cessna O-2 Skymaster used by Forward Air Controllers, but proved too heavy and cumbersome. A fit of two MXU-470/As was also tested on the Fairchild AU-23A Peacemaker, though the Royal Thai Air Force later elected to use another configuration with the M197 20 mm cannon. In September 2013, Dillon Aero released the DGP2300 gun pod for the M134D-H. It contains 3,000 rounds, enough ammunition to fire the minigun for a full minute. The system is entirely self-contained, so it can be mounted on any aircraft that can handle the weight, rotational torque, and recoil force of the gun. The pod has its own battery, which can be wired into the aircraft's electrical system to maintain a charge. Various iterations of the minigun have also been used in a number of armament subsystems for helicopters, most of them created by the United States. The first systems utilized the weapon in a forward-firing role for a variety of helicopters, some of the most prominent examples being the M21 armament subsystem for the UH-1 and the M27 for the OH-6. It also formed the primary turret-mounted armament for a number of members of the Bell AH-1 Cobra family. The weapon was also used as a pintle-mounted door gun on a wide variety of transport helicopters, a role it continues to fulfill today. Users: the weapon is in service with a number of countries, on platforms including CH-47 Chinook, UH-60L, Mi-17, and UH-1N helicopters.
Technology
Specific firearms
null
1055255
https://en.wikipedia.org/wiki/Altazimuth%20mount
Altazimuth mount
An altazimuth mount or alt-azimuth mount is a simple two-axis mount for supporting and rotating an instrument about two perpendicular axes – one vertical and the other horizontal. Rotation about the vertical axis varies the azimuth (compass bearing) of the pointing direction of the instrument. Rotation about the horizontal axis varies the altitude angle (angle of elevation) of the pointing direction. These mounts are used, for example, with telescopes, cameras, radio antennas, heliostat mirrors, solar panels, and guns and similar weapons. Several names are given to this kind of mount, including altitude-azimuth, azimuth-elevation and various abbreviations thereof. A gun turret is essentially an alt-azimuth mount for a gun, and a standard camera tripod is an alt-azimuth mount as well. Astronomical telescope altazimuth mounts When used as an astronomical telescope mount, the biggest advantage of an alt-azimuth mount is the simplicity of its mechanical design. The primary disadvantage is its inability to follow astronomical objects in the night sky as the Earth spins on its axis. On the other hand, an equatorial mount only needs to be rotated about a single axis, at a constant rate, to follow the rotation of the night sky (diurnal motion). Altazimuth mounts need to be rotated about both axes at variable rates, achieved via microprocessor based two-axis drive systems, to track equatorial motion. This imparts an uneven rotation to the field of view that also has to be corrected via a microprocessor based counter rotation system. On smaller telescopes an equatorial platform is sometimes used to add a third "polar axis" to overcome these problems, providing an hour or more of motion in the direction of right ascension to allow for astronomical tracking. The design also does not allow for the use of mechanical setting circles to locate astronomical objects although modern digital setting circles have removed this shortcoming. Another limitation is the problem of gimbal lock at zenith pointing. When tracking at elevations close to 90°, the azimuth axis must rotate very quickly; if the altitude is exactly 90°, the speed is infinite. Thus, altazimuth telescopes, although they can point in any direction, cannot track smoothly within a "zenith blind spot", commonly 0.5 or 0.75 degrees from the zenith. (i.e. at elevations greater than 89.5° or 89.25° respectively.) Current applications Typical current applications of altazimuth mounts include the following. Research telescopes In the largest telescopes, the mass and cost of an equatorial mount is prohibitive and they have been superseded by computer-controlled altazimuth mounts. The simple structure of an altazimuth mount allows significant cost reductions, in spite of the additional cost associated with the more complex tracking and image-orienting mechanisms. An altazimuth mount also reduces the cost in the dome structure covering the telescope since the simplified motion of the telescope means the structure can be more compact. Amateur telescopes Beginner telescopes: Altazimuth mounts are cheap and simple to use. Dobsonian telescopes: John Dobson popularized a simplified altazimuth mount design for Newtonian reflectors because of its ease of construction; Dobson's innovation was to use non-machined parts for the mount that could be found in any hardware store such as plywood, formica, and plastic plumbing parts combined with modern materials like nylon or teflon. 
"GoTo" telescopes: It has often proved more convenient to build a mechanically simpler altazimuth mount and use a motion controller to manipulate both axes simultaneously to track an object, when compared with a more mechanically complex equatorial mount that requires minimally complex control of a single motor. Gallery
Technology
Telescope
null
1055334
https://en.wikipedia.org/wiki/Equatorial%20mount
Equatorial mount
An equatorial mount is a mount for instruments that compensates for Earth's rotation by having one rotational axis, called polar axis, parallel to the Earth's axis of rotation. This type of mount is used for astronomical telescopes and cameras. The advantage of an equatorial mount lies in its ability to allow the instrument attached to it to stay fixed on any celestial object with diurnal motion by driving one axis at a constant speed. Such an arrangement is called a sidereal drive or clock drive. Equatorial mounts achieve this by aligning their rotational axis with the Earth, a process known as polar alignment. Astronomical telescope mounts In astronomical telescope mounts, the equatorial axis (the right ascension) is paired with a second perpendicular axis of motion (known as the declination). The equatorial axis of the mount is often equipped with a motorized "clock drive", that rotates that axis one revolution every 23 hours and 56 minutes in exact sync with the apparent diurnal motion of the sky. They may also be equipped with setting circles to allow for the location of objects by their celestial coordinates. Equatorial mounts differ from mechanically simpler altazimuth mounts, which require variable speed motion around both axes to track a fixed object in the sky. Also, for astrophotography, the image does not rotate in the focal plane, as occurs with altazimuth mounts when they are guided to track the target's motion, unless a rotating erector prism or other field-derotator is installed. Equatorial telescope mounts come in many designs. In the last twenty years motorized tracking has increasingly been supplemented with computerized object location. There are two main types. Digital setting circles take a small computer with an object database that is attached to encoders. The computer monitors the telescope's position in the sky. The operator must push the telescope. Go-to systems use (in most cases) a worm and ring gear system driven by servo or stepper motors, and the operator need not touch the instrument at all to change its position in the sky. The computers in these systems are typically either hand-held in a control "paddle" or supplied through an adjacent laptop computer which is also used to capture images from an electronic camera. The electronics of modern telescope systems often include a port for autoguiding. A special instrument tracks a star and makes adjustment in the telescope's position while photographing the sky. To do so the autoguider must be able to issue commands through the telescope's control system. These commands can compensate for very slight errors in the tracking performance, such as periodic error caused by the worm drive that makes the telescope move. In new observatory designs, equatorial mounts have been out of favor for decades in large-scale professional applications. Massive new instruments are most stable when mounted in an alt-azimuth (up down, side-to-side) configuration. Computerized tracking and field-derotation are not difficult to implement at the professional level. At the amateur level, however, equatorial mounts remain popular, particularly for astrophotography. German equatorial mount In the German equatorial mount, (sometimes called a "GEM" for short) the primary structure is a T-shape, where the lower bar is the right ascension axis (lower diagonal axis in image), and the upper bar is the declination axis (upper diagonal axis in image). 
The mount was developed by Joseph von Fraunhofer for the Great Dorpat Refractor, which was finished in 1824. The telescope is placed on one end of the declination axis (top left in image), and a suitable counterweight on the other end of it (bottom right). The right ascension axis has bearings below the T-joint, that is, it is not supported above the declination axis. Open fork mount The open fork mount has a fork attached to a right ascension axis at its base. The telescope is attached to two pivot points at the other end of the fork so it can swing in declination. Most modern mass-produced catadioptric telescopes (200 mm or larger in diameter) are of this type. The mount resembles an altazimuth mount, but with the azimuth axis tilted to line up with the Earth's rotation axis using a piece of hardware usually called a "wedge". Many mid-size professional telescopes also use equatorial forks; these are usually in the 0.5–2.0 m diameter range. English or Yoke mount The English mount or yoke mount has a frame or "yoke" with right ascension axis bearings at the top and bottom ends, and a telescope attached inside at the midpoint of the yoke, allowing it to swing on the declination axis. The telescope is usually fitted entirely inside the yoke, although there are exceptions such as the Mount Wilson 2.5 m reflector, and there are no counterweights as with the German mount. The original English yoke design has the disadvantage that it does not allow the telescope to point close to the north or south celestial pole. Horseshoe mount The horseshoe mount overcomes this disadvantage of English or yoke mounts by replacing the polar bearing with an open "horseshoe" structure that allows the telescope to access Polaris and stars near it. The Hale Telescope is the most prominent example of a horseshoe mount in use. Cross-axis mount The cross-axis or English cross-axis mount resembles a large plus sign (+). The right ascension axis is supported at both ends, and the declination axis is attached to it at approximately its midpoint, with the telescope on one end of the declination axis and a counterweight on the other. Equatorial platform An equatorial platform is a specially designed platform that allows any device sitting on it to track on an equatorial axis. It achieves this by having a surface that pivots about a "virtual polar axis". This gives equatorial tracking to anything sitting on the platform, from small cameras up to entire observatory buildings. These platforms are often used with altazimuth-mounted amateur astronomical telescopes, such as the common Dobsonian type, to overcome that type of mount's inability to track the night sky.
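As a rough illustration of the clock-drive requirement described above, the sidereal tracking rate and a corresponding motor step rate can be worked out in a few lines. This is a minimal sketch: the 23 h 56 min figure comes from the article, while the worm-gear ratio and stepper resolution are hypothetical example values, not specifications of any particular mount.

```python
# Sketch: sidereal tracking rate for an equatorial clock drive.

SIDEREAL_DAY_S = 23 * 3600 + 56 * 60 + 4          # about 86,164 seconds
sidereal_rate_deg_per_s = 360.0 / SIDEREAL_DAY_S
print(f"Sidereal rate: {sidereal_rate_deg_per_s * 3600:.2f} arcsec per second")
# -> about 15.04 arcseconds of right ascension per second of time

# Hypothetical drive train: 180:1 worm wheel, 200-step motor, 16 microsteps.
WORM_RATIO, STEPS_PER_REV, MICROSTEPS = 180, 200, 16
steps_per_axis_rev = WORM_RATIO * STEPS_PER_REV * MICROSTEPS
step_rate_hz = steps_per_axis_rev * sidereal_rate_deg_per_s / 360.0
print(f"Required step rate: {step_rate_hz:.1f} microsteps per second")
# -> roughly 6.7 microsteps per second to keep pace with diurnal motion
```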
Technology
Telescope
null
1055890
https://en.wikipedia.org/wiki/Sustainable%20energy
Sustainable energy
Energy is sustainable if it "meets the needs of the present without compromising the ability of future generations to meet their own needs." Definitions of sustainable energy usually look at its effects on the environment, the economy, and society. These impacts range from greenhouse gas emissions and air pollution to energy poverty and toxic waste. Renewable energy sources such as wind, hydro, solar, and geothermal energy can cause environmental damage but are generally far more sustainable than fossil fuel sources. The role of non-renewable energy sources in sustainable energy is controversial. Nuclear power does not produce carbon pollution or air pollution, but has drawbacks that include radioactive waste, the risk of nuclear proliferation, and the risk of accidents. Switching from coal to natural gas has environmental benefits, including a lower climate impact, but may lead to a delay in switching to more sustainable options. Carbon capture and storage can be built into power plants to remove their carbon dioxide (CO2) emissions, but this technology is expensive and has rarely been implemented. Fossil fuels provide 85% of the world's energy consumption, and the energy system is responsible for 76% of global greenhouse gas emissions. Around 790 million people in developing countries lack access to electricity, and 2.6 billion rely on polluting fuels such as wood or charcoal to cook. Air pollution from cooking with biomass and from burning fossil fuels causes an estimated 7 million deaths each year. Limiting global warming to 2 °C (3.6 °F) will require transforming energy production, distribution, storage, and consumption. Universal access to clean electricity can have major benefits to the climate, human health, and the economies of developing countries. Proposed climate change mitigation pathways include phasing out coal-fired power plants, conserving energy, producing more electricity from clean sources such as wind and solar, and switching from fossil fuels to electricity for transport and heating buildings. Power output from some renewable energy sources varies depending on when the wind blows and the sun shines. Switching to renewable energy can therefore require electrical grid upgrades, such as the addition of energy storage. Some processes that are difficult to electrify can use hydrogen fuel produced from low-emission energy sources. In the International Energy Agency's proposal for achieving net zero emissions by 2050, about 35% of the reduction in emissions depends on technologies that are still in development as of 2023. Wind and solar market share grew to 8.5% of worldwide electricity in 2019, and costs continue to fall. The Intergovernmental Panel on Climate Change (IPCC) estimates that 2.5% of world gross domestic product (GDP) would need to be invested in the energy system each year between 2016 and 2035 to limit global warming to 1.5 °C (2.7 °F). Governments can fund the research, development, and demonstration of new clean energy technologies. They can also build infrastructure for electrification and sustainable transport. Finally, governments can encourage clean energy deployment with policies such as carbon pricing, renewable portfolio standards, and phase-outs of fossil fuel subsidies. These policies may also increase energy security. Definitions and background Definitions The United Nations Brundtland Commission described the concept of sustainable development, for which energy is a key component, in its 1987 report Our Common Future. 
It defined sustainable development as meeting "the needs of the present without compromising the ability of future generations to meet their own needs". This description of sustainable development has since been referenced in many definitions and explanations of sustainable energy. There is no universally accepted interpretation of how the concept of sustainability applies to energy on a global scale. Working definitions of sustainable energy encompass multiple dimensions of sustainability such as environmental, economic, and social dimensions. Historically, the concept of sustainable energy development has focused on emissions and on energy security. Since the early 1990s, the concept has broadened to encompass wider social and economic issues. The environmental dimension of sustainability includes greenhouse gas emissions, impacts on biodiversity and ecosystems, hazardous waste and toxic emissions, water consumption, and depletion of non-renewable resources. Energy sources with low environmental impact are sometimes called green energy or clean energy. The economic dimension of sustainability covers economic development, efficient use of energy, and energy security to ensure that each country has constant access to sufficient energy. Social issues include access to affordable and reliable energy for all people, workers' rights, and land rights. Environmental impacts The current energy system contributes to many environmental problems, including climate change, air pollution, biodiversity loss, the release of toxins into the environment, and water scarcity. As of 2019, 85% of the world's energy needs are met by burning fossil fuels. Energy production and consumption are responsible for 76% of annual human-caused greenhouse gas emissions as of 2018. The 2015 international Paris Agreement on climate change aims to limit global warming to well below 2 °C (3.6 °F) and preferably to 1.5 °C (2.7 °F); achieving this goal will require that emissions be reduced as soon as possible and reach net-zero by mid-century. The burning of fossil fuels and biomass is a major source of air pollution, which causes an estimated 7 million deaths each year, with the greatest attributable disease burden seen in low and middle-income countries. Fossil-fuel burning in power plants, vehicles, and factories is the main source of the emissions that react with oxygen and water in the atmosphere to cause acid rain. Air pollution is the second-leading cause of death from non-infectious disease. An estimated 99% of the world's population lives with levels of air pollution that exceed the World Health Organization recommended limits. Cooking with polluting fuels such as wood, animal dung, coal, or kerosene is responsible for nearly all indoor air pollution, which causes an estimated 1.6 to 3.8 million deaths annually, and also contributes significantly to outdoor air pollution. Health effects are concentrated among women, who are likely to be responsible for cooking, and young children. Environmental impacts extend beyond the by-products of combustion. Oil spills at sea harm marine life and may cause fires which release toxic emissions. Around 10% of global water use goes to energy production, mainly for cooling in thermal energy plants. In dry regions, this contributes to water scarcity. Bioenergy production, coal mining and processing, and oil extraction also require large amounts of water. Excessive harvesting of wood and other combustible material for burning can cause serious local environmental damage, including desertification. 
Sustainable development goals Meeting existing and future energy demands in a sustainable way is a critical challenge for the global goal of limiting climate change while maintaining economic growth and enabling living standards to rise. Reliable and affordable energy, particularly electricity, is essential for health care, education, and economic development. As of 2020, 790 million people in developing countries do not have access to electricity, and around 2.6 billion rely on burning polluting fuels for cooking. Improving energy access in the least-developed countries and making energy cleaner are key to achieving most of the United Nations 2030 Sustainable Development Goals, which cover issues ranging from climate action to gender equality. Sustainable Development Goal 7 calls for "access to affordable, reliable, sustainable and modern energy for all", including universal access to electricity and to clean cooking facilities by 2030. Energy conservation Energy efficiency—using less energy to deliver the same goods or services, or delivering comparable services with less goods—is a cornerstone of many sustainable energy strategies. The International Energy Agency (IEA) has estimated that increasing energy efficiency could achieve 40% of greenhouse gas emission reductions needed to fulfil the Paris Agreement's goals. Energy can be conserved by increasing the technical efficiency of appliances, vehicles, industrial processes, and buildings. Another approach is to use fewer materials whose production requires a lot of energy, for example through better building design and recycling. Behavioural changes such as using videoconferencing rather than business flights, or making urban trips by cycling, walking or public transport rather than by car, are another way to conserve energy. Government policies to improve efficiency can include building codes, performance standards, carbon pricing, and the development of energy-efficient infrastructure to encourage changes in transport modes. The energy intensity of the global economy (the amount of energy consumed per unit of gross domestic product (GDP)) is a rough indicator of the energy efficiency of economic production. In 2010, global energy intensity was 5.6 megajoules (1.6 kWh) per US dollar of GDP. United Nations goals call for energy intensity to decrease by 2.6% each year between 2010 and 2030. In recent years this target has not been met. For instance, between 2017 and 2018, energy intensity decreased by only 1.1%. Efficiency improvements often lead to a rebound effect in which consumers use the money they save to buy more energy-intensive goods and services. For example, recent technical efficiency improvements in transport and buildings have been largely offset by trends in consumer behaviour, such as selecting larger vehicles and homes. Sustainable energy sources Renewable energy sources Renewable energy sources are essential to sustainable energy, as they generally strengthen energy security and emit far fewer greenhouse gases than fossil fuels. Renewable energy projects sometimes raise significant sustainability concerns, such as risks to biodiversity when areas of high ecological value are converted to bioenergy production or wind or solar farms. Hydropower is the largest source of renewable electricity while solar and wind energy are growing rapidly. Photovoltaic solar and onshore wind are the cheapest forms of new power generation capacity in most countries. 
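To put the energy-intensity target quoted above into perspective, a short compounding calculation shows what a 2.6% annual decline implies over two decades. This is a minimal sketch: the 5.6 MJ per dollar starting point and the 2.6% and 1.1% rates come from the text above, and the rest is arithmetic.

```python
# Sketch: compounding the UN energy-intensity target from its 2010 baseline.

intensity_2010 = 5.6        # megajoules per US dollar of GDP (2010 value)
target_decline = 0.026      # 2.6% per year, the UN target for 2010-2030
observed_decline = 0.011    # 1.1% per year, as observed between 2017 and 2018

for year in (2015, 2020, 2025, 2030):
    target = intensity_2010 * (1 - target_decline) ** (year - 2010)
    actual_pace = intensity_2010 * (1 - observed_decline) ** (year - 2010)
    print(f"{year}: target {target:.2f} MJ/$ vs. {actual_pace:.2f} MJ/$ at the observed pace")

# Meeting the target would cut intensity by roughly 40% by 2030 (to ~3.3 MJ/$);
# at the observed 1.1% pace the 2030 figure would only be about 4.5 MJ/$.
```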
For more than half of the 770 million people who currently lack access to electricity, decentralised renewable energy such as solar-powered mini-grids is likely the cheapest method of providing it by 2030. United Nations targets for 2030 include substantially increasing the proportion of renewable energy in the world's energy supply. According to the International Energy Agency, renewable energy sources like wind and solar power are now a commonplace source of electricity, making up 70% of all new investments made in the world's power generation. The Agency expects renewables to become the primary energy source for electricity generation globally in the next three years, overtaking coal. Solar The Sun is Earth's primary source of energy, a clean and abundantly available resource in many regions. In 2019, solar power provided around 3% of global electricity, mostly through solar panels based on photovoltaic cells (PV). Solar PV is expected to be the electricity source with the largest installed capacity worldwide by 2027. The panels are mounted on top of buildings or installed in utility-scale solar parks. Costs of solar photovoltaic cells have dropped rapidly, driving strong growth in worldwide capacity. The cost of electricity from new solar farms is competitive with, or in many places, cheaper than electricity from existing coal plants. Various projections of future energy use identify solar PV as one of the main sources of energy generation in a sustainable mix. Most components of solar panels can be easily recycled, but this is not always done in the absence of regulation. Panels typically contain heavy metals, so they pose environmental risks if put in landfills. It takes fewer than two years for a solar panel to produce as much energy as was used for its production. Less energy is needed if materials are recycled rather than mined. In concentrated solar power, solar rays are concentrated by a field of mirrors, heating a fluid. Electricity is produced from the resulting steam with a heat engine. Concentrated solar power can support dispatchable power generation, as some of the heat is typically stored to enable electricity to be generated when needed. In addition to electricity production, solar energy is used more directly; solar thermal heating systems are used for hot water production, heating buildings, drying, and desalination. Wind power Wind has been an important driver of development over millennia, providing mechanical energy for industrial processes, water pumps, and sailing ships. Modern wind turbines are used to generate electricity and provided approximately 6% of global electricity in 2019. Electricity from onshore wind farms is often cheaper than existing coal plants and competitive with natural gas and nuclear. Wind turbines can also be placed offshore, where winds are steadier and stronger than on land but construction and maintenance costs are higher. Onshore wind farms, often built in wild or rural areas, have a visual impact on the landscape. While collisions with wind turbines kill both bats and to a lesser extent birds, these impacts are lower than from other infrastructure such as windows and transmission lines. The noise and flickering light created by the turbines can cause annoyance and constrain construction near densely populated areas. Wind power, in contrast to nuclear and fossil fuel plants, does not consume water. Little energy is needed for wind turbine construction compared to the energy produced by the wind power plant itself. 
Turbine blades are not fully recyclable, and research into methods of manufacturing easier-to-recycle blades is ongoing. Hydropower Hydroelectric plants convert the energy of moving water into electricity. In 2020, hydropower supplied 17% of the world's electricity, down from a high of nearly 20% in the mid-to-late 20th century. In conventional hydropower, a reservoir is created behind a dam. Conventional hydropower plants provide a highly flexible, dispatchable electricity supply. They can be combined with wind and solar power to meet peaks in demand and to compensate when wind and sun are less available. Compared to reservoir-based facilities, run-of-the-river hydroelectricity generally has less environmental impact. However, its ability to generate power depends on river flow, which can vary with daily and seasonal weather. Reservoirs provide water quantity controls that are used for flood control and flexible electricity output while also providing security during drought for drinking water supply and irrigation. Hydropower ranks among the energy sources with the lowest levels of greenhouse gas emissions per unit of energy produced, but levels of emissions vary enormously between projects. The highest emissions tend to occur with large dams in tropical regions. These emissions are produced when the biological matter that becomes submerged in the reservoir's flooding decomposes and releases carbon dioxide and methane. Deforestation and climate change can reduce energy generation from hydroelectric dams. Depending on location, large dams can displace residents and cause significant local environmental damage; potential dam failure could place the surrounding population at risk. Geothermal Geothermal energy is produced by tapping into deep underground heat and harnessing it to generate electricity or to heat water and buildings. The use of geothermal energy is concentrated in regions where heat extraction is economical: a combination is needed of high temperatures, heat flow, and permeability (the ability of the rock to allow fluids to pass through). Power is produced from the steam created in underground reservoirs. Geothermal energy provided less than 1% of global energy consumption in 2020. Geothermal energy is a renewable resource because thermal energy is constantly replenished from neighbouring hotter regions and the radioactive decay of naturally occurring isotopes. On average, the greenhouse gas emissions of geothermal-based electricity are less than 5% that of coal-based electricity. Geothermal energy carries a risk of inducing earthquakes, needs effective protection to avoid water pollution, and releases toxic emissions which can be captured. Bioenergy Biomass is renewable organic material that comes from plants and animals. It can either be burned to produce heat and electricity or be converted into biofuels such as biodiesel and ethanol, which can be used to power vehicles. The climate impact of bioenergy varies considerably depending on where biomass feedstocks come from and how they are grown. For example, burning wood for energy releases carbon dioxide; those emissions can be significantly offset if the trees that were harvested are replaced by new trees in a well-managed forest, as the new trees will absorb carbon dioxide from the air as they grow. However, the establishment and cultivation of bioenergy crops can displace natural ecosystems, degrade soils, and consume water resources and synthetic fertilisers. 
Approximately one-third of all wood used for traditional heating and cooking in tropical areas is harvested unsustainably. Bioenergy feedstocks typically require significant amounts of energy to harvest, dry, and transport; the energy usage for these processes may emit greenhouse gases. In some cases, the impacts of land-use change, cultivation, and processing can result in higher overall carbon emissions for bioenergy compared to using fossil fuels. Use of farmland for growing biomass can result in less land being available for growing food. In the United States, around 10% of motor gasoline has been replaced by corn-based ethanol, which requires a significant proportion of the harvest. In Malaysia and Indonesia, clearing forests to produce palm oil for biodiesel has led to serious social and environmental effects, as these forests are critical carbon sinks and habitats for diverse species. Since photosynthesis captures only a small fraction of the energy in sunlight, producing a given amount of bioenergy requires a large amount of land compared to other renewable energy sources. Second-generation biofuels which are produced from non-food plants or waste reduce competition with food production, but may have other negative effects including trade-offs with conservation areas and local air pollution. Relatively sustainable sources of biomass include algae, waste, and crops grown on soil unsuitable for food production. Carbon capture and storage technology can be used to capture emissions from bioenergy power plants. This process is known as bioenergy with carbon capture and storage (BECCS) and can result in net carbon dioxide removal from the atmosphere. However, BECCS can also result in net positive emissions depending on how the biomass material is grown, harvested, and transported. Deployment of BECCS at scales described in some climate change mitigation pathways would require converting large amounts of cropland. Marine energy Marine energy has the smallest share of the energy market. It includes OTEC, tidal power, which is approaching maturity, and wave power, which is earlier in its development. Two tidal barrage systems in France and in South Korea make up 90% of global production. While single marine energy devices pose little risk to the environment, the impacts of larger devices are less well known. Non-renewable energy sources Fossil fuel switching and mitigation Switching from coal to natural gas has advantages in terms of sustainability. For a given unit of energy produced, the life-cycle greenhouse-gas emissions of natural gas are around 40 times the emissions of wind or nuclear energy but are much less than coal. Burning natural gas produces around half the emissions of coal when used to generate electricity and around two-thirds the emissions of coal when used to produce heat. Natural gas combustion also produces less air pollution than coal. However, natural gas is a potent greenhouse gas in itself, and leaks during extraction and transportation can negate the advantages of switching away from coal. The technology to curb methane leaks is widely available but it is not always used. Switching from coal to natural gas reduces emissions in the short term and thus contributes to climate change mitigation. However, in the long term it does not provide a path to net-zero emissions. 
Developing natural gas infrastructure risks carbon lock-in and stranded assets, where new fossil infrastructure either commits to decades of carbon emissions or has to be written off before it makes a profit. The greenhouse gas emissions of fossil fuel and biomass power plants can be significantly reduced through carbon capture and storage (CCS). Most studies use a working assumption that CCS can capture 85–90% of the carbon dioxide (CO2) emissions from a power plant. Even if 90% of the emitted CO2 is captured from a coal-fired power plant, its uncaptured emissions are still many times greater than the emissions of nuclear, solar or wind energy per unit of electricity produced. Since coal plants using CCS are less efficient, they require more coal and thus increase the pollution associated with mining and transporting coal. CCS is one of the most expensive ways of reducing emissions in the energy sector. Deployment of this technology is very limited. As of 2024, CCS is used in only 5 power plants and in 39 other facilities. Nuclear power Nuclear power has been used since the 1950s as a low-carbon source of baseload electricity. Nuclear power plants in over 30 countries generate about 10% of global electricity. As of 2019, nuclear generated over a quarter of all low-carbon energy, making it the second largest source after hydropower. Nuclear power's lifecycle greenhouse gas emissions—including the mining and processing of uranium—are similar to the emissions from renewable energy sources. Nuclear power uses little land per unit of energy produced, compared to the major renewables. Additionally, nuclear power does not create local air pollution. Although the uranium ore used to fuel nuclear fission plants is a non-renewable resource, enough exists to provide a supply for hundreds to thousands of years. However, the uranium resources that can currently be accessed in an economically feasible manner are limited, and uranium production might struggle to keep pace during a rapid expansion of nuclear capacity. Climate change mitigation pathways consistent with ambitious goals typically see an increase in power supply from nuclear. There is controversy over whether nuclear power is sustainable, in part due to concerns around nuclear waste, nuclear weapon proliferation, and accidents. Radioactive nuclear waste must be managed for thousands of years. For each unit of energy produced, nuclear energy has caused far fewer accidental and pollution-related deaths than fossil fuels, and the historic fatality rate of nuclear is comparable to renewable sources. Public opposition to nuclear energy often makes nuclear plants politically difficult to implement. Reducing the time and cost of building new nuclear plants has been a goal for decades, but costs remain high and timescales long. Various new forms of nuclear energy are in development in the hope of addressing the drawbacks of conventional plants. Fast breeder reactors are capable of recycling nuclear waste and therefore can significantly reduce the amount of waste that requires geological disposal, but have not yet been deployed on a large-scale commercial basis. Nuclear power based on thorium (rather than uranium) may be able to provide higher energy security for countries that do not have a large supply of uranium. Small modular reactors may have several advantages over current large reactors: it should be possible to build them faster, and their modularization would allow for cost reductions via learning-by-doing. 
Several countries are attempting to develop nuclear fusion reactors, which would generate small amounts of waste and no risk of explosions. Although fusion power has taken steps forward in the lab, the multi-decade timescale needed to bring it to commercialization and then scale means it will not contribute to a 2050 net zero goal for climate change mitigation. Energy system transformation Decarbonisation of the global energy system The emissions reductions necessary to keep global warming below 2°C will require a system-wide transformation of the way energy is produced, distributed, stored, and consumed. For a society to replace one form of energy with another, multiple technologies and behaviours in the energy system must change. For example, transitioning from oil to solar power as the energy source for cars requires the generation of solar electricity, modifications to the electrical grid to accommodate fluctuations in solar panel output or the introduction of variable battery chargers and higher overall demand, adoption of electric cars, and networks of electric vehicle charging facilities and repair shops. Many climate change mitigation pathways envision three main aspects of a low-carbon energy system: The use of low-emission energy sources to produce electricity Electrification – that is increased use of electricity instead of directly burning fossil fuels Accelerated adoption of energy efficiency measures Some energy-intensive technologies and processes are difficult to electrify, including aviation, shipping, and steelmaking. There are several options for reducing the emissions from these sectors: biofuels and synthetic carbon-neutral fuels can power many vehicles that are designed to burn fossil fuels, however biofuels cannot be sustainably produced in the quantities needed and synthetic fuels are currently very expensive. For some applications, the most prominent alternative to electrification is to develop a system based on sustainably-produced hydrogen fuel. Full decarbonisation of the global energy system is expected to take several decades and can mostly be achieved with existing technologies. In the IEA's proposal for achieving net zero emissions by 2050, about 35% of the reduction in emissions depends on technologies that are still in development as of 2023. Technologies that are relatively immature include batteries and processes to create carbon-neutral fuels. Developing new technologies requires research and development, demonstration, and cost reductions via deployment. The transition to a zero-carbon energy system will bring strong co-benefits for human health: The World Health Organization estimates that efforts to limit global warming to 1.5 °C could save millions of lives each year from reductions to air pollution alone. With good planning and management, pathways exist to provide universal access to electricity and clean cooking by 2030 in ways that are consistent with climate goals. Historically, several countries have made rapid economic gains through coal usage. However, there remains a window of opportunity for many poor countries and regions to "leapfrog" fossil fuel dependency by developing their energy systems based on renewables, given adequate international investment and knowledge transfer. Integrating variable energy sources To deliver reliable electricity from variable renewable energy sources such as wind and solar, electrical power systems require flexibility. 
Most electrical grids were constructed for non-intermittent energy sources such as coal-fired power plants. As larger amounts of solar and wind energy are integrated into the grid, changes have to be made to the energy system to ensure that the supply of electricity is matched to demand. In 2019, these sources generated 8.5% of worldwide electricity, a share that has grown rapidly. There are various ways to make the electricity system more flexible. In many places, wind and solar generation are complementary on a daily and a seasonal scale: there is more wind during the night and in winter when solar energy production is low. Linking different geographical regions through long-distance transmission lines allows for further cancelling out of variability. Energy demand can be shifted in time through energy demand management and the use of smart grids, matching the times when variable energy production is highest. With grid energy storage, energy produced in excess can be released when needed. Further flexibility could be provided from sector coupling, that is coupling the electricity sector to the heat and mobility sector via power-to-heat-systems and electric vehicles. Building overcapacity for wind and solar generation can help ensure that enough electricity is produced even during poor weather. In optimal weather, energy generation may have to be curtailed if excess electricity cannot be used or stored. The final demand-supply mismatch may be covered by using dispatchable energy sources such as hydropower, bioenergy, or natural gas. Energy storage Energy storage helps overcome barriers to intermittent renewable energy and is an important aspect of a sustainable energy system. The most commonly used and available storage method is pumped-storage hydroelectricity, which requires locations with large differences in height and access to water. Batteries, especially lithium-ion batteries, are also deployed widely. Batteries typically store electricity for short periods; research is ongoing into technology with sufficient capacity to last through seasons. Costs of utility-scale batteries in the US have fallen by around 70% since 2015, however the cost and low energy density of batteries makes them impractical for the very large energy storage needed to balance inter-seasonal variations in energy production. Pumped hydro storage and power-to-gas (converting electricity to gas and back) with capacity for multi-month usage has been implemented in some locations. Electrification Compared to the rest of the energy system, emissions can be reduced much faster in the electricity sector. As of 2019, 37% of global electricity is produced from low-carbon sources (renewables and nuclear energy). Fossil fuels, primarily coal, produce the rest of the electricity supply. One of the easiest and fastest ways to reduce greenhouse gas emissions is to phase out coal-fired power plants and increase renewable electricity generation. Climate change mitigation pathways envision extensive electrification—the use of electricity as a substitute for the direct burning of fossil fuels for heating buildings and for transport. Ambitious climate policy would see a doubling of energy share consumed as electricity by 2050, from 20% in 2020. One of the challenges in providing universal access to electricity is distributing power to rural areas. Off-grid and mini-grid systems based on renewable energy, such as small solar PV installations that generate and store enough electricity for a village, are important solutions. 
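The flexibility options described above for integrating variable renewables (demand shifting, storage, curtailment, and dispatchable backup) can be illustrated with a toy hourly balance. The demand and wind-plus-solar profiles below are invented numbers chosen only for demonstration, not data from this article.

```python
# Toy sketch of balancing variable renewables over one day (2-hour blocks).
# All figures are illustrative; the logic simply shows how storage and a
# dispatchable source cover the gap between variable supply and demand.

demand     = [30, 28, 27, 30, 35, 40, 45, 44, 42, 40, 38, 36]  # GW
renewables = [20, 22, 25, 35, 50, 55, 50, 40, 30, 22, 18, 16]  # GW

storage, STORAGE_CAP, EFF = 0.0, 60.0, 0.9   # GWh stored, capacity, round-trip efficiency
dispatchable = curtailed = 0.0

for d, r in zip(demand, renewables):
    surplus_gwh = (r - d) * 2.0               # energy surplus over the 2-hour block
    if surplus_gwh >= 0:                      # charge storage, curtail what does not fit
        stored = min(surplus_gwh * EFF, STORAGE_CAP - storage)
        storage += stored
        curtailed += surplus_gwh - stored / EFF
    else:                                     # discharge storage, then call on backup
        draw = min(-surplus_gwh, storage)
        storage -= draw
        dispatchable += -surplus_gwh - draw

print(f"Dispatchable backup needed: {dispatchable:.0f} GWh")
print(f"Renewable energy curtailed: {curtailed:.0f} GWh")
```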
Wider access to reliable electricity would lead to less use of kerosene lighting and diesel generators, which are currently common in the developing world. Infrastructure for generating and storing renewable electricity requires minerals and metals, such as cobalt and lithium for batteries and copper for solar panels. Recycling can meet some of this demand if product lifecycles are well designed; however, achieving net zero emissions would still require major increases in mining for 17 types of metals and minerals. A small group of countries or companies sometimes dominates the markets for these commodities, raising geopolitical concerns. Most of the world's cobalt, for instance, is mined in the Democratic Republic of the Congo, a politically unstable region where mining is often associated with human rights risks. More diverse geographical sourcing may ensure a more flexible and less brittle supply chain. Hydrogen Hydrogen gas is widely discussed in the context of energy, as an energy carrier with potential to reduce greenhouse gas emissions. This requires hydrogen to be produced cleanly and in sufficient quantities for the sectors and applications where cheaper and more energy-efficient mitigation alternatives are limited. These applications include heavy industry and long-distance transport. Hydrogen can be deployed as an energy source in fuel cells to produce electricity, or via combustion to generate heat. When hydrogen is consumed in fuel cells, the only emission at the point of use is water vapour. Combustion of hydrogen can lead to the thermal formation of harmful nitrogen oxides. The overall lifecycle emissions of hydrogen depend on how it is produced. Nearly all of the world's current supply of hydrogen is created from fossil fuels. The main method is steam methane reforming, in which hydrogen is produced from a chemical reaction between steam and methane, the main component of natural gas. Producing one tonne of hydrogen through this process emits 6.6–9.3 tonnes of carbon dioxide. While carbon capture and storage (CCS) could remove a large fraction of these emissions, the overall carbon footprint of hydrogen from natural gas is difficult to assess, in part because of emissions (including vented and fugitive methane) created in the production of the natural gas itself. Electricity can be used to split water molecules, producing sustainable hydrogen provided the electricity was generated sustainably. However, this electrolysis process is currently more expensive than creating hydrogen from methane without CCS, and the efficiency of energy conversion is inherently low. Hydrogen can be produced when there is a surplus of variable renewable electricity, then stored and used to generate heat or to re-generate electricity. It can be further transformed into liquid fuels such as green ammonia and green methanol. Innovation in hydrogen electrolysers could make large-scale production of hydrogen from electricity more cost-competitive. Hydrogen fuel can produce the intense heat required for industrial production of steel, cement, glass, and chemicals, thus contributing to the decarbonisation of industry alongside other technologies, such as electric arc furnaces for steelmaking. For steelmaking, hydrogen can function as a clean energy carrier and simultaneously as a low-carbon catalyst replacing coal-derived coke. Hydrogen used to decarbonise transportation is likely to find its largest applications in shipping, aviation and, to a lesser extent, heavy goods vehicles. 
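A back-of-the-envelope stoichiometric check helps relate the 6.6–9.3 tonnes of CO2 per tonne of hydrogen quoted above to the underlying chemistry of steam methane reforming. The sketch below considers only the overall reaction CH4 + 2 H2O -> CO2 + 4 H2; it deliberately ignores the process heat and upstream methane losses that push real-world figures into the quoted range.

```python
# Sketch: stoichiometric floor for CO2 emissions from steam methane reforming.
# Overall reaction: CH4 + 2 H2O -> CO2 + 4 H2 (process energy not included).

M_CO2 = 44.01   # molar mass of CO2, g/mol
M_H2  = 2.016   # molar mass of H2, g/mol

co2_per_h2 = M_CO2 / (4 * M_H2)   # tonnes of CO2 per tonne of H2
print(f"Stoichiometric minimum: {co2_per_h2:.1f} t CO2 per t H2")
# -> about 5.5 t CO2 per t H2, below the 6.6-9.3 t quoted in the text because
#    real plants also burn fuel for process heat and lose some methane upstream.
```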
For light duty vehicles including passenger cars, hydrogen is far behind other alternative fuel vehicles, especially compared with the rate of adoption of battery electric vehicles, and may not play a significant role in future. Disadvantages of hydrogen as an energy carrier include high costs of storage and distribution due to hydrogen's explosivity, its large volume compared to other fuels, and its tendency to make pipes brittle. Energy usage technologies Transport Transport accounts for 14% of global greenhouse gas emissions, but there are multiple ways to make transport more sustainable. Public transport typically emits fewer greenhouse gases per passenger than personal vehicles, since trains and buses can carry many more passengers at once. Short-distance flights can be replaced by high-speed rail, which is more efficient, especially when electrified. Promoting non-motorised transport such as walking and cycling, particularly in cities, can make transport cleaner and healthier. The energy efficiency of cars has increased over time, but shifting to electric vehicles is an important further step towards decarbonising transport and reducing air pollution. A large proportion of traffic-related air pollution consists of particulate matter from road dust and the wearing-down of tyres and brake pads. Substantially reducing pollution from these non-tailpipe sources cannot be achieved by electrification; it requires measures such as making vehicles lighter and driving them less. Light-duty cars in particular are a prime candidate for decarbonization using battery technology. 25% of the world's emissions still originate from the transportation sector. Long-distance freight transport and aviation are difficult sectors to electrify with current technologies, mostly because of the weight of batteries needed for long-distance travel, battery recharging times, and limited battery lifespans. Where available, freight transport by ship and rail is generally more sustainable than by air and by road. Hydrogen vehicles may be an option for larger vehicles such as lorries. Many of the techniques needed to lower emissions from shipping and aviation are still early in their development, with ammonia (produced from hydrogen) a promising candidate for shipping fuel. Aviation biofuel may be one of the better uses of bioenergy if emissions are captured and stored during manufacture of the fuel. Buildings Over one-third of energy use is in buildings and their construction. To heat buildings, alternatives to burning fossil fuels and biomass include electrification through heat pumps or electric heaters, geothermal energy, central solar heating, reuse of waste heat, and seasonal thermal energy storage. Heat pumps provide both heat and air conditioning through a single appliance. The IEA estimates heat pumps could provide over 90% of space and water heating requirements globally. A highly efficient way to heat buildings is through district heating, in which heat is generated in a centralised location and then distributed to multiple buildings through insulated pipes. Traditionally, most district heating systems have used fossil fuels, but modern and cold district heating systems are designed to use high shares of renewable energy.Cooling of buildings can be made more efficient through passive building design, planning that minimises the urban heat island effect, and district cooling systems that cool multiple buildings with piped cold water. 
Air conditioning requires large amounts of electricity and is not always affordable for poorer households. Some air conditioning units still use refrigerants that are greenhouse gases, as some countries have not ratified the Kigali Amendment to only use climate-friendly refrigerants. Cooking In developing countries where populations suffer from energy poverty, polluting fuels such as wood or animal dung are often used for cooking. Cooking with these fuels is generally unsustainable, because they release harmful smoke and because harvesting wood can lead to forest degradation. The universal adoption of clean cooking facilities, which are already ubiquitous in rich countries, would dramatically improve health and have minimal negative effects on climate. Clean cooking facilities, e.g. cooking facilities that produce less indoor soot, typically use natural gas, liquefied petroleum gas (both of which consume oxygen and produce carbon-dioxide) or electricity as the energy source; biogas systems are a promising alternative in some contexts. Improved cookstoves that burn biomass more efficiently than traditional stoves are an interim solution where transitioning to clean cooking systems is difficult. Industry Over one-third of energy use is by industry. Most of that energy is deployed in thermal processes: generating heat, drying, and refrigeration. The share of renewable energy in industry was 14.5% in 2017—mostly low-temperature heat supplied by bioenergy and electricity. The most energy-intensive activities in industry have the lowest shares of renewable energy, as they face limitations in generating heat at temperatures over . For some industrial processes, commercialisation of technologies that have not yet been built or operated at full scale will be needed to eliminate greenhouse gas emissions. Steelmaking, for instance, is difficult to electrify because it traditionally uses coke, which is derived from coal, both to create very high-temperature heat and as an ingredient in the steel itself. The production of plastic, cement, and fertilisers also requires significant amounts of energy, with limited possibilities available to decarbonise. A switch to a circular economy would make industry more sustainable as it involves recycling more and thereby using less energy compared to investing energy to mine and refine new raw materials. Government policies Well-designed government policies that promote energy system transformation can lower greenhouse gas emissions and improve air quality simultaneously, and in many cases can also increase energy security and lessen the financial burden of using energy. Environmental regulations have been used since the 1970s to promote more sustainable use of energy. Some governments have committed to dates for phasing out coal-fired power plants and ending new fossil fuel exploration. Governments can require that new cars produce zero emissions, or new buildings are heated by electricity instead of gas. Renewable portfolio standards in several countries require utilities to increase the percentage of electricity they generate from renewable sources. Governments can accelerate energy system transformation by leading the development of infrastructure such as long-distance electrical transmission lines, smart grids, and hydrogen pipelines. In transport, appropriate infrastructure and incentives can make travel more efficient and less car-dependent. 
Urban planning that discourages sprawl can reduce energy use in local transport and buildings while enhancing quality of life. Government-funded research, procurement, and incentive policies have historically been critical to the development and maturation of clean energy technologies, such as solar and lithium batteries. In the IEA's scenario for a net zero-emission energy system by 2050, public funding is rapidly mobilised to bring a range of newer technologies to the demonstration phase and to encourage deployment. Carbon pricing (such as a tax on emissions) gives industries and consumers an incentive to reduce emissions while letting them choose how to do so. For example, they can shift to low-emission energy sources, improve energy efficiency, or reduce their use of energy-intensive products and services. Carbon pricing has encountered strong political pushback in some jurisdictions, whereas energy-specific policies tend to be politically safer. Most studies indicate that to limit global warming to 1.5°C, carbon pricing would need to be complemented by stringent energy-specific policies. As of 2019, the price of carbon in most regions is too low to achieve the goals of the Paris Agreement. Carbon taxes provide a source of revenue that can be used to lower other taxes or help lower-income households afford higher energy costs. Some governments, such as the EU and the UK, are exploring the use of carbon border adjustments. These place tariffs on imports from countries with less stringent climate policies, to ensure that industries subject to internal carbon prices remain competitive. The scale and pace of policy reforms that have been initiated as of 2020 are far less than needed to fulfil the climate goals of the Paris Agreement. In addition to domestic policies, greater international cooperation is required to accelerate innovation and to assist poorer countries in establishing a sustainable path to full energy access. Countries may support renewables to create jobs. The International Labour Organization estimates that efforts to limit global warming to 2 °C would result in net job creation in most sectors of the economy. It predicts that 24 million new jobs would be created by 2030 in areas such as renewable electricity generation, improving energy-efficiency in buildings, and the transition to electric vehicles. Six million jobs would be lost, in sectors such as mining and fossil fuels. Governments can make the transition to sustainable energy more politically and socially feasible by ensuring a just transition for workers and regions that depend on the fossil fuel industry, to ensure they have alternative economic opportunities. Finance Raising enough money for innovation and investment is a prerequisite for the energy transition. The IPCC estimates that to limit global warming to 1.5 °C, US$2.4 trillion would need to be invested in the energy system each year between 2016 and 2035. Most studies project that these costs, equivalent to 2.5% of world GDP, would be small compared to the economic and health benefits. Average annual investment in low-carbon energy technologies and energy efficiency would need to be six times more by 2050 compared to 2015. Underfunding is particularly acute in the least developed countries, which are not attractive to the private sector. The United Nations Framework Convention on Climate Change estimates that climate financing totalled $681 billion in 2016. 
Most of this is private-sector investment in renewable energy deployment, public-sector investment in sustainable transport, and private-sector investment in energy efficiency. The Paris Agreement includes a pledge of an extra $100 billion per year from developed countries to poor countries, for climate change mitigation and adaptation. This goal has not been met, and measurement of progress has been hampered by unclear accounting rules. If energy-intensive businesses like chemicals, fertilizers, ceramics, steel, and non-ferrous metals invest significantly in R&D, hydrogen's usage in industry might amount to between 5% and 20% of all energy used. Fossil fuel funding and subsidies are a significant barrier to the energy transition. Direct global fossil fuel subsidies were $319 billion in 2017. This rises to $5.2 trillion when indirect costs, such as the effects of air pollution, are priced in. Ending these subsidies could lead to a 28% reduction in global carbon emissions and a 46% reduction in deaths from air pollution. Funding for clean energy has been largely unaffected by the COVID-19 pandemic, and pandemic-related economic stimulus packages offer possibilities for a green recovery.
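As a rough illustration of the carbon border adjustments mentioned in the policy discussion above, the sketch below charges an importer the domestic carbon price on the emissions embodied in a shipment, crediting any carbon price already paid in the country of origin. The product, its emission intensity and both prices are hypothetical, and real schemes such as the EU proposal involve many additional rules on scope, benchmarks and verification; this is a simplified sketch of the principle only.

# Simplified carbon border adjustment; all inputs are hypothetical.

def border_adjustment(tonnes_product, t_co2_per_tonne, domestic_price, origin_price):
    """Levy the domestic carbon price on embodied emissions,
    minus any carbon price already paid at origin (floored at zero)."""
    embodied_t_co2 = tonnes_product * t_co2_per_tonne
    price_gap = max(domestic_price - origin_price, 0.0)   # $ per t CO2
    return embodied_t_co2 * price_gap

# Example: 1,000 t of steel at an assumed 1.9 t CO2 per tonne of steel,
# an $80/t CO2 domestic price and a $10/t CO2 price already paid at origin.
print(border_adjustment(1000, 1.9, 80.0, 10.0))   # -> 133000.0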
Technology
Energy: General
null
1056044
https://en.wikipedia.org/wiki/Nimravidae
Nimravidae
Nimravidae is an extinct family of carnivorans, sometimes known as false saber-toothed cats, whose fossils are found in North America and Eurasia. Not considered to belong to the true cats (family Felidae), the nimravids are generally considered closely related and classified as a distinct family in the suborder Feliformia. Fossils have been dated from the Middle Eocene through the Late Miocene epochs (Bartonian through Tortonian stages, 40.4–7.2 million years ago), spanning about . The barbourofelids, which were formerly classified as a subfamily of the Nimravidae, were reassigned to their own distinct family Barbourofelidae in 2004. However, some recent (2020) studies suggest the barbourofelids are a branch of the nimravids, suggesting that this debate might not be settled yet. Morphology and evolution Most nimravids had muscular, low-slung, cat-like bodies, with shorter legs and tails than are typical of cats. Unlike extant Feliformia, the nimravids had a different bone structure in the small bones of the ear. The middle ear of true cats is housed in an external structure called an auditory bulla, which is separated by a septum into two chambers. Nimravid remains show ossified bullae with no septum, or no trace at all of the entire bulla. They are assumed to have had a cartilaginous housing of the ear mechanism. Nimravid feet were short, indicating they walked in a plantigrade or semiplantigrade posture, i.e., on the flat of the feet rather than the toes, like modern cats. Although some nimravids physically resembled the saber-toothed cats, such as Smilodon, they were not closely related, but evolved a similar form through parallel evolution. They possessed synapomorphies with the barbourofelids in the cranium, mandible, dentition, and postcranium. They also had a downward-projecting flange on the front of the mandible as long as the canine teeth, a feature which also convergently evolved in the saber-toothed sparassodont Thylacosmilus. The ancestors of nimravids and cats diverged from a common ancestor soon after the Caniformia–Feliformia split, in the middle Eocene about 50 million years ago (Mya), with a minimum constraint of 43 Mya. Recognizable nimravid fossils date from the late Eocene (37 Mya), from the Chadronian White River Formation at Flagstaff Rim, Wyoming, to the late Miocene (5 Mya). Nimravid diversity appears to have peaked about 28 Mya. A 2021 study has shown that a sizeable number of species developed feline-like morphologies in addition to saber-toothed taxa. Taxonomy The family Nimravidae was named by American paleontologist Edward Drinker Cope in 1880, with the type genus as Nimravus. The family was assigned to Fissipedia by Cope (1889); to Caniformia by Flynn and Galiano (1982); to Aeluroidea by Carroll (1988); to Feliformia by Bryant (1991); and to Carnivoramorpha, by Wesley-Hunt and Werdelin (2005). Nimravids are placed in tribes by some authors to reflect closer relationships between genera within the family. Some nimravids evolved into large, toothed, cat-like forms with massive flattened upper canines and accompanying mandibular flanges. Some had dentition similar to felids, or modern cats, with smaller canines. Others had moderately increased canines in a more intermediate relationship between the saber-toothed cats and felids. The upper canines were not only shorter, but also more conical, than those of the true saber-toothed cats (Machairodontinae). These nimravids are referred to as "false saber-tooths". 
Not only did nimravids exhibit diverse dentition, but they also showed the same diversity in size and morphology as cats. Some were leopard-sized, others the size of today's lions and tigers, one had the short face, rounded skull, and smaller canines of the modern cheetah, and one, Nanosmilus, was only the size of a small bobcat. The Barbourofelids were for a while no longer included in Nimravidae, following elevation to family as sister clade to the true cats (family Felidae). However, several recent studies have returned them to Nimravidae, including as part of Nimravinae. Phylogeny The phylogenetic relationships of Nimravidae are shown in the following cladogram: A 2021 study divides Nimravidae into Hoplophoninae and Nimravinae, the latter including the bulk of species in addition to barbourofelids. Phylogeny of Nimravidae from the 2022 description of Pangurban: Natural history Nimravids appeared in the middle of the Eocene epoch, about 40 Mya, in North America and Asia. The global climate at this time was warm and wet, but was trending cooler and drier toward the late Eocene. The lush forests of the Eocene were transforming to scrub and open woodland. This climatic trend continued in the Oligocene, and nimravids evidently flourished in this environment. North America and Asia were connected and shared much related fauna. Europe in the Oligocene was more of an archipelago than a continent, though some land bridges must have existed, for nimravids also spread there. In the Miocene, the fossil record suggests that many animals suited for living in forest or woodland were replaced by grazers suited for grassland. This suggests that much of North America and Asia became dominated by savanna. Nimravids disappeared along with the woodlands, but survived in relictual humid forests in Europe to the late Miocene. When conditions ultimately changed there in the late Miocene, the last nimravids disappeared about 9 Mya.
Biology and health sciences
Other carnivora
Animals
178649
https://en.wikipedia.org/wiki/General%20topology
General topology
In mathematics, general topology (or point set topology) is the branch of topology that deals with the basic set-theoretic definitions and constructions used in topology. It is the foundation of most other branches of topology, including differential topology, geometric topology, and algebraic topology. The fundamental concepts in point-set topology are continuity, compactness, and connectedness: Continuous functions, intuitively, take nearby points to nearby points. Compact sets are those that can be covered by finitely many sets of arbitrarily small size. Connected sets are sets that cannot be divided into two pieces that are far apart. The terms 'nearby', 'arbitrarily small', and 'far apart' can all be made precise by using the concept of open sets. If we change the definition of 'open set', we change what continuous functions, compact sets, and connected sets are. Each choice of definition for 'open set' is called a topology. A set with a topology is called a topological space. Metric spaces are an important class of topological spaces where a real, non-negative distance, also called a metric, can be defined on pairs of points in the set. Having a metric simplifies many proofs, and many of the most common topological spaces are metric spaces. History General topology grew out of a number of areas, most importantly the following: the detailed study of subsets of the real line (once known as the topology of point sets; this usage is now obsolete) the introduction of the manifold concept the study of metric spaces, especially normed linear spaces, in the early days of functional analysis. General topology assumed its present form around 1940. It captures, one might say, almost everything in the intuition of continuity, in a technically adequate form that can be applied in any area of mathematics. A topology on a set Let X be a set and let τ be a family of subsets of X. Then τ is called a topology on X if: Both the empty set and X are elements of τ Any union of elements of τ is an element of τ Any intersection of finitely many elements of τ is an element of τ If τ is a topology on X, then the pair (X, τ) is called a topological space. The notation Xτ may be used to denote a set X endowed with the particular topology τ. The members of τ are called open sets in X. A subset of X is said to be closed if its complement is in τ (i.e., its complement is open). A subset of X may be open, closed, both (clopen set), or neither. The empty set and X itself are always both closed and open. Basis for a topology A base (or basis) B for a topological space X with topology T is a collection of open sets in T such that every open set in T can be written as a union of elements of B. We say that the base generates the topology T. Bases are useful because many properties of topologies can be reduced to statements about a base that generates that topology—and because many topologies are most easily defined in terms of a base that generates them. Subspace and quotient Every subset of a topological space can be given the subspace topology in which the open sets are the intersections of the open sets of the larger space with the subset. For any indexed family of topological spaces, the product can be given the product topology, which is generated by the inverse images of open sets of the factors under the projection mappings. For example, in finite products, a basis for the product topology consists of all products of open sets. 
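As a concrete instance of the finite-product case just mentioned, consider the plane. With each factor carrying the standard topology, the products U × V of open subsets of R form a basis for the product topology on

\[
\mathbb{R}^2 = \mathbb{R} \times \mathbb{R},
\]

and in fact the open rectangles (a, b) × (c, d) alone already suffice: every open subset of the plane, an open disc for example, is a union of such rectangles. This product topology coincides with the usual Euclidean topology on R2.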
For infinite products, there is the additional requirement that in a basic open set, all but finitely many of its projections are the entire space. A quotient space is defined as follows: if X is a topological space and Y is a set, and if f : X→ Y is a surjective function, then the quotient topology on Y is the collection of subsets of Y that have open inverse images under f. In other words, the quotient topology is the finest topology on Y for which f is continuous. A common example of a quotient topology is when an equivalence relation is defined on the topological space X. The map f is then the natural projection onto the set of equivalence classes. Examples of topological spaces A given set may have many different topologies. If a set is given a different topology, it is viewed as a different topological space. Discrete and trivial topologies Any set can be given the discrete topology, in which every subset is open. The only convergent sequences or nets in this topology are those that are eventually constant. Also, any set can be given the trivial topology (also called the indiscrete topology), in which only the empty set and the whole space are open. Every sequence and net in this topology converges to every point of the space. This example shows that in general topological spaces, limits of sequences need not be unique. However, often topological spaces must be Hausdorff spaces where limit points are unique. Cofinite and cocountable topologies Any set can be given the cofinite topology in which the open sets are the empty set and the sets whose complement is finite. This is the smallest T1 topology on any infinite set. Any set can be given the cocountable topology, in which a set is defined as open if it is either empty or its complement is countable. When the set is uncountable, this topology serves as a counterexample in many situations. Topologies on the real and complex numbers There are many ways to define a topology on R, the set of real numbers. The standard topology on R is generated by the open intervals. The set of all open intervals forms a base or basis for the topology, meaning that every open set is a union of some collection of sets from the base. In particular, this means that a set is open if there exists an open interval of non zero radius about every point in the set. More generally, the Euclidean spaces Rn can be given a topology. In the usual topology on Rn the basic open sets are the open balls. Similarly, C, the set of complex numbers, and Cn have a standard topology in which the basic open sets are open balls. The real line can also be given the lower limit topology. Here, the basic open sets are the half open intervals [a, b). This topology on R is strictly finer than the Euclidean topology defined above; a sequence converges to a point in this topology if and only if it converges from above in the Euclidean topology. This example shows that a set may have many distinct topologies defined on it. The metric topology Every metric space can be given a metric topology, in which the basic open sets are open balls defined by the metric. This is the standard topology on any normed vector space. On a finite-dimensional vector space this topology is the same for all norms. Further examples There exist numerous topologies on any given finite set. Such spaces are called finite topological spaces. Finite spaces are sometimes used to provide examples or counterexamples to conjectures about topological spaces in general. 
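A worked check of the axioms on a three-point set, added to illustrate how such finite examples are used:

\[
X = \{1,2,3\}, \qquad \tau = \{\varnothing,\ \{1\},\ \{1,2\},\ X\}.
\]

Both the empty set and X are members of τ; every union of members of τ is again a member (for instance {1} ∪ {1,2} = {1,2}); and every intersection of finitely many members is a member (for instance {1} ∩ {1,2} = {1}). Hence τ is a topology on X. In this space {1,2} is open, its complement {3} is closed, and {2} is neither open nor closed, since neither {2} nor its complement {1,3} belongs to τ.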
Every manifold has a natural topology, since it is locally Euclidean. Similarly, every simplex and every simplicial complex inherits a natural topology from Rn. The Zariski topology is defined algebraically on the spectrum of a ring or an algebraic variety. On Rn or Cn, the closed sets of the Zariski topology are the solution sets of systems of polynomial equations. A linear graph has a natural topology that generalises many of the geometric aspects of graphs with vertices and edges. Many sets of linear operators in functional analysis are endowed with topologies that are defined by specifying when a particular sequence of functions converges to the zero function. Any local field has a topology native to it, and this can be extended to vector spaces over that field. The Sierpiński space is the simplest non-discrete topological space. It has important relations to the theory of computation and semantics. If Γ is an ordinal number, then the set Γ = [0, Γ) may be endowed with the order topology generated by the intervals (a, b), [0, b) and (a, Γ) where a and b are elements of Γ. Continuous functions Continuity is expressed in terms of neighborhoods: is continuous at some point if and only if for any neighborhood of , there is a neighborhood of such that . Intuitively, continuity means no matter how "small" becomes, there is always a containing that maps inside and whose image under contains . This is equivalent to the condition that the preimages of the open (closed) sets in are open (closed) in . In metric spaces, this definition is equivalent to the ε–δ-definition that is often used in analysis. An extreme example: if a set is given the discrete topology, all functions to any topological space are continuous. On the other hand, if is equipped with the indiscrete topology and the space set is at least T0, then the only continuous functions are the constant functions. Conversely, any function whose range is indiscrete is continuous. Alternative definitions Several equivalent definitions for a topological structure exist and thus there are several equivalent ways to define a continuous function. Neighborhood definition Definitions based on preimages are often difficult to use directly. The following criterion expresses continuity in terms of neighborhoods: f is continuous at some point x ∈ X if and only if for any neighborhood V of f(x), there is a neighborhood U of x such that f(U) ⊆ V. Intuitively, continuity means no matter how "small" V becomes, there is always a U containing x that maps inside V. If X and Y are metric spaces, it is equivalent to consider the neighborhood system of open balls centered at x and f(x) instead of all neighborhoods. This gives back the above δ-ε definition of continuity in the context of metric spaces. However, in general topological spaces, there is no notion of nearness or distance. Note, however, that if the target space is Hausdorff, it is still true that f is continuous at a if and only if the limit of f as x approaches a is f(a). At an isolated point, every function is continuous. Sequences and nets In several contexts, the topology of a space is conveniently specified in terms of limit points. In many instances, this is accomplished by specifying when a point is the limit of a sequence, but for some spaces that are too large in some sense, one specifies also when a point is the limit of more general sets of points indexed by a directed set, known as nets. A function is continuous only if it takes limits of sequences to limits of sequences. 
In the former case, preservation of limits is also sufficient; in the latter, a function may preserve all limits of sequences yet still fail to be continuous, and preservation of nets is a necessary and sufficient condition. In detail, a function f: X → Y is sequentially continuous if whenever a sequence (xn) in X converges to a limit x, the sequence (f(xn)) converges to f(x). Thus sequentially continuous functions "preserve sequential limits". Every continuous function is sequentially continuous. If X is a first-countable space and countable choice holds, then the converse also holds: any function preserving sequential limits is continuous. In particular, if X is a metric space, sequential continuity and continuity are equivalent. For non first-countable spaces, sequential continuity might be strictly weaker than continuity. (The spaces for which the two properties are equivalent are called sequential spaces.) This motivates the consideration of nets instead of sequences in general topological spaces. Continuous functions preserve limits of nets, and in fact this property characterizes continuous functions. Closure operator definition Instead of specifying the open subsets of a topological space, the topology can also be determined by a closure operator (denoted cl), which assigns to any subset A ⊆ X its closure, or an interior operator (denoted int), which assigns to any subset A of X its interior. In these terms, a function between topological spaces is continuous in the sense above if and only if for all subsets A of X That is to say, given any element x of X that is in the closure of any subset A, f(x) belongs to the closure of f(A). This is equivalent to the requirement that for all subsets A' of X' Moreover, is continuous if and only if for any subset A of X. Properties If f: X → Y and g: Y → Z are continuous, then so is the composition g ∘ f: X → Z. If f: X → Y is continuous and X is compact, then f(X) is compact. X is connected, then f(X) is connected. X is path-connected, then f(X) is path-connected. X is Lindelöf, then f(X) is Lindelöf. X is separable, then f(X) is separable. The possible topologies on a fixed set X are partially ordered: a topology τ1 is said to be coarser than another topology τ2 (notation: τ1 ⊆ τ2) if every open subset with respect to τ1 is also open with respect to τ2. Then, the identity map idX: (X, τ2) → (X, τ1) is continuous if and only if τ1 ⊆ τ2 (see also comparison of topologies). More generally, a continuous function stays continuous if the topology τY is replaced by a coarser topology and/or τX is replaced by a finer topology. Homeomorphisms Symmetric to the concept of a continuous map is an open map, for which images of open sets are open. In fact, if an open map f has an inverse function, that inverse is continuous, and if a continuous map g has an inverse, that inverse is open. Given a bijective function f between two topological spaces, the inverse function f−1 need not be continuous. A bijective continuous function with continuous inverse function is called a homeomorphism. If a continuous bijection has as its domain a compact space and its codomain is Hausdorff, then it is a homeomorphism. Defining topologies via continuous functions Given a function where X is a topological space and S is a set (without a specified topology), the final topology on S is defined by letting the open sets of S be those subsets A of S for which f−1(A) is open in X. 
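A small added example of the final topology, writing the given map as f : X → S (the convention used implicitly above): take X = R with the standard topology, S = {−1, 0, 1}, and let f be the sign function. A subset A of S is open in the final topology exactly when f^{-1}(A) is open in R, so

\[
\tau_S = \bigl\{\varnothing,\ \{-1\},\ \{1\},\ \{-1,1\},\ S\bigr\},
\]

because the corresponding preimages ∅, (−∞, 0), (0, ∞), R ∖ {0} and R are open, while for example the preimage of {0} is {0}, which is not. Since f is surjective, this is also the quotient topology obtained by collapsing the negative reals, zero, and the positive reals to three points.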
If S has an existing topology, f is continuous with respect to this topology if and only if the existing topology is coarser than the final topology on S. Thus the final topology can be characterized as the finest topology on S that makes f continuous. If f is surjective, this topology is canonically identified with the quotient topology under the equivalence relation defined by f. Dually, for a function f from a set S to a topological space X, the initial topology on S has a basis of open sets given by those sets of the form f^(-1)(U) where U is open in X . If S has an existing topology, f is continuous with respect to this topology if and only if the existing topology is finer than the initial topology on S. Thus the initial topology can be characterized as the coarsest topology on S that makes f continuous. If f is injective, this topology is canonically identified with the subspace topology of S, viewed as a subset of X. A topology on a set S is uniquely determined by the class of all continuous functions into all topological spaces X. Dually, a similar idea can be applied to maps Compact sets Formally, a topological space X is called compact if each of its open covers has a finite subcover. Otherwise it is called non-compact. Explicitly, this means that for every arbitrary collection of open subsets of such that there is a finite subset of such that Some branches of mathematics such as algebraic geometry, typically influenced by the French school of Bourbaki, use the term quasi-compact for the general notion, and reserve the term compact for topological spaces that are both Hausdorff and quasi-compact. A compact set is sometimes referred to as a compactum, plural compacta. Every closed interval in R of finite length is compact. More is true: In Rn, a set is compact if and only if it is closed and bounded. (See Heine–Borel theorem). Every continuous image of a compact space is compact. A compact subset of a Hausdorff space is closed. Every continuous bijection from a compact space to a Hausdorff space is necessarily a homeomorphism. Every sequence of points in a compact metric space has a convergent subsequence. Every compact finite-dimensional manifold can be embedded in some Euclidean space Rn. Connected sets A topological space X is said to be disconnected if it is the union of two disjoint nonempty open sets. Otherwise, X is said to be connected. A subset of a topological space is said to be connected if it is connected under its subspace topology. Some authors exclude the empty set (with its unique topology) as a connected space, but this article does not follow that practice. For a topological space X the following conditions are equivalent: X is connected. X cannot be divided into two disjoint nonempty closed sets. The only subsets of X that are both open and closed (clopen sets) are X and the empty set. The only subsets of X with empty boundary are X and the empty set. X cannot be written as the union of two nonempty separated sets. The only continuous functions from X to {0,1}, the two-point space endowed with the discrete topology, are constant. Every interval in R is connected. The continuous image of a connected space is connected. Connected components The maximal connected subsets (ordered by inclusion) of a nonempty topological space are called the connected components of the space. The components of any topological space X form a partition of X: they are disjoint, nonempty, and their union is the whole space. Every component is a closed subset of the original space. 
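A simple added example of the component decomposition: the subspace [0,1] ∪ [2,3] of the real line has exactly two connected components, the intervals [0,1] and [2,3]. Each is connected, being an interval; neither can be enlarged within the subspace without becoming disconnected; and together they partition the space. Each component is also closed in the subspace, in line with the general statement above.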
It follows that, in the case where their number is finite, each component is also an open subset. However, if their number is infinite, this might not be the case; for instance, the connected components of the set of the rational numbers are the one-point sets, which are not open. Let be the connected component of x in a topological space X, and be the intersection of all open-closed sets containing x (called quasi-component of x.) Then where the equality holds if X is compact Hausdorff or locally connected. Disconnected spaces A space in which all components are one-point sets is called totally disconnected. Related to this property, a space X is called totally separated if, for any two distinct elements x and y of X, there exist disjoint open neighborhoods U of x and V of y such that X is the union of U and V. Clearly any totally separated space is totally disconnected, but the converse does not hold. For example, take two copies of the rational numbers Q, and identify them at every point except zero. The resulting space, with the quotient topology, is totally disconnected. However, by considering the two copies of zero, one sees that the space is not totally separated. In fact, it is not even Hausdorff, and the condition of being totally separated is strictly stronger than the condition of being Hausdorff. Path-connected sets A path from a point x to a point y in a topological space X is a continuous function f from the unit interval [0,1] to X with f(0) = x and f(1) = y. A path-component of X is an equivalence class of X under the equivalence relation, which makes x equivalent to y if there is a path from x to y. The space X is said to be path-connected (or pathwise connected or 0-connected) if there is at most one path-component; that is, if there is a path joining any two points in X. Again, many authors exclude the empty space. Every path-connected space is connected. The converse is not always true: examples of connected spaces that are not path-connected include the extended long line L* and the topologist's sine curve. However, subsets of the real line R are connected if and only if they are path-connected; these subsets are the intervals of R. Also, open subsets of Rn or Cn are connected if and only if they are path-connected. Additionally, connectedness and path-connectedness are the same for finite topological spaces. Products of spaces Given X such that is the Cartesian product of the topological spaces Xi, indexed by , and the canonical projections pi : X → Xi, the product topology on X is defined as the coarsest topology (i.e. the topology with the fewest open sets) for which all the projections pi are continuous. The product topology is sometimes called the Tychonoff topology. The open sets in the product topology are unions (finite or infinite) of sets of the form , where each Ui is open in Xi and Ui ≠ Xi only finitely many times. In particular, for a finite product (in particular, for the product of two topological spaces), the products of base elements of the Xi gives a basis for the product . The product topology on X is the topology generated by sets of the form pi−1(U), where i is in I and U is an open subset of Xi. In other words, the sets {pi−1(U)} form a subbase for the topology on X. A subset of X is open if and only if it is a (possibly infinite) union of intersections of finitely many sets of the form pi−1(U). The pi−1(U) are sometimes called open cylinders, and their intersections are cylinder sets. 
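In the plane these cylinders are easy to picture; the following is added for illustration. For the product R2 = R × R, the open cylinder determined by the first projection is the vertical strip

\[
p_1^{-1}\bigl((a,b)\bigr) = (a,b) \times \mathbb{R},
\]

and intersecting it with the horizontal strip p2^{-1}((c, d)) = R × (c, d) gives the open rectangle (a, b) × (c, d). Finite intersections of cylinders therefore reproduce exactly the basic open sets described earlier for finite products.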
In general, the product of the topologies of each Xi forms a basis for what is called the box topology on X. In general, the box topology is finer than the product topology, but for finite products they coincide. Related to compactness is Tychonoff's theorem: the (arbitrary) product of compact spaces is compact. Separation axioms Many of these names have alternative meanings in some of mathematical literature, as explained on History of the separation axioms; for example, the meanings of "normal" and "T4" are sometimes interchanged, similarly "regular" and "T3", etc. Many of the concepts also have several names; however, the one listed first is always least likely to be ambiguous. Most of these axioms have alternative definitions with the same meaning; the definitions given here fall into a consistent pattern that relates the various notions of separation defined in the previous section. Other possible definitions can be found in the individual articles. In all of the following definitions, X is again a topological space. X is T0, or Kolmogorov, if any two distinct points in X are topologically distinguishable. (It is a common theme among the separation axioms to have one version of an axiom that requires T0 and one version that doesn't.) X is T1, or accessible or Fréchet, if any two distinct points in X are separated. Thus, X is T1 if and only if it is both T0 and R0. (Though you may say such things as T1 space, Fréchet topology, and Suppose that the topological space X is Fréchet, avoid saying Fréchet space in this context, since there is another entirely different notion of Fréchet space in functional analysis.) X is Hausdorff, or T2 or separated, if any two distinct points in X are separated by neighbourhoods. Thus, X is Hausdorff if and only if it is both T0 and R1. A Hausdorff space must also be T1. X is T2½, or Urysohn, if any two distinct points in X are separated by closed neighbourhoods. A T2½ space must also be Hausdorff. X is regular, or T3, if it is T0 and if given any point x and closed set F in X such that x does not belong to F, they are separated by neighbourhoods. (In fact, in a regular space, any such x and F is also separated by closed neighbourhoods.) X is Tychonoff, or T3½, completely T3, or completely regular, if it is T0 and if f, given any point x and closed set F in X such that x does not belong to F, they are separated by a continuous function. X is normal, or T4, if it is Hausdorff and if any two disjoint closed subsets of X are separated by neighbourhoods. (In fact, a space is normal if and only if any two disjoint closed sets can be separated by a continuous function; this is Urysohn's lemma.) X is completely normal, or T5 or completely T4, if it is T1 and if any two separated sets are separated by neighbourhoods. A completely normal space must also be normal. X is perfectly normal, or T6 or perfectly T4, if it is T1 and if any two disjoint closed sets are precisely separated by a continuous function. A perfectly normal Hausdorff space must also be completely normal Hausdorff. The Tietze extension theorem: In a normal space, every continuous real-valued function defined on a closed subspace can be extended to a continuous map defined on the whole space. Countability axioms An axiom of countability is a property of certain mathematical objects (usually in a category) that requires the existence of a countable set with certain properties, while without it such sets might not exist. 
Important countability axioms for topological spaces: sequential space: a set is open if every sequence convergent to a point in the set is eventually in the set first-countable space: every point has a countable neighbourhood basis (local base) second-countable space: the topology has a countable base separable space: there exists a countable dense subspace Lindelöf space: every open cover has a countable subcover σ-compact space: there exists a countable cover by compact spaces Relations: Every first countable space is sequential. Every second-countable space is first-countable, separable, and Lindelöf. Every σ-compact space is Lindelöf. A metric space is first-countable. For metric spaces second-countability, separability, and the Lindelöf property are all equivalent. Metric spaces A metric space is an ordered pair where is a set and is a metric on , i.e., a function such that for any , the following holds:     (non-negative), iff     (identity of indiscernibles),     (symmetry) and     (triangle inequality) . The function is also called distance function or simply distance. Often, is omitted and one just writes for a metric space if it is clear from the context what metric is used. Every metric space is paracompact and Hausdorff, and thus normal. The metrization theorems provide necessary and sufficient conditions for a topology to come from a metric. Baire category theorem The Baire category theorem says: If X is a complete metric space or a locally compact Hausdorff space, then the interior of every union of countably many nowhere dense sets is empty. Any open subspace of a Baire space is itself a Baire space. Main areas of research Continuum theory A continuum (pl continua) is a nonempty compact connected metric space, or less frequently, a compact connected Hausdorff space. Continuum theory is the branch of topology devoted to the study of continua. These objects arise frequently in nearly all areas of topology and analysis, and their properties are strong enough to yield many 'geometric' features. Dynamical systems Topological dynamics concerns the behavior of a space and its subspaces over time when subjected to continuous change. Many examples with applications to physics and other areas of math include fluid dynamics, billiards and flows on manifolds. The topological characteristics of fractals in fractal geometry, of Julia sets and the Mandelbrot set arising in complex dynamics, and of attractors in differential equations are often critical to understanding these systems. Pointless topology Pointless topology (also called point-free or pointfree topology) is an approach to topology that avoids mentioning points. The name 'pointless topology' is due to John von Neumann. The ideas of pointless topology are closely related to mereotopologies, in which regions (sets) are treated as foundational without explicit reference to underlying point sets. Dimension theory Dimension theory is a branch of general topology dealing with dimensional invariants of topological spaces. Topological algebras A topological algebra A over a topological field K is a topological vector space together with a continuous multiplication that makes it an algebra over K. A unital associative topological algebra is a topological ring. The term was coined by David van Dantzig; it appears in the title of his doctoral dissertation (1931). Metrizability theory In topology and related areas of mathematics, a metrizable space is a topological space that is homeomorphic to a metric space. 
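For concreteness, the metric axioms listed above can be written out together with the canonical example; this restates standard definitions rather than adding anything new. A metric on a set M is a function d : M × M → [0, ∞) such that, for all x, y, z in M,

\[
d(x,y) \ge 0, \qquad
d(x,y) = 0 \iff x = y, \qquad
d(x,y) = d(y,x), \qquad
d(x,z) \le d(x,y) + d(y,z).
\]

On the real line, d(x, y) = |x − y| satisfies all four conditions, and the open balls B(x, ε) = (x − ε, x + ε) are exactly the open intervals, so the topology this metric induces is the standard topology on R described earlier.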
A topological space (X, τ) is said to be metrizable if there is a metric d on X such that the topology induced by d is τ. Metrization theorems are theorems that give sufficient conditions for a topological space to be metrizable. Set-theoretic topology Set-theoretic topology is a subject that combines set theory and general topology. It focuses on topological questions that are independent of Zermelo–Fraenkel set theory (ZFC). A famous problem is the normal Moore space question, a question in general topology that was the subject of intense research. The answer to the normal Moore space question was eventually proved to be independent of ZFC.
Mathematics
Geometry
null
178690
https://en.wikipedia.org/wiki/Steamboat
Steamboat
A steamboat is a boat that is propelled primarily by steam power, typically driving propellers or paddlewheels. The term steamboat is used to refer to small steam-powered vessels working on lakes, rivers, and in short-sea shipping. The development of the steamboat led to the larger steamship, which is a seaworthy and often ocean-going ship. Steamboats sometimes use the prefix designation SS, S.S. or S/S (for 'Screw Steamer') or PS (for 'Paddle Steamer'); however, these designations are most often used for steamships. Background Limitations of the Newcomen steam engine The first steamboat designs used Newcomen steam engines. These engines were large, heavy, and produced little power, which resulted in an unfavorable power-to-weight ratio. The heavy weight of the Newcomen engine required a structurally strong boat, and the reciprocating motion of the engine beam required a complicated mechanism to produce propulsion. Rotary motion engines James Watt's design improvements increased the efficiency of the steam engine, improving the power-to-weight ratio, and created an engine capable of rotary motion by using a double-acting cylinder which injected steam at each end of the piston stroke to move the piston back and forth. The rotary steam engine simplified the mechanism required to turn a paddle wheel to propel a boat. Despite the improved efficiency and rotary motion, the power-to-weight ratio of Boulton and Watt steam engine was still low. High-pressure steam engines The high-pressure steam engine was the development that made the steamboat practical. It had a high power-to-weight ratio and was fuel efficient. High pressure engines were made possible by improvements in the design of boilers and engine components so that they could withstand internal pressure, although boiler explosions were common due to lack of instrumentation like pressure gauges. Attempts at making high-pressure engines had to wait until the expiration of the Boulton and Watt patent in 1800. Shortly thereafter high-pressure engines by Richard Trevithick and Oliver Evans were introduced. Compound or multiple expansion steam engines The compound steam engine became widespread in the late 19th century. Compounding uses exhaust steam from a high pressure cylinder to a lower pressure cylinder and greatly improves efficiency. With compound engines it was possible for trans ocean steamers to carry less coal than freight. Compound steam engine powered ships enabled a great increase in international trade. Steam turbines The most efficient steam engine used for marine propulsion is the steam turbine. It was developed near the end of the 19th century and was used throughout the 20th century. History Early designs An apocryphal story from 1851 attributes the earliest steamboat to Denis Papin for a boat he built in 1705. Papin was an early innovator in steam power and the inventor of the steam digester, the first pressure cooker, which played an important role in James Watt's steam experiments. However, Papin's boat was not steam-powered but powered by hand-cranked paddles. A steamboat was described and patented by English physician John Allen in 1729. In 1736, Jonathan Hulls was granted a patent in England for a Newcomen engine-powered steamboat (using a pulley instead of a beam, and a pawl and ratchet to obtain rotary motion), but it was the improvement in steam engines by James Watt that made the concept feasible. 
William Henry of Lancaster, Pennsylvania, having learned of Watt's engine on a visit to England, made his own engine, and put it in a boat. The boat sank, and while Henry made an improved model, he did not appear to have much success, though he may have inspired others. The first steam-powered ship, Pyroscaphe, was a paddle steamer powered by a double-acting steam engine; it was built in France in 1783 by Marquis Claude de Jouffroy and his colleagues as an improvement of an earlier attempt, the 1776 Palmipède. At its first demonstration on 15 July 1783, Pyroscaphe travelled upstream on the river Saône for some fifteen minutes before the engine failed. Presumably this was easily repaired as the boat is said to have made several such journeys. Following this, De Jouffroy attempted to get the government interested in his work, but for political reasons was instructed that he would have to build another version on the Seine in Paris. De Jouffroy did not have the funds for this, and, following the events of the French revolution, work on the project was discontinued after he left the country. Similar boats were made in 1785 by John Fitch in Philadelphia and William Symington in Dumfries, Scotland. Fitch successfully trialled his boat in 1787, and in 1788, he began operating a regular commercial service along the Delaware River between Philadelphia and Burlington, New Jersey, carrying as many as 30 passengers. This boat could typically make and travelled more than during its short length of service. The Fitch steamboat was not a commercial success, as this travel route was adequately covered by relatively good wagon roads. The following year, a second boat made excursions, and in 1790, a third boat ran a series of trials on the Delaware River before patent disputes dissuaded Fitch from continuing. Meanwhile, Patrick Miller of Dalswinton, near Dumfries, Scotland, had developed double-hulled boats propelled by manually cranked paddle wheels placed between the hulls, even attempting to interest various European governments in a giant warship version, long. Miller sent King Gustav III of Sweden an actual small-scale version, long, called Experiment. Miller then engaged engineer William Symington to build his patent steam engine that drove a stern-mounted paddle wheel in a boat in 1785. The boat was successfully tried out on Dalswinton Loch in 1788 and was followed by a larger steamboat the next year. Miller then abandoned the project. 19th century The failed project of Patrick Miller caught the attention of Lord Dundas, Governor of the Forth and Clyde Canal Company, and at a meeting with the canal company's directors on 5 June 1800, they approved his proposals for the use of "a model of a boat by Captain Schank to be worked by a steam engine by Mr Symington" on the canal. The boat was built by Alexander Hart at Grangemouth to Symington's design with a vertical cylinder engine and crosshead transmitting power to a crank driving the paddlewheels. Trials on the River Carron in June 1801 were successful and included towing sloops from the river Forth up the Carron and thence along the Forth and Clyde Canal. In 1801, Symington patented a horizontal steam engine directly linked to a crank. He got support from Lord Dundas to build a second steamboat, which became famous as the Charlotte Dundas, named in honour of Lord Dundas's daughter. 
Symington designed a new hull around his powerful horizontal engine, with the crank driving a large paddle wheel in a central upstand in the hull, aimed at avoiding damage to the canal banks. The new boat was 56 ft (17.1 m) long, 18 ft (5.5 m) wide and 8 ft (2.4 m) depth, with a wooden hull. The boat was built by John Allan and the engine by the Carron Company. The first sailing was on the canal in Glasgow on 4 January 1803, with Lord Dundas and a few of his relatives and friends on board. The crowd were pleased with what they saw, but Symington wanted to make improvements and another more ambitious trial was made on 28 March. On this occasion, the Charlotte Dundas towed two 70 ton barges 30 km (almost 20 miles) along the Forth and Clyde Canal to Glasgow, and despite "a strong breeze right ahead" that stopped all other canal boats it took only nine and a quarter hours, giving an average speed of about 3 km/h (2 mph). The Charlotte Dundas was the first practical steamboat, in that it demonstrated the practicality of steam power for ships, and was the first to be followed by continuous development of steamboats. The American Robert Fulton was present at the trials of the Charlotte Dundas and was intrigued by the potential of the steamboat. While working in France, he corresponded with and was helped by the Scottish engineer Henry Bell, who may have given him the first model of his working steamboat. Fulton designed his own steamboat, which sailed along the River Seine in 1803. Fulton later obtained a Boulton and Watt steam engine, shipped to America, where his first proper steamship was built in 1807, North River Steamboat (later known as Clermont), which carried passengers between New York City and Albany, New York. Clermont was able to make the trip in 32 hours. The steamboat was powered by a Boulton and Watt engine and was capable of long-distance travel. It was the first commercially successful steamboat, transporting passengers along the Hudson River. In 1807 Robert L. Stevens began operation of the Phoenix, which used a high-pressure engine in combination with a low-pressure condensing engine. The first steamboats powered only by high pressure were the Aetna and Pennsylvania, designed and built by Oliver Evans. In October 1811 a ship designed by John Stevens, Little Juliana, would operate as the first steam-powered ferry between Hoboken and New York City. Stevens' ship was engineered as a twin-screw-driven steamboat in juxtaposition to Clermonts Boulton and Watt engine. The design was a modification of Stevens' prior paddle steamer Phoenix, the first steamship to successfully navigate the open ocean in its route from Hoboken to Philadelphia. In 1812, Henry Bell's PS Comet was inaugurated. The steamboat was the first commercial passenger service in Europe and sailed along the River Clyde in Scotland. The Margery, launched in Dumbarton in 1814, in January 1815 became the first steamboat on the River Thames, much to the amazement of Londoners. She operated a London-to-Gravesend river service until 1816, when she was sold to the French and became the first steamboat to cross the English Channel. When she reached Paris, the new owners renamed her Elise and inaugurated a Seine steamboat service. In 1818, Ferdinando I, the first Italian steamboat, left the port of Naples, where it had been built. Sea- and Ocean-going The first sea-going steamboat was Richard Wright's first steamboat "Experiment", an ex-French lugger; she steamed from Leeds to Yarmouth, arriving Yarmouth 19 July 1813. 
"Tug", the first tugboat, was launched by the Woods Brothers, Port Glasgow, on 5 November 1817; in the summer of 1818 she was the first steamboat to travel round the North of Scotland to the East Coast. By 1826, steamboats were employed on a large number of inland and coastal shipping lines in the United Kingdom. Some of the latter crossed the Irish Sea, others crossed the English Channel to Calais or Boulogne-sur-Mer, or crossed the North Sea to Rotterdam. At the time, the General Steam Navigation Company was one of the biggest companies that operated steamboats in short-sea shipping. The Talbot operated by GSNC on the London – Calais line had a tonnage of 156 and 60 hp. Steamships required carrying fuel (coal) at the expense of the regular payload. For this reason for some time sailships remained more economically viable for long voyages. However, as the steam engine technology improved, more power could be generated by the same quantity of fuel and longer distances could be traveled. A steamship built in 1855 required about 40% of its available cargo space to store enough coal to cross the Atlantic, but by the 1860s, transatlantic steamship services became cost-effective and steamships began to dominate these routes. By the 1870s, particularly in conjunction with the opening of the Suez Canal in 1869, South Asia became economically accessible for steamships from Europe. By the 1890s, the steamship technology so improved that steamships became economically viable even on long-distance voyages such as linking Great Britain with its Pacific Asian colonies, such as Singapore and Hong Kong. This resulted in the downfall of sailing. Use by country United States Origins The era of the steamboat in the United States began in Philadelphia in 1787 when John Fitch (1743–1798) made the first successful trial of a 45-foot (14-meter) steamboat on the Delaware River on 22 August 1787, in the presence of members of the United States Constitutional Convention. Fitch later (1790) built a larger vessel that carried passengers and freight between Philadelphia and Burlington, New Jersey on the Delaware. His steamboat was not a financial success and was shut down after a few months service, however this marks the first use of marine steam propulsion in scheduled regular passenger transport service. Oliver Evans (1755–1819) was a Philadelphian inventor born in Newport, Delaware, to a family of Welsh settlers. He designed an improved high-pressure steam engine in 1801 but did not build it (patented 1804). The Philadelphia Board of Health was concerned with the problem of dredging and cleaning the city's dockyards, and in 1805 Evans convinced them to contract with him for a steam-powered dredge, which he called the Oruktor Amphibolos. It was built but was only marginally successful. Evans's high-pressure steam engine had a much higher power-to-weight ratio, making it practical to apply it in locomotives and steamboats. Evans became so depressed with the poor protection that the US patent law gave inventors that he eventually took all his engineering drawings and invention ideas and destroyed them to prevent his children wasting their time in court fighting patent infringements. Robert Fulton constructed a steamboat to ply a route between New York City and Albany, New York on the Hudson River. He successfully obtained a monopoly on Hudson River traffic after terminating a prior 1797 agreement with John Stevens, who owned extensive land on the Hudson River in New Jersey. 
The former agreement had partitioned northern Hudson River traffic to Livingston and southern traffic to Stevens, with both agreeing to use ships designed by Stevens for their operations. With their new monopoly, Fulton and Livingston's boat, named the Clermont after Livingston's estate, could make a profit. The Clermont was nicknamed "Fulton's Folly" by doubters. On Monday, 17 August 1807, the memorable first voyage of the Clermont up the Hudson River was begun. She made the trip to Albany in a little over 32 hours and made the return trip in about 30 hours. The use of steamboats on major US rivers soon followed Fulton's 1807 success. In 1811, the first of a continuous line of river steamboats (a line still in commercial passenger operation) left the dock at Pittsburgh to steam down the Ohio River to the Mississippi and on to New Orleans. In 1817 a consortium in Sackets Harbor, New York, funded the construction of the first US steamboat on the Great Lakes, Ontario, to run on Lake Ontario, beginning the growth of lake commercial and passenger traffic. In his book Life on the Mississippi, river pilot and author Mark Twain described much of the operation of such vessels. Types of ships By 1849 the shipping industry was in transition from sail-powered boats to steam-powered boats and from wooden construction to ever-increasing metal construction. There were basically three different types of ships in use: standard sailing ships of several different types, clippers, and paddle steamers with paddles mounted on the side or rear. River steamboats typically used rear-mounted paddles and had flat bottoms and shallow hulls designed to carry large loads on generally smooth and occasionally shallow rivers. Ocean-going paddle steamers typically used side-wheeled paddles and narrower, deeper hulls designed to travel in the often stormy weather encountered at sea. The hull design was often based on the clipper ship, with extra bracing to support the loads and strains imposed by the paddle wheels when they encountered rough water. The first paddle steamer to make a long ocean voyage was the 320-ton SS Savannah, built in 1819 expressly for packet ship mail and passenger service to and from Liverpool, England. Having departed on 22 May 1819, the watch on the Savannah sighted Ireland after 23 days at sea. The Allaire Iron Works of New York supplied Savannah's engine cylinder, while the rest of the engine components and running gear were manufactured by the Speedwell Ironworks of New Jersey. The low-pressure engine was of the inclined direct-acting type, with a single cylinder. Savannah's engine and machinery were unusually large for their time. The ship's wrought-iron paddlewheels were 16 feet in diameter with eight buckets per wheel. For fuel, the vessel carried 75 tons of coal and 25 cords of wood. The SS Savannah was too small to carry much fuel, and the engine was intended only for use in calm weather and to get in and out of harbors. Under favorable winds the sails alone were able to provide a speed of at least four knots. The Savannah was judged not a commercial success; its engine was removed and she was converted back to a regular sailing ship. By 1848 steamboats built by both United States and British shipbuilders were already in use for mail and passenger service across the Atlantic Ocean—a journey of roughly 3,000 miles (4,800 km). Since paddle steamers typically required large quantities of coal per day to keep their engines running, they were more expensive to run. 
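The economics behind this shift can be made concrete with a rough back-of-the-envelope estimate. The sketch below is illustrative only; the crossing distance, speed, daily coal consumption and cargo capacity are all assumed values chosen for the arithmetic, not figures taken from the text.

```python
# Rough, illustrative estimate of how much carrying capacity an Atlantic crossing's coal consumed.
# All inputs are assumptions made for the sake of the arithmetic, not figures from the text.
crossing_nm = 3000          # approximate transatlantic distance, nautical miles
speed_knots = 9             # assumed average speed of a mid-century paddle steamer
coal_tons_per_day = 30      # assumed daily coal consumption
cargo_capacity_tons = 1000  # assumed total carrying capacity

days_at_sea = crossing_nm / (speed_knots * 24)
coal_needed = days_at_sea * coal_tons_per_day
print(f"{days_at_sea:.1f} days at sea, {coal_needed:.0f} tons of coal")
print(f"Coal would occupy about {coal_needed / cargo_capacity_tons:.0%} of capacity")
# With these assumed numbers, coal alone takes roughly 40% of capacity, which is why
# better engine efficiency (fewer tons of coal per day) made long steam voyages viable.
```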
Initially, nearly all seagoing steamboats were equipped with masts and sails to supplement the steam engine power and to provide power for occasions when the steam engine needed repair or maintenance. These steamships typically concentrated on high-value cargo, mail and passengers and had only moderate cargo capacity because of their required loads of coal. The typical paddle wheel steamship was powered by a coal-burning engine that required firemen to shovel the coal to the burners. By 1849 the screw propeller had been invented and was slowly being introduced, as iron was increasingly used in ship construction and the stress introduced by propellers could be compensated for. As the 1800s progressed, the timber and lumber needed to make wooden ships got ever more expensive, and the iron plate needed for iron ship construction got much cheaper as the massive iron works at Merthyr Tydfil, Wales, for example, got ever more efficient. The propeller put a lot of stress on the rear of the ships and would not see widespread use until the conversion from wooden boats to iron boats was complete—well underway by 1860. By the 1840s the ocean-going steamship industry was well established, as the Cunard Line and others demonstrated. The last sailing frigate of the US Navy had been launched in 1855. West Coast In the mid-1840s the acquisition of Oregon and California opened up the West Coast to American steamboat traffic. Starting in 1848, Congress subsidized the Pacific Mail Steamship Company with $199,999 to set up regular packet ship, mail, passenger, and cargo routes in the Pacific Ocean. This regularly scheduled route went from Panama City, Nicaragua and Mexico to and from San Francisco and Oregon. Panama City was the Pacific terminus of the Isthmus of Panama trail across Panama. The Atlantic Ocean mail contract from East Coast cities and New Orleans to and from the Chagres River in Panama was won by the United States Mail Steamship Company, whose first paddle wheel steamship, the SS Falcon (1848), was dispatched on 1 December 1848 to the Caribbean (Atlantic) terminus of the Isthmus of Panama trail—the Chagres River. The SS California (1848), the first Pacific Mail Steamship Company paddle wheel steamship, left New York City on 6 October 1848 with only a partial load of her roughly 60 saloon (about $300 fare) and 150 steerage (about $150 fare) passenger capacity. Only a few were going all the way to California. Her crew numbered about 36 men. She left New York well before confirmed word of the California Gold Rush had reached the East Coast. Once the California Gold Rush was confirmed by President James Polk in his State of the Union address on 5 December 1848, people started rushing to Panama City to catch the SS California. The California picked up more passengers in Valparaíso, Chile and Panama City, Panama and showed up in San Francisco on 28 February 1849, loaded with about 400 passengers—twice the number she had been designed for. She had left behind about another 400–600 potential passengers still looking for passage from Panama City. The SS California had made the trip from Panama and Mexico after steaming around Cape Horn from New York—see SS California (1848). The trips by paddle wheel steamship to Panama and Nicaragua from New York, Philadelphia, and Boston, via New Orleans and Havana, took about two weeks. Trips across the Isthmus of Panama or Nicaragua typically took about one week by native canoe and mule back. 
The trip between San Francisco and Panama City could be done by paddle wheel steamer in about three weeks. In addition, before 1850 travel via the Panama route typically involved a two- to four-week wait in Panama City to find a ship going on to San Francisco. It was not until 1850 that enough paddle wheel steamers were available on the Atlantic and Pacific routes to establish regularly scheduled journeys. Other steamships soon followed, and by late 1849, paddle wheel steamships like the SS McKim (1848) were carrying miners and their supplies from San Francisco up the extensive Sacramento–San Joaquin River Delta to Stockton, California, Marysville, California, Sacramento, etc., to get closer to the gold fields. Steam-powered tugboats and towboats started working in San Francisco Bay soon after this to expedite shipping in and out of the bay. As the passenger, mail and high-value freight business to and from California boomed, more and more paddle steamers were brought into service—eleven by the Pacific Mail Steamship Company alone. The trip to and from California via Panama by paddle wheel steamer could be done, if there were no waits for shipping, in about 40 days—over 100 days less than by wagon or 160 days less than a trip around Cape Horn. About 20–30% of the California Argonauts are thought to have returned to their homes, mostly on the East Coast of the United States, via Panama—the fastest way home. Many returned to California after settling their business in the East, with their wives, family and/or sweethearts. Most used the Panama or Nicaragua route until 1855, when the completion of the Panama Railroad made the Panama route much easier, faster and more reliable. Between 1849 and 1869, when the first transcontinental railroad was completed across the United States, about 800,000 travelers had used the Panama route. Most of the roughly $50,000,000 of gold found each year in California was shipped East via the Panama route on paddle steamers, mule trains and canoes and, later, the Panama Railroad across Panama. After the Panama Railroad was completed in 1855, the Panama route was by far the quickest and easiest way to get to or from California from the East Coast of the U.S. or Europe. Most California-bound merchandise still used the slower but cheaper Cape Horn sailing ship route. The sinking of the paddle steamer SS Central America (the Ship of Gold) in a hurricane on 12 September 1857, with the loss of about $2 million in California gold, indirectly led to the Panic of 1857. Steamboat traffic, including passenger and freight business, grew exponentially in the decades before the Civil War. So too did the economic and human losses inflicted by snags, shoals, boiler explosions, and human error. Civil War During the US Civil War the Battle of Hampton Roads, often referred to as either the Battle of the Monitor and Merrimack or the Battle of Ironclads, was fought over two days with steam-powered ironclad warships, 8–9 March 1862. The battle occurred in Hampton Roads, a roadstead in Virginia where the Elizabeth and Nansemond Rivers meet the James River just before it enters Chesapeake Bay adjacent to the city of Norfolk. The battle was part of the effort of the Confederate States of America to break the Union naval blockade, which had cut off Virginia from all international trade. The Civil War in the West was fought to control major rivers, especially the Mississippi and Tennessee Rivers, using paddlewheelers. 
Only the Union had them (the Confederacy captured a few but was unable to use them). The Battle of Vicksburg involved monitors and ironclad riverboats. The USS Cairo is a survivor of the Vicksburg battle. Trade on the river was suspended for two years because of the Confederate blockade of the Mississippi before the Union victory at Vicksburg reopened the river on 4 July 1863. The triumph of Eads's ironclads, and Farragut's seizure of New Orleans, secured the river for the Union. Although Union forces gained control of Mississippi River tributaries, travel there was still subject to interdiction by the Confederates. The ambush of the steamboat J. R. Williams, which was carrying supplies from Fort Smith to Fort Gibson along the Arkansas River on 16 July 1863, demonstrated this. The steamboat was destroyed, the cargo was lost, and the tiny Union escort was run off. The loss did not affect the Union war effort, however. The worst of all steamboat accidents occurred at the end of the Civil War in April 1865, when the steamboat Sultana, carrying an over-capacity load of returning Union soldiers recently freed from a Confederate prison camp, blew up, causing more than 1,700 deaths. Mississippi and Missouri river traffic For most of the 19th century and part of the early 20th century, trade on the Mississippi River was dominated by paddle-wheel steamboats. Their use generated rapid development of the economies of port cities; the exploitation of agricultural and commodity products, which could be more easily transported to markets; and prosperity along the major rivers. Their success led to penetration deep into the continent, where the Anson Northup in 1859 became the first steamer to cross the Canada–US border on the Red River. Steamboats would also be involved in major political events, as when Louis Riel seized the International at Fort Garry, or when Gabriel Dumont engaged the Northcote at Batoche. Steamboats were held in such high esteem that they could become state symbols; the Steamboat Iowa (1838) is incorporated in the Seal of Iowa because it represented speed, power, and progress. At the same time, the expanding steamboat traffic had severe adverse environmental effects, especially in the Middle Mississippi Valley between St. Louis and the river's confluence with the Ohio. The steamboats consumed much wood for fuel, and the river floodplain and banks became deforested. This led to instability of the banks and the addition of silt to the water, making the river both shallower and hence wider and causing unpredictable lateral movement of the river channel across the wide, ten-mile floodplain, endangering navigation. Boats designated as snagpullers to keep the channels free had crews that sometimes cut remaining large trees back from the banks, exacerbating the problems. In the 19th century, the flooding of the Mississippi became a more severe problem than when the floodplain had been filled with trees and brush. Most steamboats were destroyed by boiler explosions or fires—and many sank in the river, with some of those buried in silt as the river changed course. From 1811 to 1899, 156 steamboats were lost to snags or rocks between St. Louis and the Ohio River. Another 411 were damaged by fire, explosions or ice during that period. One of the few surviving Mississippi sternwheelers from this period, Julius C. Wilkie, was operated as a museum ship at Winona, Minnesota, until its destruction in a fire in 1981. The replacement, built in situ, was not a steamboat. The replica was scrapped in 2008. 
From 1844 through 1857, luxurious palace steamers carried passengers and cargo around the North American Great Lakes. Great Lakes passenger steamers reached their zenith during the century from 1850 to 1950. The SS Badger is the last of the once-numerous passenger-carrying steam-powered car ferries operating on the Great Lakes. A unique style of bulk carrier known as the lake freighter was developed on the Great Lakes. The St. Marys Challenger, launched in 1906, is the oldest operating steamship in the United States. She runs a Skinner Marine Unaflow 4-cylinder reciprocating steam engine as her power plant. Women started to become steamboat captains in the late 19th century. The first woman to earn her steamboat master's license was Mary Millicent Miller, in 1884. In 1888, Callie Leach French earned her first-class license. In 1892, she earned a master's license, becoming the only woman operating on the Mississippi River to hold both. French towed a showboat up and down the rivers until 1907 and boasted that she had never had an accident or lost a boat. Another early steamboat captain was Blanche Douglass Leathers, who earned her license in 1894. Mary Becker Greene earned her license in 1897 and along with her husband started the Greene Line. Steamboats in rivers on the west side of the Mississippi River Steamboats also operated on the Red River to Shreveport, Louisiana. In April 1815, Captain Henry Miller Shreve was the first person to bring a steamboat, the Enterprise, up the Red River. By 1839, Captain Henry Miller Shreve had also broken up the Great Raft, a log jam that had been 160 miles long on the river. In the late 1830s, the steamboats on the rivers west of the Mississippi were long, wide, shallow-draft vessels, lightly built, with the engine on the deck. These newer steamboats could sail in just 20 inches of water. Contemporaries claimed that they could "run with a lot of heavy dew". Walking the steamboat over sandbars or away from reefs Walking the boat was a way of lifting the bow of a steamboat, as if on crutches, up and over a sandbar using poles, blocks, and strong rigging, with the paddlewheels helping to lift and move the ship forward in successive steps. Moving a boat off a sandbar by its own action was known as "walking the boat" or "grass-hoppering". Two long, strong poles were pushed forward from the bow on either side of the boat into the sandbar at a steep angle. Near the end of each pole a block was secured with a strong rope or clamp, and a line passed through its pulleys and down through a pair of similar blocks attached to the deck near the bow. The end of each line went to a winch which, when turned, drew the line taut and, bearing on the poles, slightly raised the bow of the boat. Turning the forward paddlewheels while the poles were set caused the bow of the boat to rise and move the boat forward perhaps a few feet. It was laborious and dangerous work for the crew, even with a steam donkey-driven capstan winch. Double-tripping Double-tripping means making two voyages by leaving part of a steamboat's cargo ashore to lighten the boat's load during times of extremely low water or when ice impedes progress. The boat had to return (and therefore make a second trip) to retrieve the cargo. Piston Rings, Steel replaced cotton seals, 1854 1854: John Ramsbottom publishes a report on his use of oversized split steel piston rings, which maintain a seal by outward spring tension on the cylinder wall. 
This improved efficiency by allowing much better sealing (compared to earlier cotton seals), which allowed significantly higher system pressures before "blow-by" was experienced. Allen Steam Engine at 3 to 5 times higher speeds, 1862 1862: The Allen steam engine (later called Porter-Allen) is exhibited at the London Exhibition. It is precision engineered and balanced, allowing it to operate at three to five times the speed of other stationary engines. The short stroke and high speed minimize condensation in the cylinder, significantly improving efficiency. The high speed allows direct coupling or the use of reduced-size pulleys and belting. Boilers, Water Tubes, Not Explosive, 1867 Triple Expansion Steam Engine, 1881 1881: Alexander C. Kirk designs the first practical triple-expansion engine, which was installed in SS Aberdeen (1881). Steam Turbine, 1884 20th century The Belle of Louisville is the oldest operating steamboat in the United States, and the oldest operating Mississippi River-style steamboat in the world. She was laid down as Idlewild in 1914, and is currently located in Louisville, Kentucky. Five major commercial steamboats currently operate on the inland waterways of the United States. The only remaining overnight cruising steamboat is the 432-passenger American Queen, which operates week-long cruises on the Mississippi, Ohio, Cumberland and Tennessee Rivers 11 months out of the year. The others are day boats: the steamer Chautauqua Belle, operating on Chautauqua Lake, New York; the Minne Ha-Ha, operating on Lake George, New York; the Belle of Louisville in Louisville, Kentucky, operating on the Ohio River; and the Natchez in New Orleans, Louisiana, operating on the Mississippi River. For modern craft operated on rivers, see the Riverboat article. Canada In Canada, the city of Terrace, British Columbia, celebrates "Riverboat Days" each summer. Built on the banks of the Skeena River, the city depended on the steamboat for transportation and trade into the 20th century. The first steamer to enter the Skeena was the Union in 1864. In 1866 the Mumford attempted to ascend the river, but was only able to reach the Kitsumkalum River. It was not until 1891 that the Hudson's Bay Company sternwheeler Caledonia successfully negotiated Kitselas Canyon and reached Hazelton. A number of other steamers were built around the turn of the 20th century, in part due to the growing fish industry and the gold rush. For more information, see Steamboats of the Skeena River. Sternwheelers were an instrumental transportation technology in the development of Western Canada. They were used on most of the navigable waterways of Manitoba, Saskatchewan, Alberta, BC (British Columbia) and the Yukon at one time or another, generally being supplanted by the expansion of railroads and roads. In the more mountainous and remote areas of the Yukon and BC, working sternwheelers lived on well into the 20th century. The simplicity of these vessels and their shallow draft made them indispensable to pioneer communities that were otherwise virtually cut off from the outside world. Because of their shallow, flat-bottomed construction (the Canadian examples of the western river sternwheeler generally needed less than three feet of water to float in), they could nose up almost anywhere along a riverbank to pick up or drop off passengers and freight. Sternwheelers would also prove vital to the construction of the railroads that eventually replaced them. 
They were used to haul supplies, track and other materials to construction camps. The simple, versatile, locomotive-style boilers fitted to most sternwheelers after about the 1860s could burn coal, when available in more populated areas like the lakes of the Kootenays and the Okanagan region in southern BC, or wood in the more remote areas, such as on the Yukon River or in northern BC. The hulls were generally wooden, although iron, steel and composite hulls gradually overtook them. They were braced internally with a series of built-up longitudinal timbers called "keelsons". Further resilience was given to the hulls by a system of "hog rods" or "hog chains" that were fastened into the keelsons and led up and over vertical masts called "hog-posts", and back down again. Like their counterparts on the Mississippi and its tributaries, and the vessels on the rivers of California, Idaho, Oregon, Washington and Alaska, the Canadian sternwheelers tended to have fairly short life-spans. The hard usage they were subjected to and the inherent flexibility of their shallow wooden hulls meant that relatively few of them had careers longer than a decade. In the Yukon, two vessels are preserved: the SS Klondike in Whitehorse and the SS Keno in Dawson City. Many derelict hulks can still be found along the Yukon River. In British Columbia, the Moyie, built by the Canadian Pacific Railway (CPR) in 1898, was operated on Kootenay Lake in south-eastern BC until 1957. It has been carefully restored and is on display in the village of Kaslo, where it acts as a tourist attraction right next to the information centre in downtown Kaslo. The Moyie is the world's oldest intact sternwheeler. The SS Sicamous and the SS Naramata (a steam tug and icebreaker), built by the CPR at Okanagan Landing on Okanagan Lake in 1914, have been preserved in Penticton at the south end of Okanagan Lake. The SS Samson V is the only Canadian steam-powered sternwheeler that has been preserved afloat. It was built in 1937 by the Canadian federal Department of Public Works as a snagboat for clearing logs and debris out of the lower reaches of the Fraser River and for maintaining docks and aids to navigation. The fifth in a line of Fraser River snagpullers, the Samson V has engines, a paddlewheel and other components that were passed down from the Samson II of 1914. It is now moored on the Fraser River as a floating museum in its home port of New Westminster, near Vancouver, BC. The oldest operating steam-driven vessel in North America is the RMS Segwun. It was built in Scotland in 1887 to cruise the Muskoka Lakes, District of Muskoka, Ontario, Canada. Originally named the S.S. Nipissing, it was converted from a side-paddle-wheel steamer with a walking-beam engine into a two-counter-rotating-propeller steamer. The first woman steamboat captain on the Columbia River was Minnie Mossman Hill, who earned her master's and pilot's license in 1887. Great Britain Engineer Robert Fourness and his cousin, physician James Ashworth, are said to have had a steamboat running between Hull and Beverley, after having been granted British Patent No. 1640 of March 1788 for a "new invented machine for working, towing, expediting and facilitating the voyage of ships, sloops and barges and other vessels upon the water". 
James Oldham, MICE, described how well he knew those who had built the Fourness and Ashworth steamboat in a lecture entitled "On the rise, progress and present position of steam navigation in Hull", which he gave at the 23rd Meeting of the British Association for the Advancement of Science in Hull, England on 7 September 1853. The first commercially successful steamboat in Europe, Henry Bell's Comet of 1812, started a rapid expansion of steam services on the Firth of Clyde, and within four years a steamer service was in operation on the inland Loch Lomond, a forerunner of the lake steamers still gracing Swiss lakes. On the Clyde itself, within ten years of Comet's start in 1812 there were nearly fifty steamers, and services had started across the Irish Sea to Belfast and on many British estuaries. By 1900 there were over 300 Clyde steamers. People have had a particular affection for the Clyde puffers, small steam freighters of traditional design developed to use the Scottish canals and to serve the Highlands and Islands. They were immortalised by the tales of Para Handy's boat Vital Spark by Neil Munro and by the film The Maggie, and a small number are being conserved to continue in steam around the west highland sea lochs. From 1850 to the early decades of the 20th century Windermere, in the English Lake District, was home to many elegant steam launches. They were used for private parties, watching the yacht races or, in one instance, commuting to work via the rail connection to Barrow-in-Furness. Many of these fine craft were saved from destruction when steam went out of fashion and are now part of the collection at the Windermere Steamboat Museum. The collection includes SL Dolly, 1850, thought to be the world's oldest mechanically powered boat, and several of the classic Windermere launches. Today the 1900 steamer SS Sir Walter Scott still sails on Loch Katrine, while on Loch Lomond PS Maid of the Loch is being restored, and in the English Lakes the oldest operating passenger yacht, SY Gondola (built 1859, rebuilt 1979), sails daily during the summer season on Coniston Water. The paddle steamer Waverley, built in 1947, is the last survivor of these fleets, and the last seagoing paddle steamer in the world. This ship sails a full season of cruises every year from places around Britain, and has sailed across the English Channel for a visit to commemorate the sinking of her predecessor, built in 1899, at the Battle of Dunkirk in 1940. After the Clyde, the Thames estuary was the main growth area for steamboats, starting with the Margery and the Thames in 1815, which were both brought down from the Clyde. Until the arrival of railways from 1838 onwards, steamers steadily took over the role of the many sail and rowed ferries, with at least 80 ferries in service by 1830 on routes from London to Gravesend and Margate, and upstream to Richmond. By 1835, the Diamond Steam Packet Company, one of several popular companies, reported that it had carried over 250,000 passengers in the year. The first steamboat constructed of iron, the Aaron Manby, was laid down at the Horseley Ironworks in Staffordshire in 1821 and launched at the Surrey Docks in Rotherhithe. After testing in the Thames, the boat steamed to Paris, where she was used on the River Seine. Three similar iron steamers followed within a few years. There are few genuine steamboats left on the River Thames; however, a handful remain. The SL (steam launch) Nuneham is a genuine Victorian steamer built in 1898, operated on the non-tidal upper Thames by the Thames Steam Packet Boat Company. 
It is berthed at Runnymede. SL Nuneham was built at Port Brimscombe on the Thames and Severn Canal by Edwin Clarke. She was built for Salter Bros at Oxford for the regular passenger service between Oxford and Kingston. The original Sissons triple-expansion steam engine was removed in the 1960s and replaced with a diesel engine. In 1972, the SL Nuneham was sold to a London boat operator and entered service on the Westminster Pier to Hampton Court route. In 1984 the boat was sold again – now practically derelict – to French Brothers Ltd at Runnymede as a restoration project. Over a number of years French Brothers carefully restored the launch to its former specification. A similar Sissons triple-expansion engine was found in a museum in America, shipped back to the UK and installed, along with a new coal-fired Scotch boiler designed and built by Alan McEwen of Keighley, Yorkshire. The superstructure was reconstructed to the original design and elegance, including the raised roof, wood-panelled saloon and open top deck. The restoration was completed in 1997 and the launch was granted an MCA passenger certificate for 106 passengers. SL Nuneham was entered back into service by French Brothers Ltd, trading as the Thames Steam Packet Boat Company. Europe Built in 1856, PS Skibladner is the oldest steamship still in operation, serving towns along Lake Mjøsa in Norway. In Denmark, steamboats were a popular means of transportation in earlier times, mostly for recreational purposes. They were deployed to carry passengers for short distances along the coastline or across larger lakes. Although they later fell out of favour, some of the original boats are still in operation in a few places, such as the Hjejlen. Built in 1861, this steamboat is second only to the Norwegian Skibladner as the oldest steamship in operation, and sails the lake of Julsø near Silkeborg. Swiss lakes are home to a number of large steamships. On Lake Lucerne, five paddle steamers are still in service: Uri (built in 1901, 800 passengers), Unterwalden (1902, 800 passengers), Schiller (1906, 900 passengers), Gallia (1913, 900 passengers, the fastest paddle-wheeler on European lakes) and Stadt Luzern (1928, 1200 passengers, the last steamship built for a Swiss lake). There are also five steamers, as well as some old steamships converted to diesel-powered paddlewheelers, on Lake Geneva, two steamers on Lake Zurich and single ones on other lakes. In Austria the paddle-wheeler Gisela (250 passengers) of 1871 vintage continues in service on the Traunsee. The paddle-wheeler Hohentwiel of 1913 is the oldest running passenger ship on Lake Constance. In the Netherlands, a steamboat is used for the annual Sinterklaas celebration. According to tradition, Sinterklaas always arrives in the Netherlands by steamboat. The steamer is called Pakjesboot 12. New Zealand The New Zealand-built 1912 steamer TSS Earnslaw still makes regular sight-seeing trips across Lake Wakatipu, an alpine lake near Queenstown. Vietnam Seeing the great potential of steam-powered vessels, Vietnamese Emperor Minh Mạng attempted to reproduce a French-made steamboat. The first test in 1838 was a failure, as the boiler broke. The task supervisor was put in chains, and two officials from the Ministry of Construction, Nguyễn Trung Mậu and Ngô Kim Lân, were jailed for making a false report. The project was assigned again to Hoàng Văn Lịch and Võ Huy Trinh. In a second test two months later, the engine performed well. The Emperor rewarded the two handsomely. 
He commented that although such a machine could be purchased from the Westerners, it was important that his engineers and mechanics acquaint themselves with modern machinery, and therefore no expense was too great. Encouraged by the success, Minh Mạng ordered the engineers to study and develop steam engines and steamers to equip his naval fleets. By the end of Minh Mạng's reign three steamers had been produced, named Yến Phi, Vân Phi and Vụ Phi. However, his successor could not maintain the industry due to financial problems, worsened by many years of social unrest under his rule.
Technology
Naval transport
null
178702
https://en.wikipedia.org/wiki/Pound%20%28force%29
Pound (force)
The pound of force or pound-force (symbol: lbf, sometimes written lb_f) is a unit of force used in some systems of measurement, including English Engineering units and the foot–pound–second system. Pound-force should not be confused with pound-mass (lb), often simply called "pound", which is a unit of mass; nor should these be confused with foot-pound (ft⋅lbf), a unit of energy, or pound-foot (lbf⋅ft), a unit of torque. Definitions The pound-force is equal to the gravitational force exerted on a mass of one avoirdupois pound on the surface of Earth. Since the 18th century, the unit has been used in low-precision measurements, for which small changes in Earth's gravity (which varies from equator to pole by up to half a percent) can safely be neglected. The 20th century, however, brought the need for a more precise definition, requiring a standardized value for acceleration due to gravity. Product of avoirdupois pound and standard gravity The pound-force is the product of one avoirdupois pound (exactly 0.45359237 kg) and the standard acceleration due to gravity, 9.80665 m/s2 (approximately 32.174 ft/s2). The standard values of acceleration of the standard gravitational field (gn) and the international avoirdupois pound (lb) result in a pound-force equal to 4.4482216152605 N. This definition can be rephrased in terms of the slug. A slug has a mass of 32.174049 lb. A pound-force is the amount of force required to accelerate a slug at a rate of 1 ft/s2, so: 1 lbf = 1 slug⋅ft/s2 = 32.174049 lb⋅ft/s2. Conversion to other units Foot–pound–second (FPS) systems of units In some contexts, the term "pound" is used almost exclusively to refer to the unit of force and not the unit of mass. In those applications, the preferred unit of mass is the slug, i.e. lbf⋅s2/ft. In other contexts, the unit "pound" refers to a unit of mass. The international standard symbol for the pound as a unit of mass is lb. In the "engineering" systems of units, the weight of the mass unit (pound-mass) on Earth's surface is approximately equal to the force unit (pound-force). This is convenient because one pound mass exerts one pound force due to gravity. Note, however, that unlike the other systems the force unit is not equal to the mass unit multiplied by the acceleration unit—using Newton's second law in the form F = ma/gc requires another factor, gc, usually taken to be 32.174049 (lb⋅ft)/(lbf⋅s2). "Absolute" systems are coherent systems of units: by using the slug as the unit of mass, the "gravitational" FPS system avoids the need for such a constant. The SI is an "absolute" metric system with kilogram and meter as base units. Pound of thrust The term pound of thrust is an alternative name for pound-force in specific contexts. It is frequently seen in US sources on jet engines and rocketry, some of which continue to use the FPS notation. For example, the thrust produced by each of the Space Shuttle's two Solid Rocket Boosters was approximately 2,900,000 lbf (13,000 kN), or roughly 5,800,000 lbf (26,000 kN) together.
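The relationships above can be checked numerically. The following minimal sketch (not part of the source article) derives the pound-force in newtons from the defined pound and standard gravity, and shows where the gc factor comes from; the constant names are ad hoc.

```python
# Sketch: deriving the pound-force and the g_c factor from defined constants.
LB_TO_KG = 0.45359237        # avoirdupois pound, exact by definition (kg)
STANDARD_GRAVITY = 9.80665   # standard acceleration due to gravity, exact (m/s^2)
FT_TO_M = 0.3048             # international foot, exact (m)

# 1 lbf = 1 lb x g_n, expressed in newtons (kg*m/s^2)
lbf_in_newtons = LB_TO_KG * STANDARD_GRAVITY
print(f"1 lbf = {lbf_in_newtons:.10f} N")          # ~4.4482216153 N

# g_c in engineering units: how many lb*ft/s^2 correspond to one lbf
g_c = STANDARD_GRAVITY / FT_TO_M                   # ~32.174049 (lb*ft)/(lbf*s^2)
print(f"g_c = {g_c:.6f} lb*ft/(lbf*s^2)")

# Check: a mass of 1 slug (32.174049 lb) accelerated at 1 ft/s^2 experiences 1 lbf
slug_in_kg = g_c * LB_TO_KG
force_newtons = slug_in_kg * 1.0 * FT_TO_M         # F = m * a, with a = 1 ft/s^2
print(f"1 slug x 1 ft/s^2 = {force_newtons / lbf_in_newtons:.6f} lbf")
```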
Physical sciences
Force
Basics and measurement
178769
https://en.wikipedia.org/wiki/Intravenous%20therapy
Intravenous therapy
Intravenous therapy (abbreviated as IV therapy) is a medical technique that administers fluids, medications and nutrients directly into a person's vein. The intravenous route of administration is commonly used for rehydration or to provide nutrients for those who cannot, or will not—due to reduced mental states or otherwise—consume food or water by mouth. It may also be used to administer medications or other medical therapy such as blood products or electrolytes to correct electrolyte imbalances. Attempts at providing intravenous therapy have been recorded as early as the 1400s, but the practice did not become widespread until the 1900s after the development of techniques for safe, effective use. The intravenous route is the fastest way to deliver medications and fluid replacement throughout the body as they are introduced directly into the circulatory system and thus quickly distributed. For this reason, the intravenous route of administration is also used for the consumption of some recreational drugs. Many therapies are administered as a "bolus" or one-time dose, but they may also be administered as an extended infusion or drip. The act of administering a therapy intravenously, or placing an intravenous line ("IV line") for later use, is a procedure which should only be performed by a skilled professional. The most basic intravenous access consists of a needle piercing the skin and entering a vein which is connected to a syringe or to external tubing. This is used to administer the desired therapy. In cases where a patient is likely to receive many such interventions in a short period (with consequent risk of trauma to the vein), normal practice is to insert a cannula which leaves one end in the vein, and subsequent therapies can be administered easily through tubing at the other end. In some cases, multiple medications or therapies are administered through the same IV line. IV lines are classified as "central lines" if they end in a large vein close to the heart, or as "peripheral lines" if their output is to a small vein in the periphery, such as the arm. An IV line can be threaded through a peripheral vein to end near the heart, which is termed a "peripherally inserted central catheter" or PICC line. If a person is likely to need long-term intravenous therapy, a medical port may be implanted to enable easier repeated access to the vein without having to pierce the vein repeatedly. A catheter can also be inserted into a central vein through the chest, which is known as a tunneled line. The specific type of catheter used and site of insertion are affected by the desired substance to be administered and the health of the veins in the desired site of insertion. Placement of an IV line may cause pain, as it necessarily involves piercing the skin. Infections and inflammation (termed phlebitis) are also both common side effects of an IV line. Phlebitis may be more likely if the same vein is used repeatedly for intravenous access, and can eventually develop into a hard cord which is unsuitable for IV access. The unintentional administration of a therapy outside a vein, termed extravasation or infiltration, may cause other side effects. Uses Medical uses Intravenous (IV) access is used to administer medications and fluid replacement which must be distributed throughout the body, especially when rapid distribution is desired. Another use of IV administration is the avoidance of first-pass metabolism in the liver. 
Substances that may be infused intravenously include volume expanders, blood-based products, blood substitutes, medications and nutrition. Fluid solutions Fluids may be administered as part of "volume expansion", or fluid replacement, through the intravenous route. Volume expansion consists of the administration of fluid-based solutions or suspensions designed to target specific areas of the body which need more water. There are two main types of volume expander: crystalloids and colloids. Crystalloids are aqueous solutions of mineral salts or other water-soluble molecules. Colloids contain larger insoluble molecules, such as gelatin. Blood itself is considered a colloid. The most commonly used crystalloid fluid is normal saline, a solution of sodium chloride at 0.9% concentration, which is isotonic with blood. Lactated Ringer's (also known as Ringer's lactate) and the closely related Ringer's acetate are mildly hypotonic solutions often used in those who have significant burns. Colloids preserve a high colloid osmotic pressure in the blood, whereas crystalloids decrease it through hemodilution. Crystalloids are generally much cheaper than colloids. Buffer solutions, which are used to correct acidosis or alkalosis, are also administered through intravenous access. Lactated Ringer's solution, when used as a fluid expander or as a base solution to which medications are added, also has some buffering effect. Another solution administered intravenously as a buffering solution is sodium bicarbonate. Medication and treatment Medications may be mixed into the fluids mentioned above, commonly normal saline or dextrose solutions. Compared with other routes of administration, such as oral medications, the IV route is the fastest way to deliver fluids and medications throughout the body. For this reason, the IV route is commonly preferred in emergency situations or when a fast onset of action is desirable. In extremely high blood pressure (termed a hypertensive emergency), IV antihypertensives may be given to quickly decrease the blood pressure in a controlled manner to prevent organ damage. In atrial fibrillation, IV amiodarone may be administered to attempt to restore normal heart rhythm. IV medications can also be used for chronic health conditions such as cancer, for which chemotherapy drugs are commonly administered intravenously. In some cases, such as with vancomycin, a loading or bolus dose of medicine is given before beginning a dosing regimen to more quickly increase the concentration of medication in the blood. The bioavailability of an IV medication is by definition 100%, unlike oral administration, where medication may not be fully absorbed or may be metabolized prior to entering the bloodstream. For some medications, there is virtually zero oral bioavailability, so they can only be given intravenously, as there is insufficient uptake by other routes of administration; similarly, in severe dehydration the intravenous route allows rapid rehydration when oral intake is inadequate. The unpredictability of oral bioavailability in different people is also a reason for a medication to be administered IV, as with furosemide. Oral medications also may be less desirable if a person is nauseous or vomiting, or has severe diarrhea, as these may prevent the medicine from being fully absorbed from the gastrointestinal tract. 
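To make the bioavailability point above concrete, absolute oral bioavailability is conventionally estimated by comparing dose-normalized drug exposure (area under the concentration–time curve, AUC) after oral and IV dosing. The sketch below only illustrates that arithmetic; the dose and AUC values are hypothetical, not taken from this article.

```python
# Hypothetical illustration: absolute oral bioavailability F relative to an IV dose.
# F = (AUC_oral / AUC_iv) * (dose_iv / dose_oral); by definition F = 1 (100%) for IV dosing.
def absolute_bioavailability(auc_oral, dose_oral, auc_iv, dose_iv):
    """Return the fraction of an oral dose that reaches the systemic circulation."""
    return (auc_oral / auc_iv) * (dose_iv / dose_oral)

# Made-up example values (mg doses, mg*h/L exposures), chosen only to show the calculation:
f = absolute_bioavailability(auc_oral=12.0, dose_oral=40.0, auc_iv=30.0, dose_iv=40.0)
print(f"Oral bioavailability ~ {f:.0%}")  # 40% of the oral dose reaches the bloodstream
```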
In cases of nausea, vomiting, or severe diarrhea, a medication may be given IV only until the patient can tolerate an oral form of the medication. The switch from IV to oral administration is usually performed as soon as viable, as it generally saves cost and time compared with IV administration. Whether a medication can potentially be switched to an oral form is sometimes considered when choosing appropriate antibiotic therapy for use in a hospital setting, as a person is unlikely to be discharged if they still require IV therapy. Some medications, such as aprepitant, are chemically modified to be better suited for IV administration, forming a prodrug such as fosaprepitant. This can be for pharmacokinetic reasons or to delay the effect of the drug until it can be metabolized into the active form. Blood products A blood product (or blood-based product) is any component of blood which is collected from a donor for use in a blood transfusion. Blood transfusions can be used in massive blood loss due to trauma, or can be used to replace blood lost during surgery. Blood transfusions may also be used to treat severe anaemia or thrombocytopenia caused by a blood disease. Early blood transfusions consisted of whole blood, but modern medical practice commonly uses only components of the blood, such as packed red blood cells, fresh frozen plasma or cryoprecipitate. Nutrition Parenteral nutrition is the act of providing required nutrients to a person through an intravenous line. This is used in people who are unable to get nutrients normally, by eating and digesting food. A person receiving parenteral nutrition will be given an intravenous solution which may contain salts, dextrose, amino acids, lipids and vitamins. The exact formulation of the parenteral nutrition used will depend on the specific nutritional needs of the person it is being given to. If a person is receiving all of their nutrition intravenously, it is called total parenteral nutrition (TPN), whereas if a person is receiving only some of their nutrition intravenously, it is called partial parenteral nutrition (or supplemental parenteral nutrition). Imaging Medical imaging relies on being able to clearly distinguish internal parts of the body from each other. One way this is accomplished is through the administration of a contrast agent into a vein. The specific imaging technique being employed will determine the characteristics of an appropriate contrast agent to increase the visibility of blood vessels or other features. Common contrast agents are administered into a peripheral vein, from which they are distributed throughout the circulation to the imaging site. Other uses Use in sports IV rehydration was formerly a common technique for athletes. The World Anti-Doping Agency prohibits intravenous injection of more than 100 mL per 12 hours, except under a medical exemption. The United States Anti-Doping Agency notes that, as well as the dangers inherent in IV therapy, "IVs can be used to change blood test results (such as hematocrit where EPO or blood doping is being used), mask urine test results (by dilution) or by administering prohibited substances in a way that will more quickly be cleared from the body in order to beat an anti-doping test". Players suspended after attending "boutique IV clinics" which offer this sort of treatment include footballer Samir Nasri in 2017 and swimmer Ryan Lochte in 2018. 
Use for hangover treatment In the 1960s, John Myers developed the "Myers' cocktail", a non-prescription IV solution of vitamins and minerals marketed as a hangover cure and general wellness remedy. The first "boutique IV" clinic, offering similar treatments, opened in Tokyo in 2008. These clinics, whose target market was described by Elle as "health nuts who moonlight as heavy drinkers", have been publicized in the 2010s by glamorous celebrity customers. Intravenous therapy is also used in people with acute ethanol toxicity to correct electrolyte and vitamin deficiencies which arise from alcohol consumption. Others In some countries, non-prescription intravenous glucose is used to improve a person's energy, but it is not a part of routine medical care in countries such as the United States, where glucose solutions are prescription drugs. Improperly administered intravenous glucose (called "ringer"), such as that administered clandestinely in store-front clinics, poses increased risks due to improper technique and lack of oversight. Intravenous access is also sometimes used outside of a medical setting for the self-administration of recreational drugs, such as heroin and fentanyl, cocaine, methamphetamine, DMT, and others. Intravenous therapy is also used for veterinary patient management. Types Bolus Some medications can be administered as a bolus dose, which is called an "IV push". A syringe containing the medication is connected to an access port in the primary tubing and the medication is administered through the port. A bolus may be administered rapidly (with a fast depression of the syringe plunger) or may be administered slowly, over the course of a few minutes. The exact administration technique depends on the medication and other factors. In some cases, a bolus of plain IV solution (i.e. without medication added) is administered immediately after the medication bolus to further force the medicine into the bloodstream. This procedure is termed an "IV flush". Certain medications, such as potassium, cannot be administered by IV push due to the extremely rapid onset of action and the intensity of the resulting effects. Infusion An infusion of medication may be used when it is desirable to have a constant blood concentration of a medication over time, such as with some antibiotics including beta-lactams. Continuous infusions, where the next infusion is begun immediately following the completion of the prior one, may also be used to limit variation in drug concentration in the blood (i.e. between the peak drug levels and the trough drug levels). They may also be used instead of intermittent bolus injections for the same reason, such as with furosemide. Infusions can also be intermittent, in which case the medication is administered over a period of time, then stopped, and this is later repeated. Intermittent infusion may be used when there are concerns about the stability of the medicine in solution for long periods of time (a concern with continuous infusions), or to enable the administration of medicines which would be incompatible if administered at the same time in the same IV line, for example vancomycin. Failure to properly calculate and administer an infusion can result in adverse effects, termed infusion reactions. For this reason, many medications, such as vancomycin and many monoclonal antibodies, have a maximum recommended infusion rate. These infusion reactions can be severe, as in the case of vancomycin, where the reaction is termed "red man syndrome". 
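Where a maximum recommended infusion rate exists, the minimum infusion time follows directly from the dose. The sketch below is illustrative only: the 10 mg/min ceiling shown is a commonly cited guideline figure for vancomycin, used here as an assumption rather than as dosing guidance from this article.

```python
# Illustrative sketch: minimum infusion time implied by a maximum infusion rate.
def minimum_infusion_minutes(dose_mg: float, max_rate_mg_per_min: float) -> float:
    """Shortest infusion time (minutes) that does not exceed the rate ceiling."""
    return dose_mg / max_rate_mg_per_min

# Assumed example: a 1,000 mg vancomycin dose with a ~10 mg/min ceiling
# (a commonly cited guideline value, used here only to show the arithmetic).
minutes = minimum_infusion_minutes(dose_mg=1000, max_rate_mg_per_min=10)
print(f"Infuse over at least {minutes:.0f} minutes")  # 100 minutes
```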
Secondary Any additional medication to be administered intravenously at the same time as an infusion may be connected to the primary tubing; this is termed a secondary IV, or IV piggyback. This prevents the need for multiple IV access lines on the same person. When administering a secondary IV medication, the primary bag is held lower than the secondary bag so that the secondary medication can flow into the primary tubing, rather than fluid from the primary bag flowing into the secondary tubing. The fluid from the primary bag is needed to help flush any medication remaining from the secondary IV out of the tubing. If a bolus or secondary infusion is intended for administration in the same line as a primary infusion, the molecular compatibility of the solutions must be considered. Secondary compatibility is generally referred to as "y-site compatibility", named after the shape of the tubing, which has a port for bolus administration. Incompatibility of two fluids or medications can arise due to issues of molecular stability, changes in solubility, or degradation of one of the medications. Methods and equipment Access The simplest form of intravenous access is by passing a hollow needle through the skin directly into a vein. A syringe can be connected directly to this needle, which allows a "bolus" dose to be administered. Alternatively, the needle may be placed and then connected to a length of tubing, allowing an infusion to be administered. The type and location of venous access (i.e. a central line versus a peripheral line, and in which vein the line is placed) can be affected by the potential for some medications to cause peripheral vasoconstriction, which limits circulation to peripheral veins. A peripheral cannula is the most common intravenous access method utilized in hospitals, pre-hospital care, and outpatient medicine. This may be placed in the arm, commonly either at the wrist or at the median cubital vein at the elbow. A tourniquet may be used to restrict the venous drainage of the limb and make the vein bulge, making it easier to locate and place a line in a vein. When used, a tourniquet should be removed before injecting medication to prevent extravasation. The part of the catheter that remains outside the skin is called the connecting hub; it can be connected to a syringe or an intravenous infusion line, or capped with a heparin lock or saline lock, a needleless connection filled with a small amount of heparin or saline solution to prevent clotting between uses of the catheter. Ported cannulae have an injection port on the top that is often used to administer medicine. The thickness and size of needles and catheters can be given in Birmingham gauge or French gauge. A Birmingham gauge of 14 is a very large cannula (used in resuscitation settings) and 24–26 is the smallest. The most common sizes are 16-gauge (a midsize line used for blood donation and transfusion), 18- and 20-gauge (all-purpose lines for infusions and blood draws), and 22-gauge (an all-purpose pediatric line). 12- and 14-gauge peripheral lines are capable of delivering large volumes of fluid very fast, accounting for their popularity in emergency medicine. These lines are frequently called "large bores" or "trauma lines". Peripheral lines A peripheral intravenous line is inserted in peripheral veins, such as the veins in the arms, hands, legs and feet. Medication administered in this way travels through the veins to the heart, from where it is distributed to the rest of the body through the circulatory system. 
The size of the peripheral vein limits the amount and rate of medication which can be administered safely. A peripheral line consists of a short catheter inserted through the skin into a peripheral vein. This is usually in the form of a cannula-over-needle device, in which a flexible plastic cannula comes mounted over a metal trocar. Once the tip of the needle and cannula are placed, the cannula is advanced inside the vein over the trocar to the appropriate position and secured. The trocar is then withdrawn and discarded. Blood samples may also be drawn from the line directly after the initial IV cannula insertion. Central lines A central line is an access method in which a catheter empties into a larger, more central vein (a vein within the torso), usually the superior vena cava, inferior vena cava or the right atrium of the heart. There are several types of central IV access, categorized based on the route the catheter takes from the outside of the body to the central vein output. Peripherally inserted central catheter A peripherally inserted central catheter (also called a PICC line) is a type of central IV access which consists of a cannula inserted through a sheath into a peripheral vein and then carefully fed towards the heart, terminating at the superior vena cava or the right atrium. These lines are usually placed in peripheral veins in the arm, and may be placed using the Seldinger technique under ultrasound guidance. An X-ray is used to verify that the end of the cannula is in the right place if fluoroscopy was not used during the insertion. An EKG can also be used in some cases to determine if the end of the cannula is in the correct location. Tunneled lines A tunneled line is a type of central access which is inserted under the skin, and then travels a significant distance through surrounding tissue before reaching and penetrating the central vein. Using a tunneled line reduces the risk of infection as compared to other forms of access, as bacteria from the skin surface are not able to travel directly into the vein. These catheters are often made of materials that resist infection and clotting. Types of tunneled central lines include the Hickman line or Broviac catheter. A tunnelled line is an option for long term venous access necessary for hemodialysis in people with poor kidney function. Implantable ports An implanted port is a central line that does not have an external connector protruding from the skin for administration of medication. Instead, a port consists of a small reservoir covered with silicone rubber which is implanted under the skin, which then covers the reservoir. Medication is administered by injecting medication through the skin and the silicone port cover into the reservoir. When the needle is withdrawn, the reservoir cover reseals itself. A port cover is designed to function for hundreds of needle sticks during its lifetime. Ports may be placed in an arm or in the chest area. Infusions Equipment used to place and administer an IV line for infusion consists of a bag, usually hanging above the height of the person, and sterile tubing through which the medicine is administered. In a basic "gravity" IV, a bag is simply hung above the height of the person and the solution is pulled via gravity through a tube attached to a needle inserted into a vein. Without extra equipment, it is not possible to precisely control the rate of administration. For this reason, a setup may also incorporate a clamp to regulate flow. 
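For a gravity infusion like the one just described, the flow rate is conventionally set by counting drops; the standard calculation is sketched below, with the drop factor (drops per mL, which varies by tubing set), volume and time chosen purely for illustration.

```python
# Illustrative sketch: setting a gravity infusion by counting drops.
# drip rate (drops/min) = volume (mL) x drop factor (drops/mL) / time (min)
def drops_per_minute(volume_ml: float, time_min: float, drop_factor: float) -> float:
    """Drops per minute needed to deliver volume_ml over time_min with a given tubing set."""
    return volume_ml * drop_factor / time_min

# Assumed example: 1,000 mL ordered over 8 hours using 15 drops/mL tubing.
rate = drops_per_minute(volume_ml=1000, time_min=8 * 60, drop_factor=15)
print(f"Set roughly {rate:.0f} drops per minute")  # ~31 drops/min
```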
Some IV lines may be placed with "Y-sites", devices which enable a secondary solution to be administered through the same line (known as piggybacking). Some systems employ a drip chamber, which prevents air from entering the bloodstream (causing an air embolism), and allows visual estimation of flow rate of the solution. Alternatively, an infusion pump allows precise control over the flow rate and total amount delivered. A pump is programmed based on the number and size of infusions being administered to ensure all medicine is fully administered without allowing the access line to run dry. Pumps are primarily utilized when a constant flow rate is important, or where changes in rate of administration would have consequences. Techniques To reduce pain associated with the procedure, medical staff may apply a topical local anaesthetic (such as EMLA or Ametop) to the skin of the chosen venipuncture area about 45 minutes beforehand. If the cannula is not inserted correctly, or the vein is particularly fragile and ruptures, blood may extravasate into the surrounding tissues; this situation is known as a blown vein or "tissuing". Using this cannula to administer medications causes extravasation of the drug, which can lead to edema, causing pain and tissue damage, and even necrosis depending on the medication. The person attempting to obtain the access must find a new access site proximal to the "blown" area to prevent extravasation of medications through the damaged vein. For this reason it is advisable to site the first cannula at the most distal appropriate vein. Adverse effects Pain Placement of an intravenous line inherently causes pain when the skin is broken and is considered medically invasive. For this reason, when other forms of administration may suffice, intravenous therapy is usually not preferred. This includes the treatment of mild or moderate dehydration with oral rehydration therapy which is an option, as opposed to parenteral rehydration through an IV line. Children in emergency departments being treated for dehydration have better outcomes with oral treatment than intravenous therapy due to the pain and complications of an intravenous line. Cold spray may decrease the pain of putting in an IV. Certain medications also have specific sensations of pain associated with their administration IV. This includes potassium, which when administered IV can cause a burning or painful sensation. The incidence of side effects specific to a medication can be affected by the type of access (peripheral versus central), the rate of administration, or the quantity of drug administered. When medications are administered too rapidly through an IV line, a set of vague symptoms such as redness or rash, fever, and others may occur; this is termed an "infusion reaction" and is prevented by decreasing the rate of administration of the medication. When vancomycin is involved, this is commonly termed "Red Man syndrome" after the rapid flushing which occurs after rapid administration. Infection and inflammation As placement of an intravenous line requires breaking the skin, there is a risk of infection. Skin-dwelling organisms such as coagulase-negative staphylococcus or Candida albicans may enter through the insertion site around the catheter, or bacteria may be accidentally introduced inside the catheter from contaminated equipment. Infection of an IV access site is usually local, causing easily visible swelling, redness, and fever. 
However, pathogens may also enter the bloodstream, causing sepsis, which can be sudden and life-threatening. A central IV line poses a higher risk of sepsis, as it can deliver bacteria directly into the central circulation. A line which has been in place for a longer period of time also increases the risk of infection. Inflammation of the vein may also occur, called thrombophlebitis or simply phlebitis. This may be caused by infection, the catheter itself, or the specific fluids or medication being given. Repeated instances of phlebitis can cause scar tissue to build up along a vein. A peripheral IV line cannot be left in the vein indefinitely out of concern for the risk of infection and phlebitis, among other potential complications. However, recent studies have found that there is no increased risk of complications in those whose IVs were replaced only when clinically indicated versus those whose IVs were replaced routinely. If a line was placed with proper aseptic technique, replacing it more frequently than every 72–96 hours is not recommended. Phlebitis is particularly common in intravenous drug users, and those undergoing chemotherapy, whose veins can become sclerotic and difficult to access over time, sometimes forming a hard, painful "venous cord". A venous cord causes discomfort and pain during IV therapy and makes placement of a new line more difficult, since a line cannot be sited in the affected area. Infiltration and extravasation Infiltration occurs when a non-vesicant IV fluid or medication enters the surrounding tissue as opposed to the desired vein. It may occur when the vein itself ruptures, when the vein is damaged during insertion of the intravascular access device, or from increased vein porosity. Infiltration may also occur if the puncture of the vein by the needle becomes the path of least resistance, for example when a cannula left in place causes the vein to scar. It can also occur upon insertion of an IV line if a tourniquet is not promptly removed. Infiltration is characterized by coolness and pallor to the skin as well as localized swelling or edema. It is treated by removing the intravenous line and elevating the affected limb so the collected fluids drain away. Injections of hyaluronidase around the area can be used to speed the dispersal of the fluid or drug. Infiltration is one of the most common adverse effects of IV therapy and is usually not serious unless the infiltrated fluid is a medication damaging to the surrounding tissue, most commonly a vesicant or chemotherapeutic agent. In such cases, the infiltration is termed extravasation, and may cause necrosis. Others If the solutions administered are colder than the temperature of the body, induced hypothermia can occur. If the temperature change to the heart is rapid, ventricular fibrillation may result. Furthermore, if a solution which is not balanced in concentration is administered, a person's electrolytes may become imbalanced. In hospitals, regular blood tests may be used to proactively monitor electrolyte levels. History Discovery and development The first recorded attempt at administering a therapeutic substance via IV injection was in 1492, when Pope Innocent VIII fell ill and was administered blood from healthy individuals. If this occurred, the treatment did not work and resulted in the death of the donors while not healing the pope.
This story is disputed by some, who claim that the idea of blood transfusions could not have been considered by the medical professionals at the time, or that a complete description of blood circulation was not published until over 100 years later. Some attribute the story to errors in translation of documents from the time, or to intentional fabrication, whereas others still consider it to be accurate. One of the leading medical history textbooks for medical and nursing students has claimed that the entire story was an anti-semitic fabrication. In 1656, Sir Christopher Wren and Robert Boyle experimented with intravenous injection in animals. As stated by Wren, "I Have Injected Wine and Ale in a liveing Dog into the Mass of Blood by a Veine, in good Quantities, till I have made him extremely drunk, but soon after he Pisseth it out." The dog survived, grew fat, and was later stolen from his owner. Boyle attributed authorship to Wren. Richard Lower showed it was possible for blood to be transfused from animal to animal and from animal to man intravenously, the latter being a xenotransfusion. He worked with Edmund King to transfuse sheep's blood into a man who was mentally ill. Lower was interested in advancing science but also believed the man could be helped, either by the infusion of fresh blood or by the removal of old blood. It was difficult to find people who would agree to be transfused, but an eccentric scholar, Arthur Coga, consented and the procedure was carried out by Lower and King before the Royal Society on 23 November 1667. Transfusion gathered some popularity in France and Italy, but medical and theological debates arose, resulting in transfusion being prohibited in France. There was virtually no recorded success with any attempts at injection therapy until the 1800s, when in 1831 Thomas Latta studied the use of IV fluid replacements for cholera treatment. The first solutions which saw widespread use for IV injections were simple "saline-like solutions", which were followed by experiments with various other liquids, including milk, sugar, honey, and egg yolk. In the 1830s, James Blundell, an English obstetrician, used intravenous administration of blood to treat women bleeding profusely during or after delivery. This predated the understanding of blood type, leading to unpredictable results. Modern usage Intravenous therapy was expanded by Italian physician Guido Baccelli in the late 1890s and further developed in the 1930s by Samuel Hirschfeld, Harold T. Hyman and Justine Johnstone Wanger, but was not widely available until the 1950s. Through roughly the 1910s–1920s, fluid replacement that today would be given intravenously was more often given as a Murphy drip, a rectal infusion; IV therapy only gradually displaced that route. In the 1960s, the concept of providing a person's complete nutritional needs through an IV solution began to be seriously considered. The first parenteral nutrition supplementation consisted of hydrolyzed proteins and dextrose. This was followed in 1975 with the introduction of intravenous fat emulsions and vitamins, which were added to form "total parenteral nutrition", that is, nutrition providing protein, fat, and carbohydrates.
Biology and health sciences
Treatments
Health
178953
https://en.wikipedia.org/wiki/Tree-kangaroo
Tree-kangaroo
Tree-kangaroos are marsupials of the genus Dendrolagus, adapted for arboreal locomotion. They inhabit the tropical rainforests of New Guinea and far northeastern Queensland, Australia, along with some of the islands in the region. All tree-kangaroos are considered threatened due to hunting and habitat destruction. They are the only true arboreal macropods. Evolutionary history The evolutionary history of tree-kangaroos possibly begins with a rainforest floor-dwelling pademelon-like ancestor. This ancestor possibly evolved from an arboreal possum-like ancestor, as is suspected of all macropodid marsupials in Australia and New Guinea. During the late Eocene, the Australian/New Guinean continent began a period of drying that caused a retreat in the area of rainforest, which forced the ancestral pademelons to begin living in a drier, rockier environment. After some generations of adaptation to the new environment, the pademelons may have evolved into rock-wallabies (Petrogale spp.), which developed a generalist feeding strategy due to their dependence on a diverse assortment of vegetation refuges. This generalist strategy allowed the rock-wallabies to easily adapt to Malesian rainforest types that were introduced to Australia from Asia during the mid-Miocene. The rock-wallabies that migrated into these introduced forests adapted to spend more time climbing trees. One species in particular, the Proserpine rock-wallaby (Petrogale persephone), shows an equal preference for climbing trees and for living in rocky outcrops. During the Late Miocene, the semi-arboreal rock-wallabies could have evolved into the now extinct tree-kangaroo genus Bohra. Global cooling during the Pleistocene caused continent-wide drying and rainforest retractions in Australia and New Guinea. The rainforest contractions isolated populations of Bohra, which resulted in the evolution of today's tree-kangaroos (Dendrolagus spp.), as they adapted to lifestyles in geographically small and diverse rainforest fragments, and became further specialized for a canopy-dwelling lifestyle. Taxonomy Species These species are assigned to the genus Dendrolagus: Seri's tree-kangaroo (Dendrolagus stellarum) has been described as a subspecies of Doria's tree-kangaroo (D. dorianus stellarum), but some recent authorities have treated it as a separate species based on its absolute diagnosability. The Wondiwoi tree-kangaroo is among the 25 "most wanted lost" species that are the focus of Global Wildlife Conservation's "Search for Lost Species" initiative. The extinct species D. noibano, from the Pleistocene of Chimbu Province, Papua New Guinea, was substantially larger than living species. However, it has since been suggested to be a larger extinct form of Doria's tree-kangaroo. The case for the golden-mantled tree-kangaroo (D. pulcherrimus) is comparable to that of D. stellarum; it was first described as a subspecies of D. goodfellowi, though recent authorities have elevated it to species status based on its absolute diagnosability. A population of the tenkile (Scott's tree-kangaroo) recently discovered in the Bewani Mountains may represent an undescribed subspecies. Distribution and habitat Tree-kangaroos inhabit the tropical rainforests of New Guinea, far northeastern Australia, and some of the islands in the region, in particular the Schouten Islands and the Raja Ampat Islands. Although most species are found in mountainous areas, several also occur in lowlands, such as the aptly named lowlands tree-kangaroo.
Most tree-kangaroos are considered threatened due to hunting and habitat destruction. Because much of their lifestyle involves climbing and jumping between trees, they have evolved an appropriate method of locomotion. Tree-kangaroos live in the treetops, in contrast to the terrestrial kangaroos of mainland Australia. Two species of tree-kangaroo are found in Australia: Bennett's (D. bennettianus), which is found north of the Daintree River, and Lumholtz's (D. lumholtzi). Tree-kangaroos are well adapted to high-altitude regions. Tree-kangaroos must find safe, suitable places for breeding, as they give birth to only one joey per year. They are known to have one of the most relaxed and leisurely birthing seasons, and they breed cautiously in the treetops during the monsoon season. Their habitat also holds danger, as they can easily fall prey to their natural predator, the amethystine python, which also climbs and lives in the treetops. Tree-kangaroos are known to be able to live in both mountainous regions and lowland locations. Description Lumholtz's tree-kangaroo is the smallest of all tree-kangaroos. Its body and head length ranges about , and its tail, , with males weighing an average of 7.2 kg (16 lb) and females 5.9 kg (13 lb). The length of Doria's tree-kangaroo is , with a long tail, and weighs . Matschie's tree-kangaroo has a body and head length of 51 to 81 cm (20 to 32 in); adult males weigh 9–11 kg (20–25 lb) and adult females weigh 7–9 kg (15–20 lb). The grizzled tree-kangaroo grows to a length of 75–90 cm (30 to 35 in), with males being considerably larger than females, and its weight is 8–15 kg (18–33 lb). Tree-kangaroos have several adaptations to an arboreal lifestyle. Compared to terrestrial kangaroos, tree-kangaroos have longer and broader hind feet with longer, curved nails. They also have a sponge-like grip on their paws and the soles of their feet. Tree-kangaroos have a much larger and more pendulous tail than terrestrial kangaroos, giving them enhanced balance while moving about the trees. Locomotion on the ground is by hopping, as with true kangaroos. Like terrestrial kangaroos, tree-kangaroos do not sweat to cool their bodies; rather, they lick their forearms and allow the moisture to evaporate in an adaptive form of behavioural thermoregulation. Behaviour Locomotion Tree-kangaroos are slow and clumsy on the ground. They move at approximately human walking pace and hop awkwardly, leaning their body far forward to balance the heavy tail. However, in trees, they are bold and agile. They climb by wrapping their forelimbs around the trunk of a tree and, while allowing the forelimbs to slide, hop up the tree using their powerful hind legs. They are expert leapers; downward jumps from one tree to another have been recorded, and they have the extraordinary ability to jump to the ground from or more without being hurt. Diet The main diet of the tree-kangaroo is leaves and fruit that it gathers from the trees, but occasionally scavenged from the ground. Tree-kangaroos will also eat grains, flowers, various nuts, sap and tree bark. Some captive tree-kangaroos (perhaps limited to New Guinea species) eat protein foods such as eggs, birds and snakes, making them omnivores. Reproduction Little is known about the reproduction of tree-kangaroos in the wild. The only published data are from captive individuals. Female tree-kangaroos reach sexual maturity as early as 2.04 years of age and males at 4.6 years.
The female's fertile period is estimated to be approximately two months. They have one of the longest marsupial offspring development and maturation periods; pouch life for the young is 246–275 days long and weaning occurs 87–240 days later. Threats The two most significant threats to tree-kangaroos are habitat loss and hunting. Tree-kangaroo habitats are being destroyed or replaced by logging and timber production, along with coffee, rice and wheat production. This habitat loss can leave tree-kangaroos more exposed to predators, such as feral domestic dogs. Being hunted by local community members also contributes markedly to the declines in tree-kangaroo populations. Research conducted on Lumholtz's tree-kangaroo, a species that dwells in the rain forests of northeastern Australia, determined the frequency of causes of death. This showed that of 27 deceased tree-kangaroos, 11 had been killed by vehicles, six by dogs, four by parasites and the remaining six died from other causes. Captivity As of 2021, five of the species are held in captivity. These include populations of Goodfellow's (D. goodfellowi) and Matschie's (D. matschiei), with smaller numbers of Lumholtz's (D. lumholtzi), grizzled (D. inustus), and Doria's (D. dorianus) tree-kangaroos. These are being kept in a variety of facilities across North America, Oceania, and Europe, with smaller holdings in Asia. The World Association of Zoos and Aquariums works with regional zoological associations to coordinate breeding programs that maintain viable breeding populations and genetic diversity outside of the wild populations. In November 2014 at the Adelaide Zoo, an orphaned tree-kangaroo joey was transferred to the pouch of a yellow-footed rock-wallaby when his mother was killed by a falling branch. The joey survived, having been successfully reared by the surrogate mother rock-wallaby. On April 29, 2022, the Bronx Zoo announced the birth of a Matschie's tree-kangaroo joey, the first of its species born at the zoo since 2008. The joey was the size of a human thumbnail at birth.
Biology and health sciences
Diprotodontia
Animals
179197
https://en.wikipedia.org/wiki/Desmostylia
Desmostylia
The Desmostylia (from Greek δεσμά desma, "bundle", and στῦλος stylos, "pillar") are an extinct order of aquatic mammals native to the North Pacific from the early Oligocene (Rupelian) to the late Miocene (Tortonian). Desmostylians are the only known extinct order of marine mammals. The Desmostylia, together with Sirenia and Proboscidea (and possibly Embrithopoda), have traditionally been assigned to the afrotherian clade Tethytheria, a group named after the paleo-ocean Tethys around which they originally evolved. The relationship between the Desmostylia and the other orders within the Tethytheria has been disputed; if the common ancestor of all tethytheres was semiaquatic, the Proboscidea became secondarily terrestrial; alternatively, the Desmostylia and Sirenia could have evolved independently into aquatic mammals. The assignment of Desmostylia to Afrotheria has always been problematic from a biogeographic standpoint, given that Africa was the locus of the early evolution of the Afrotheria while the Desmostylia have only been found along the Pacific Rim. That assignment has been seriously undermined by a 2014 cladistic analysis that places anthracobunids and desmostylians, two major groups of putative non-African afrotheres, close to each other within the laurasiatherian order Perissodactyla. However, a subsequent study shows that, while anthracobunids are definite perissodactyls, desmostylians share the same number of characteristics necessary for either Paenungulata or Perissodactyla, making their former assessment as afrotheres a possibility. Description Desmostylians were large, fully aquatic quadrupeds with massive limbs and short tails. The smallest is Ashoroa laticosta, a relatively large animal at a body length of , while the largest species reached sizes comparable to Steller's sea cow. A desmostylian skull has an elongated and broadened rostrum, with the nasal opening located slightly dorsally. The zygomatic arches are prominent (behind the eyes), the paroccipital processes elongated (downward-pointing processes behind the jaw-joints), and the epitympanic sinuses open into the temporal fossae (cavities above the ear holes). The mandible and maxilla typically have forward-pointing incisors and canine tusks, followed by a long postcanine diastema, partly because of the reduced number of premolars. The cusps of the premolars and molars are composed of densely packed cylinders of thick enamel, giving the order its name ("bundle of columns"). The primitive dental formula is 3.1.4.3, with a trilobate fourth deciduous premolar. The cheek teeth are brachydont and bunodont in primitive genera, but hypsodont in later genera such as Desmostylus, which has many supernumerary cusps. In the postcrania, the clavicle is absent and the sternum consists of a series of heavy, paired, plate-like sternebrae. In adults, the joints between the radius and ulna prevent any movement. The metacarpals are longer than the metatarsals, and each foot has four digits (digit I is vestigial). Behaviour Their dental and skeletal forms suggest that desmostylians were aquatic herbivores dependent on littoral habitats. Their name refers to their highly distinctive molars, in which each cusp was modified into a hollow column, so that a typical molar would have resembled a cluster of pipes, or in the case of worn molars, volcanoes. (This may reflect the close relationship between the Paenungulata, to which this group has been assigned, and the Tubulidentata.) Desmostylus did not chew or eat like any other known animal.
It clenched its teeth, rooted up plants with the help of its tusks and powerful neck, and then sucked them in using strong throat muscles and the shape of the roof of the mouth. Desmostylians are believed to have been aquatic because of a combination of characteristics. Their legs seemed to be adapted for terrestrial locomotion, while a number of other parameters confirm their aquatic nature: Fossils have been found in marine strata. The nares are retracted and the orbits are raised, as in other aquatic mammals. Levels of stable isotopes in their tooth enamel suggest an aquatic diet and environment (carbon and oxygen) and fresh or brackish water (strontium). Their spongy bone structure is similar to that of cetaceans. Based on a comparison of trunk and limb proportions, one study concluded that desmostylians were more terrestrial than aquatic and clearly forelimb-dominated swimmers, hence more similar to "sea bears" than "sea sloths" (as proposed by other researchers). However, a more recent and detailed analysis of desmostylian bone structure has revealed them to be fully aquatic, like sirenians and cetaceans, with their limbs being incapable of supporting their own weight on land. More recent studies vindicate this assessment, as desmostylians had a thoracic morphology more similar to that of sirenians and modern cetaceans than to that of semiaquatic mammals. Its less dense bone structure suggests that Desmostylus had a lifestyle of active swimming and possibly feeding at the surface, while other desmostylians were primarily slow swimmers and/or bottom walkers and sea grass feeders. Habitat A 2017 study on Desmostylus and Paleoparadoxia shows that the former preferred areas shallower than 30 m, while the latter occurred in deep, offshore waters. Distribution Desmostylian fossils are known from the northern Pacific Rim, from southern Japan through Russia, the Aleutian Islands, and the Pacific coast of North America to the southern tip of Baja California. They range from the Early Oligocene to the late Miocene. Extinction Desmostylians, being fully marine herbivores, are thought to have been outcompeted ecologically by dugongid sirenians. In particular, later species like Neoparadoxia are more specialised than previous forms, suggesting increased divergence to compete with sirenians, and sirenian diversity appears to increase with desmostylian decline. Both desmostylians and North Pacific dugongids were apparently kelp specialists, in contrast to marine herbivorous mammals from other regions, whose diets were primarily composed of seagrass. Classification The type species Desmostylus hesperus was originally classified, from a few teeth and vertebrae, as a sirenian, but doubts arose a decade later when more complete fossils were discovered in Japan. Others also proposed that the animals belonged to the Sirenia. One of the most comprehensive collections of desmostylian teeth was amassed by paleontologist John C. Merriam, who concluded on the basis of the molar structure and repeated occurrence in marine beds that the animals had been aquatic, and were probably sirenian. In 1926, the Austrian palaeontologist Othenio Abel suggested origins with monotremes, like the duck-billed platypus, and in 1933, he even created the order "Desmostyloidea", which he placed within the Multituberculata. Abel died shortly after World War II, and his classification won few supporters and has been ignored since.
Because desmostylians were originally known only from skull fragments, teeth, and bits of other bones, general agreement was that they had had flippers and a fin-like tail. The discovery of a complete skeleton from Sakhalin Island in 1941, however, showed that they possessed four legs, with bones as stout as a hippopotamus', and justified the creation of a new order for the desmostylians, formally described by Reinhart in 1953. A major find was announced in October 2015 after scientists examined an extensive group of giant, tusked, quadruped, marine mammal fossils. This discovery, the northernmost for the group to date, was unearthed during excavation for the construction of a school in Unalaska, in the Aleutian Islands. A rendition of a group was drawn by Alaskan artist Ray Troll. Despite their similarities to manatees and elephants, desmostylians were entirely unlike any living creatures. Douglas Emlong's 1971 discovery of the new genus Behemotops from Oregon showed that early desmostylians had more proboscidean-like teeth and jaws than later ones. Despite this discovery, their relationships to manatees and proboscideans remain unresolved. The analysis of Cooper et al. (2014) indicates the similarities with manatees and elephants may be a result of convergence and that they may instead be basal perissodactyls. One proposed classification of Paleoparadoxiidae is:
Order Desmostylia Reinhart, 1953
Family Paleoparadoxiidae Reinhart, 1959
Subfamily Behemotopsinae (Inuzuka, 1987)
Behemotops Domning, Ray, and McKenna, 1986
Behemotops proteus Domning, Ray, and McKenna, 1986 (including Behemotops emlongi Domning, Ray, and McKenna, 1986)
Behemotops katsuiei Inuzuka, 2000b
Subfamily Paleoparadoxiinae (Reinhart, 1959)
Archaeoparadoxia
Archaeoparadoxia weltoni (Clark, 1991)
Paleoparadoxia Reinhart, 1959
Paleoparadoxia tabatai (Tokunaga, 1939) (= Paleoparadoxia media Inuzuka, 2005)
Neoparadoxia Barnes, 2013
Neoparadoxia repenningi (Domning and Barnes, 2007)
Neoparadoxia cecilialina Barnes, 2013
Biology and health sciences
Other afrotheres
Animals
179242
https://en.wikipedia.org/wiki/Infertility
Infertility
Infertility is the inability of a couple to reproduce by natural means. It is usually not the natural state of a healthy adult. Exceptions include children who have not undergone puberty, which is the body's start of reproductive capacity. It is also a normal state in women after menopause. In humans, infertility is the inability to become pregnant after at least one year of unprotected and regular sexual intercourse involving a male and female partner. There are many causes of infertility, including some that medical intervention can treat. Estimates from 1997 suggest that worldwide about five percent of all heterosexual couples have an unresolved problem with infertility. Many more couples, however, experience involuntary childlessness for at least one year, with estimates ranging from 12% to 28%. Male infertility is responsible for 20–30% of infertility cases, while 20–35% are due to female infertility, and 25–40% are due to combined problems in both partners. In 10–20% of cases, no cause is found. Male infertility is most commonly due to deficiencies in the semen, and semen quality is used as a surrogate measure of male fecundity. Male infertility may also be due to retrograde ejaculation, low testosterone, functional azoospermia (in which sperm is not produced, or not produced in sufficient numbers) and obstructive azoospermia, in which the pathway for the sperm (such as the vas deferens) is obstructed. The most common cause of female infertility is age, which generally manifests in sparse or absent menstrual periods leading up to menopause. As women age, the number of ovarian follicles and oocytes (eggs) declines, leading to a reduced ovarian reserve. Some women undergo primary ovarian insufficiency (also known as premature menopause), the loss of ovarian function before age 40, leading to infertility. 85% of infertile couples have an identifiable cause, while the remaining 15% are designated as having unexplained infertility. Of the 85% of identified infertility, 25% are due to disordered ovulation (of which 70% of cases are due to polycystic ovarian syndrome). Tubal infertility, in which there is a structural problem with the fallopian tubes, is responsible for 11–67% of infertility in women of childbearing age, with the large range in prevalence due to the different populations studied. Endometriosis, the presence of endometrial tissue (which normally lines the uterus) outside of the uterus, accounts for 25–40% of female infertility. Women who are fertile experience a period of fertility before and during ovulation, and are infertile for the rest of the menstrual cycle. Fertility awareness methods are used to discern when these changes occur by tracking changes in cervical mucus or basal body temperature. Definition "Demographers tend to define infertility as childlessness in a population of women of reproductive age," whereas the epidemiological definition refers to "trying for" or "time to" a pregnancy, generally in a population of women exposed to a probability of conception. Currently, female fertility normally peaks in young adulthood and diminishes after 35, with pregnancy occurring rarely after age 50. A female is most fertile within 24 hours of ovulation. Male fertility usually peaks in young adulthood and declines after age 40. The length of time a couple must try to conceive before being diagnosed with infertility differs between organizations.
Existing definitions of infertility lack uniformity, rendering comparisons in prevalence between countries or over time problematic. Therefore, data estimating the prevalence of infertility cited by various sources differ significantly. A couple that tries unsuccessfully to have a child after a certain period of time (often a short period, but definitions vary) is sometimes said to be subfertile, meaning less fertile than a typical couple. Both infertility and subfertility are defined similarly and often used interchangeably, but subfertility refers to a delay in conceiving of six to twelve months, whereas infertility is the inability to conceive naturally within a full year. World Health Organization The World Health Organization defines infertility as follows: United States One definition of infertility that is frequently used in the United States by reproductive endocrinologists, doctors who specialize in infertility, to consider a couple eligible for treatment is that: a woman under 35 has not conceived after 12 months of contraceptive-free intercourse, or a woman over 35 has not conceived after six months of contraceptive-free sexual intercourse. United Kingdom In the UK, previous NICE guidelines defined infertility as failure to conceive after regular unprotected sexual intercourse for two years in the absence of known reproductive pathology. Updated NICE guidelines do not include a specific definition, but recommend that "A woman of reproductive age who has not conceived after 1 year of unprotected vaginal sexual intercourse, in the absence of any known cause of infertility, should be offered further clinical assessment and investigation along with her partner, with earlier referral to a specialist if the woman is over 36 years of age." Other definitions Researchers commonly base demographic studies on infertility prevalence over a five-year period. Primary vs. secondary infertility Primary infertility is defined as the absence of a live birth for women who desire a child and have been in a union for at least 12 months, during which they have not used any contraceptives. The World Health Organization also adds that 'women whose pregnancy spontaneously miscarries, or whose pregnancy results in a still born child, without ever having had a live birth would present with primary infertility'. Secondary infertility is defined as the difficulty in conceiving a live birth in couples who previously had a child. Effects Psychological The consequences of infertility are manifold and can include societal repercussions and personal suffering. Advances in assisted reproductive technologies, such as IVF, can offer hope to many couples where treatment is available, although barriers exist in terms of medical coverage and affordability. The medicalization of infertility has unwittingly led to a disregard for the emotional responses that couples experience, which include distress, loss of control, stigmatization, and a disruption in the developmental trajectory of adulthood. One of the main challenges in assessing the distress levels in women with infertility is the accuracy of self-report measures. It is possible that women "fake good" in order to appear mentally healthier than they are. It is also possible that women feel a sense of hopefulness or increased optimism prior to initiating infertility treatment, which is when most assessments of distress are collected.
Some early studies concluded that infertile women did not report significantly different symptoms of anxiety and depression compared with fertile women. The further into treatment a patient goes, the more often they display symptoms of depression and anxiety. Patients with one treatment failure had significantly higher levels of anxiety, and patients with two failures experienced more depression when compared with those without a history of treatment. However, it has also been shown that the more depressed the infertile woman, the less likely she is to start infertility treatment and the more likely she is to drop out after only one cycle. Researchers have also shown that despite a good prognosis and having the finances available to pay for treatment, discontinuation is most often due to psychological reasons. Fertility does not seem to increase when women take antioxidants to reduce the oxidative stress brought on by the situation. Infertility may have psychological effects. Parenthood is one of the major transitions in adult life for both men and women. The stress of the non-fulfilment of a wish for a child has been associated with emotional consequences such as anger, depression, anxiety, marital problems and feelings of worthlessness. Partners may become more anxious to conceive, increasing sexual dysfunction. Marital discord often develops, especially when they are under pressure to make medical decisions. Women trying to conceive often have depression rates similar to women who have heart disease or cancer. Emotional stress and marital difficulties are greater in couples where the infertility lies with the man. Male and female partners respond differently to infertility problems. In general, women show higher depression levels than their male partners when dealing with infertility. A possible explanation may be that women feel more responsible and guilty than men during the process of trying to conceive. On the other hand, infertile men experience psychosomatic distress. Social Having a child is considered to be important in most societies. Infertile couples may experience social and family pressure, leading to a feeling of social isolation. Factors of gender, age, religion, and socioeconomic status are important influences. Societal pressures may affect a couple's decision to seek, avoid, or undergo infertility treatment. Moreover, socioeconomic status influences the psychology of infertile couples: low socioeconomic status is associated with increased chances of developing depression. In many cultures, inability to conceive bears a stigma. In closed social groups, a degree of rejection (or a sense of being rejected by the couple) may cause considerable anxiety and disappointment. Some respond by actively avoiding the issue altogether. In the United States, some treatments for infertility, including diagnostic tests, surgery and therapy for depression, can qualify one for Family and Medical Leave Act leave. It has been suggested that infertility be classified as a form of disability. Sexual Couples that suffer from infertility have a higher risk than other couples of developing sexual dysfunction. The most common sexual issues facing these couples are a decline in sexual desire and erectile dysfunction. Causes Male infertility is responsible for 20–30% of infertility cases, while 20–35% are due to female infertility, and 25–40% are due to combined problems in both partners. In 10–20% of cases, no cause is found.
The most common causes of female infertility are ovulation problems, usually manifested by scanty or absent menstrual periods. Male infertility is most commonly due to deficiencies in the semen, and semen quality is used as a surrogate measure of male fecundity. Iodine Deficiency Iodine deficiency may lead to infertility. Natural infertility Before puberty, humans are naturally infertile; their gonads have not yet developed the gametes required to reproduce: boys' testicles have not developed the sperm cells required to impregnate a female; girls have not begun the process of ovulation which activates the fertility of their egg cells (ovulation is confirmed by the first menstrual cycle, known as menarche, which signals the biological possibility of pregnancy). Infertility in children is commonly referred to as prepubescence (or being prepubescent, an adjective also used to refer to humans without secondary sex characteristics). The absence of fertility in children is considered a natural part of human growth and child development, as the hypothalamus in their brain is still underdeveloped and cannot release the hormones required to activate the gonads' gametes. Fertility in children before the ages of eight or nine is considered a disease known as precocious puberty. This disease is usually triggered by a brain tumor or other related injury. Delayed puberty Delayed puberty, in which puberty is absent or occurs later than the average onset (between the ages of ten and fourteen), may be a cause of infertility. In the United States, girls are considered to have delayed puberty if they have not started menstruating by age 16 (alongside lacking breast development by age 13). Boys are considered to have delayed puberty if they lack enlargement of the testicles by age 14. Delayed puberty affects about 2% of adolescents. Most commonly, puberty may be delayed for several years and still occur normally, in which case it is considered constitutional delay of growth and puberty, a common variation of healthy physical development. Delay of puberty may also occur due to various causes such as malnutrition, various systemic diseases, or defects of the reproductive system (hypogonadism) or of the body's responsiveness to sex hormones. Immune infertility Antisperm antibodies (ASA) have been considered a cause of infertility in around 10–30% of infertile couples. In both men and women, ASA are directed against surface antigens on sperm and can interfere with sperm motility and transport through the female reproductive tract, inhibit capacitation and the acrosome reaction, impair fertilization, influence the implantation process, and impair the growth and development of the embryo. The antibodies are classified into different groups: there are IgA, IgG and IgM antibodies, and they also differ in the location on the spermatozoon to which they bind (head, midpiece, tail). Factors contributing to the formation of antisperm antibodies in women are disturbance of normal immunoregulatory mechanisms, infection, violation of the integrity of the mucous membranes, rape, and unprotected oral or anal sex. Risk factors for the formation of antisperm antibodies in men include the breakdown of the blood-testis barrier, trauma and surgery, orchitis, varicocele, infections, prostatitis, testicular cancer, failure of immunosuppression, and unprotected receptive anal or oral sex with men.
Sexually transmitted infections Infections with the following sexually transmitted pathogens have a negative effect on fertility: Chlamydia trachomatis and Neisseria gonorrhoeae. There is a consistent association between Mycoplasma genitalium infection and female reproductive tract syndromes. M. genitalium infection is associated with increased risk of infertility. Genetic Mutations to the NR5A1 gene encoding steroidogenic factor 1 (SF-1) have been found in a small subset of men with non-obstructive male factor infertility where the cause is unknown. Results of one study investigating a cohort of 315 men revealed changes within the hinge region of SF-1 and no rare allelic variants in fertile control men. Affected individuals displayed more severe forms of infertility such as azoospermia and severe oligozoospermia. Small supernumerary marker chromosomes are abnormal extra chromosomes; they are three times more likely to occur in infertile individuals and account for 0.125% of all infertility cases. See Infertility associated with small supernumerary marker chromosomes and Genetics of infertility#Small supernumerary marker chromosomes and infertility. Other causes Factors that can cause male as well as female infertility include DNA damage. DNA damage reduces fertility in female oocytes, as caused by smoking, other xenobiotic DNA-damaging agents (such as radiation or chemotherapy), or accumulation of the oxidative DNA damage product 8-hydroxy-deoxyguanosine. DNA damage also reduces fertility in male sperm, as caused by oxidative DNA damage, smoking, other xenobiotic DNA-damaging agents (such as drugs or chemotherapy), or other DNA-damaging agents including reactive oxygen species, fever or high testicular temperature. The damaged DNA related to infertility manifests itself by the increased susceptibility to denaturation inducible by heat or acid, or by the presence of double-strand breaks that can be detected by the TUNEL assay. In this assay, the sperm's DNA is denatured and renatured. If DNA fragmentation has occurred (double- and single-strand breaks), a halo will not appear surrounding the spermatozoon; if the spermatozoon does not have damaged DNA, a halo surrounding it can be visualized under the microscope. General factors Diabetes mellitus, thyroid disorders, undiagnosed and untreated coeliac disease, adrenal disease Hypothalamic-pituitary factors Hyperprolactinemia Hypopituitarism The presence of anti-thyroid antibodies is associated with an increased risk of unexplained subfertility, with an odds ratio of 1.5 and a 95% confidence interval of 1.1–2.0. Environmental factors Toxins such as glues, volatile organic solvents or silicones, physical agents, flame retardants, chemical dusts, polychlorinated biphenyls, and pesticides. Tobacco smokers are 60% more likely to be infertile than non-smokers. Other diseases such as chlamydia and gonorrhea can also cause infertility, due to internal scarring (fallopian tube obstruction). Body mass index (BMI), either too high or too low, may be a contributor to infertility. Obesity: Obesity can have a significant impact on male and female fertility. In females, a BMI above 27 increases the risk of infertility 3-fold. Obese women have a higher rate of recurrent, early miscarriage compared to non-obese women. In males, an increase in BMI above 30 may be associated with reduced sperm quality and impaired spermatogenesis leading to infertility.
In males, a high BMI is also associated with low testosterone levels (secondary hypogonadism) and erectile dysfunction, which contribute to infertility. Low weight: females with a very low BMI may have infertility. Common causes of low BMI leading to infertility include anorexia nervosa and other eating disorders, excessive exercise, or relative energy deficiency in sport. Infertility in females with a low BMI is usually due to functional hypothalamic amenorrhea caused by stress-induced inhibition of the hypothalamic-pituitary-ovarian axis. Females The following causes of infertility may only be found in females. For a woman to conceive, certain things have to happen: vaginal intercourse must take place around the time when an egg is released from her ovary; the system that produces eggs has to be working at optimum levels; and her hormones must be balanced. For women, problems with fertilization arise mainly from either structural problems in the fallopian tube or uterus or problems releasing eggs. Infertility may be caused by blockage of the fallopian tubes due to malformations, infections such as chlamydia, or scar tissue. For example, endometriosis can cause infertility with the growth of endometrial tissue in the fallopian tubes or around the ovaries. Endometriosis is more common in women in their mid-twenties and older, especially when childbearing has been postponed. Another major cause of infertility in women may be the inability to ovulate. Ovulatory disorders make up 25% of the known causes of female infertility. Oligo-ovulation or anovulation results in infertility because no oocyte is released monthly. In the absence of an oocyte, there is no opportunity for fertilization and pregnancy. The World Health Organization subdivides ovulatory disorders into four classes:
Hypogonadotropic hypogonadal anovulation: i.e., hypothalamic amenorrhea
Normogonadotropic normoestrogenic anovulation: i.e., polycystic ovarian syndrome (PCOS)
Hypergonadotropic hypoestrogenic anovulation: i.e., premature ovarian failure
Hyperprolactinemic anovulation: i.e., pituitary adenoma
Malformation of the eggs themselves may complicate conception. For example, in polycystic ovarian syndrome (PCOS) the eggs only partially develop within the ovary and there is an excess of male hormones. Some women are infertile because their ovaries do not mature and release eggs. In this case, synthetic FSH by injection or Clomid (clomiphene citrate) via a pill can be given to stimulate follicles to mature in the ovaries. Other factors that can affect a woman's chances of conceiving include being overweight or underweight, or her age, as female fertility declines after the age of 30. Sometimes it can be a combination of factors, and sometimes a clear cause is never established. Common causes of female infertility include: ovulation problems (e.g. PCOS, the leading reason why women present to fertility clinics due to anovulatory infertility), tubal blockage, pelvic inflammatory disease caused by infections like tuberculosis, age-related factors, uterine problems, previous tubal ligation, endometriosis, advanced maternal age, and immune infertility. Males Male infertility is defined as the inability of a male to make a fertile female pregnant after at least one year of unprotected intercourse. Male infertility is estimated to contribute to 35% of infertility in couples.
There are multiple causes for male infertility, including endocrine disorders (usually due to hypogonadism) at an estimated 2% to 5%, sperm transport disorders at 5%, primary testicular defects (which include abnormal sperm parameters without any identifiable cause) at 65% to 80%, and idiopathic infertility (where an infertile male has normal sperm and semen parameters) at 10% to 20%. The main cause of male infertility is low semen quality. In men who have the necessary reproductive organs to procreate, infertility can be caused by low sperm count due to endocrine problems, drugs, radiation, or infection. There may be testicular malformations, hormone imbalance, or blockage of the man's duct system. Although many of these can be treated through surgery or hormonal substitutions, some may be permanent. Infertility associated with viable but immotile sperm may be caused by primary ciliary dyskinesia. The sperm must provide the zygote with DNA, centrioles, and an activation factor for the embryo to develop. A defect in any of these sperm structures may result in infertility that will not be detected by semen analysis. Antisperm antibodies cause immune infertility. Cystic fibrosis can lead to infertility in men by blocking the vas deferens. Adeno-associated virus infection has been linked to poor sperm quality and may contribute to male infertility, based on small observational studies. Unexplained infertility In the US, up to 15% of infertile couples have unexplained infertility, in which no identifiable cause is found. Polymorphisms in folate pathway genes may be a cause of fertility complications in some women with unexplained infertility. Epigenetic modifications in sperm may also be responsible for unexplained infertility. Diagnosis If both partners are young and healthy and have been trying to conceive for one year without success, a visit to a physician or women's health nurse practitioner (WHNP) could help to highlight potential medical problems earlier rather than later. The doctor or WHNP may also be able to suggest lifestyle changes to increase the chances of conceiving. However, there are instances where couples should seek reproductive counseling after only 6 months of trying for a pregnancy: the woman is over 35 years old; the woman has a history of endometriosis; the woman has infrequent or irregular menses; or there is a male factor involved. A doctor or WHNP takes a medical history and gives a physical examination. They can also carry out some basic tests on both partners to see if there is an identifiable reason for not having achieved a pregnancy. Among these tests, blood tests are common and may include serologies to detect infections such as hepatitis B (HBV), hepatitis C (HCV), HIV, syphilis, and rubella. Optional tests like karyotypes can also be performed. For females, specific tests might include measuring antimüllerian hormone (AMH) to assess ovarian reserve, thyroid-stimulating hormone (TSH), prolactin (PRL), and vitamin D levels, which can influence fertility. If necessary, they refer patients to a fertility clinic or local hospital for more specialized tests. The results of these tests help determine the best fertility treatment. Treatment Treatment depends on the cause of infertility, but may include counselling and fertility treatments such as in vitro fertilization. According to ESHRE recommendations, couples with an estimated live birth rate of 40% or higher per year are encouraged to continue aiming for a spontaneous pregnancy.
Drugs used include clomiphene citrate, human menopausal gonadotropin (hMG), follicle-stimulating hormone (FSH), human chorionic gonadotropin (hCG), gonadotropin-releasing hormone (GnRH) analogues, and aromatase inhibitors. Medical treatments Clomiphene is a selective estrogen receptor modulator used for induction of ovulation. It works by blocking the negative feedback from estrogen, creating an increase in gonadotropin-releasing hormone (GnRH), which causes release of luteinizing hormone (LH) and follicle-stimulating hormone (FSH) from the anterior pituitary. FSH and LH act on the ovaries to increase follicle growth and lead to ovulation. Letrozole is an aromatase inhibitor which reduces estradiol levels and increases levels of FSH and LH, which can stimulate ovarian follicle maturation and ovulation. Letrozole is the preferred treatment in those with infertility due to PCOS and is associated with a higher pregnancy rate than other treatments. Both clomiphene and letrozole carry a risk of a multiple gestation pregnancy, with the risk being less than 10%. Those with hypogonadotropic hypogonadism require pulsatile GnRH therapy, which is associated with a 93–100% pregnancy rate after 6 months of therapy. The risk of a multiple gestation pregnancy with gonadotropins is 36%. Ovarian stimulation with clomiphene, aromatase inhibitors, or gonadotropins (especially when combined with intrauterine insemination) carries a risk of ovarian hyperstimulation syndrome, which may occur in 1–5% of cycles and presents as ascites, electrolyte abnormalities and blood clots. Fertility treatments or medications do not increase the risk of breast, ovarian or endometrial cancers. Metformin does not increase the rate of live births in those with infertility (including in those with PCOS) and its use is not recommended. In some cases, in vitro fertilization (IVF) is used, in which induced ovarian follicle stimulation is followed by extraction of oocytes from the ovaries. The oocytes are then fertilized in vitro by sperm, sometimes using intracytoplasmic sperm injection (ICSI), and the fertilized eggs are re-introduced into the uterus in a procedure called embryo transfer. IVF was developed by Robert Edwards and Patrick Steptoe, whose work led to the first successful birth from the technique in 1978. Ovarian stimulation (such as with clomiphene) combined with in vitro fertilization or intrauterine insemination has lower success rates with increasing age. Sperm or oocyte donors with in vitro fertilization and gestational carriers are sometimes used for gay couples, those with severe medical conditions which make pregnancy dangerous or preclude it, those with severe infertility, or females with a non-functioning uterus. Tourism Fertility tourism is the practice of traveling to another country for fertility treatments. Stem cell therapy There are several experimental treatments related to stem cell therapy not yet routinely used in reproductive medicine. These treatments may provide the opportunity for a live birth for people who lack gametes, and also for same-sex couples and single people who want to have offspring. Theoretically, with this therapy, artificial gametes can be produced in vitro. Spermatogonial stem cell transplantation takes place in the seminiferous tubule, allowing the patient to undergo spermatogenesis. This therapy is sometimes used in cancer patients whose sperm have been destroyed by gonadotoxic treatment. Ovarian stem cells may be used to generate new oocytes, which can then be implanted in the uterus after in vitro fertilization.
This therapy is still in the experimental phase. Epidemiology Prevalence of infertility varies depending on the definition, i.e. on the time span involved in the failure to conceive. Infertility rates have increased by 4% since the 1980s, mostly from problems with fecundity due to increasing age. Fertility problems affect one in seven couples in the UK. Most couples (about 84%) who have regular sexual intercourse (that is, every two to three days) and who do not use contraception get pregnant within a year. About 95 out of 100 couples who are trying to get pregnant do so within two years. Women become less fertile as they get older. For women aged 35, about 94% who have regular unprotected sexual intercourse get pregnant after three years of trying. For women aged 38, however, only about 77% do. The effect of age upon men's fertility is less clear. In people going forward for IVF in the UK, roughly half of fertility problems with a diagnosed cause are due to problems with the man, and about half due to problems with the woman. However, about one in five cases of infertility have no clear diagnosed cause. In Britain, male factor infertility accounts for 25% of infertile couples, while 25% remain unexplained. 50% are due to female causes, with 25% due to anovulation and 25% due to tubal problems or other causes. In Sweden, approximately 10% of couples wanting children are infertile. In approximately one-third of these cases the man is the factor, in one third the woman is the factor, and in the remaining third the infertility is a product of factors in both partners. In many lower-income countries, estimating infertility is difficult due to incomplete information and stigmas around infertility and childlessness. Data on income-limited individuals, male infertility, and fertility within non-traditional families may be limited due to traditional social norms. Historical data on fertility and infertility are limited, as any form of study or tracking only began in the early 20th century. Per one account, "The invisibility of marginalised social groups in infertility tracking reflects broader social beliefs about who can and should reproduce. The offspring of privileged social groups are seen as a boon to society. The offspring of marginalised groups are perceived as a burden." Society and culture With the possible exception of infertility in science fiction, films and other fiction depicting the emotional struggles of assisted reproductive technology first had an upswing in the latter part of the 2000s, although the techniques had been available for decades. Yet the number of people who can relate to it by personal experience in one way or another is ever-growing, and the variety of trials and struggles is huge. Pixar's Up contains a depiction of infertility in an extended life montage that lasts the first few minutes of the film. Other individual examples are covered in the individual sub-articles on assisted reproductive technology. Ethics There are several ethical issues associated with infertility and its treatment: high-cost treatments are out of financial reach for some couples; debate over whether health insurance companies (e.g. in the US) should be required to cover infertility treatment; the allocation of medical resources that could be used elsewhere; and the legal status of embryos fertilized in vitro and not transferred in vivo.
Biology and health sciences
Health and fitness
null
179252
https://en.wikipedia.org/wiki/Gastropoda
Gastropoda
Gastropods, commonly known as slugs and snails, belong to a large taxonomic class of invertebrates within the phylum Mollusca called Gastropoda. This class comprises snails and slugs from saltwater, freshwater, and the land. There are many thousands of species of sea snails and slugs, as well as freshwater snails, freshwater limpets, land snails and slugs. The class Gastropoda is a diverse and highly successful class of mollusks within the phylum Mollusca. It contains a vast number of named species, second only to the insects in overall number. The fossil history of this class goes back to the Late Cambrian. A total of 721 families of gastropods are known, of which 245 are extinct and appear only in the fossil record, while 476 are currently extant with or without a fossil record. Gastropoda (previously known as univalves and sometimes spelled "Gasteropoda") are a major part of the phylum Mollusca, and are the most highly diversified class in the phylum, with 65,000 to 80,000 living snail and slug species. The anatomy, behavior, feeding, and reproductive adaptations of gastropods vary significantly from one clade or group to another, so stating many generalities for all gastropods is difficult. The class Gastropoda has an extraordinary diversification of habitats. Representatives live in gardens, woodland, deserts, and on mountains; in small ditches, great rivers, and lakes; in estuaries, mudflats, the rocky intertidal, the sandy subtidal, the abyssal depths of the oceans, including the hydrothermal vents, and numerous other ecological niches, including parasitic ones. Although the name "snail" can be, and often is, applied to all the members of this class, commonly this word means only those species with an external shell big enough that the soft parts can withdraw completely into it. Slugs are gastropods that have no shell or a very small, internal shell; semislugs are gastropods that have a shell that they can partially retreat into but not entirely. The marine shelled species of gastropods include abalone, conches, periwinkles, whelks, and numerous other sea snails that produce seashells that are coiled in the adult stage, though in some the coiling may not be very visible, for example in cowries. In a number of families of species, such as all the various limpets, the shell is coiled only in the larval stage, and is a simple conical structure after that. Etymology In the scientific literature, gastropods were described as "gasteropodes" in 1795. The word gastropod comes from the Greek words for 'stomach' and 'foot', a reference to the fact that the animal's "foot" is positioned below its guts. The earlier name "univalve" means one valve (or shell), in contrast to bivalves, such as clams, which have two valves or shells. Diversity At all taxonomic levels, gastropods are second only to insects in terms of their diversity. Gastropods have the greatest number of named mollusk species. However, estimates of the total number of gastropod species vary widely, depending on the cited sources. The number of gastropod species can be ascertained from estimates of the number of described species of Mollusca with accepted names: about 85,000 (minimum 50,000, maximum 120,000). But an estimate of the total number of Mollusca, including undescribed species, is about 240,000 species. The estimate of 85,000 mollusks includes 24,000 described species of terrestrial gastropods.
Different estimates for aquatic gastropods (based on different sources) give about 30,000 species of marine gastropods, and about 5,000 species of freshwater and brackish gastropods. Many deep-sea species remain to be discovered, as only 0.0001% of the deep-sea floor has been studied biologically. The total number of living species of freshwater snails is about 4,000. Recently extinct species of gastropods (extinct since 1500) number 444, 18 species are now extinct in the wild (but still exist in captivity), and 69 species are "possibly extinct". The number of prehistoric (fossil) species of gastropods is at least 15,000 species. In marine habitats, the continental slope and the continental rise are home to the highest diversity, while the continental shelf and abyssal depths have a low diversity of marine gastropods. Habitat Gastropods are found in a wide range of aquatic and terrestrial habitats, from deep ocean trenches to deserts. Some of the more familiar and better-known gastropods are terrestrial gastropods (the land snails and slugs). Some live in fresh water, but most named species of gastropods live in a marine environment. Gastropods have a worldwide distribution, from the near Arctic and Antarctic zones to the tropics. They have become adapted to almost every kind of existence on earth, having colonized nearly every available medium. In habitats where not enough calcium carbonate is available to build a really solid shell, such as on some acidic soils on land, various species of slugs occur, and also some snails with thin, translucent shells, mostly or entirely composed of the protein conchiolin. Snails such as Sphincterochila boissieri and Xerocrassa seetzeni have adapted to desert conditions. Other snails have adapted to an existence in ditches, near deepwater hydrothermal vents, in oceanic trenches 10,000 meters (6 miles) below the surface, the pounding surf of rocky shores, caves, and many other diverse areas. Gastropods can be accidentally transferred from one habitat to another by other animals, e.g. by birds. Anatomy Snails are distinguished by an anatomical process known as torsion, where the visceral mass of the animal rotates 180° to one side during development, such that the anus is situated more or less above the head. This process is unrelated to the coiling of the shell, which is a separate phenomenon. Torsion is present in all gastropods, but the opisthobranch gastropods are secondarily untorted to various degrees. Torsion occurs in two stages. The first, mechanistic stage is muscular, and the second is mutagenetic. The effects of torsion are primarily physiological. The organism develops by asymmetrical growth, with the majority of growth occurring on the left side. This leads to the loss of right-side anatomy that in most bilaterians is a duplicate of the left side anatomy. The essential feature of this asymmetry is that the anus generally lies to one side of the median plane. The gill-combs, the olfactory organs, the foot slime-gland, nephridia, and the auricle of the heart are single or at least are more developed on one side of the body than the other. Furthermore, there is only one genital orifice, which lies on the same side of the body as the anus. Furthermore, the anus becomes redirected to the same space as the head. This is speculated to have some evolutionary function, as prior to torsion, when retracting into the shell, first the posterior end would get pulled in, and then the anterior. 
Now, the front can be retracted more easily, perhaps suggesting a defensive purpose. Gastropods typically have a well-defined head with two or four sensory tentacles with eyes, and a ventral foot. The foremost division of the foot is called the propodium. Its function is to push away sediment as the snail crawls. The larval shell of a gastropod is called a protoconch. Shell Most shelled gastropods have a one piece shell (with exceptional bivalved gastropods), typically coiled or spiraled, at least in the larval stage. This coiled shell usually opens on the right-hand side (as viewed with the shell apex pointing upward). Numerous species have an operculum, which in many species acts as a trapdoor to close the shell. This is usually made of a horn-like material, but in some molluscs it is calcareous. In the land slugs, the shell is reduced or absent, and the body is streamlined. Some gastropods have adult shells which are bottom heavy due to the presence of a thick, often broad, convex ventral callus deposit on the inner lip and adapical to the aperture which may be important for gravitational stability. Body wall Some sea slugs are very brightly colored. This serves either as a warning, when they are poisonous or contain stinging cells, or to camouflage them on the brightly colored hydroids, sponges, and seaweeds on which many of the species are found. Lateral outgrowths on the body of nudibranchs are called cerata. These contain an outpocketing of digestive glands called the diverticula. Sensory organs and nervous system The sensory organs of gastropods include olfactory organs, eyes, statocysts and mechanoreceptors. Gastropods have no hearing. In terrestrial gastropods (land snails and slugs), the olfactory organs, located on the tips of the four tentacles, are the most important sensory organ. The chemosensory organs of opisthobranch marine gastropods are called rhinophores. The majority of gastropods have simple visual organs, eye spots either at the tip or base of the tentacles. However, "eyes" in gastropods range from simple ocelli that only distinguish light and dark, to more complex pit eyes, and even to lens eyes. In land snails and slugs, vision is not the most important sense, because they are mainly nocturnal animals. The nervous system of gastropods includes the peripheral nervous system and the central nervous system. The central nervous system consists of ganglia connected by nerve cells. It includes paired ganglia: the cerebral ganglia, pedal ganglia, osphradial ganglia, pleural ganglia, parietal ganglia and the visceral ganglia. There are sometimes also buccal ganglia. Digestive system The radula of a gastropod is usually adapted to the food that a species eats. The simplest gastropods are the limpets and abalone, herbivores that use their hard radula to rasp at seaweeds on rocks. Many marine gastropods are burrowers, and have a siphon that extends out from the mantle edge. Sometimes the shell has a siphonal canal to accommodate this structure. A siphon enables the animal to draw water into their mantle cavity and over the gill. They use the siphon primarily to "taste" the water to detect prey from a distance. Gastropods with siphons tend to be either predators or scavengers. Respiratory system Almost all marine gastropods breathe with a gill, but many freshwater species, and the majority of terrestrial species, have a pallial lung. 
The respiratory protein in almost all gastropods is hemocyanin, but one freshwater pulmonate family, the Planorbidae, have hemoglobin as the respiratory protein. In one large group of sea slugs, the gills are arranged as a rosette of feathery plumes on their backs, which gives rise to their other name, nudibranchs. Some nudibranchs have smooth or warty backs with no visible gill mechanism, such that respiration may likely take place directly through the skin. Circulatory system Gastropods have open circulatory system and the transport fluid is hemolymph. Hemocyanin is present in the hemolymph as the respiratory pigment. Excretory system The primary organs of excretion in gastropods are nephridia, which produce either ammonia or uric acid as a waste product. The nephridium also plays an important role in maintaining water balance in freshwater and terrestrial species. Additional organs of excretion, at least in some species, include pericardial glands in the body cavity, and digestive glands opening into the stomach. Reproductive system Courtship is a part of mating behavior in some gastropods, including some of the Helicidae. Again, in some land snails, an unusual feature of the reproductive system of gastropods is the presence and utilization of love darts. In many marine gastropods other than the opisthobranchs, there are separate sexes (dioecious/gonochoric); most land gastropods, however, are hermaphrodites. Life cycle Courtship is a part of the behavior of mating gastropods with some pulmonate families of land snails creating and utilizing love darts, the throwing of which have been identified as a form of sexual selection. The main aspects of the life cycle of gastropods include: Egg laying and the eggs of gastropods The embryonic development of gastropods The larvae or larval stadium: some gastropods may be trochophore and/or veliger Estivation and hibernation (each of these are present in some gastropods only) The growth of gastropods Courtship and mating in gastropods: fertilization is internal or external according to the species. External fertilization is common in marine gastropods. Feeding behavior The diet of gastropods differs according to the group considered. Marine gastropods include some that are herbivores, detritus feeders, predatory carnivores, scavengers, parasites, and also a few ciliary feeders, in which the radula is reduced or absent. Land-dwelling species can chew up leaves, bark, fruit, fungi, and decomposing animals while marine species can scrape algae off the rocks on the seafloor. Certain species such as the Archaeogastropda maintain horizontal rows of slender marginal teeth. In some species that have evolved into endoparasites, such as the eulimid Thyonicola doglieli, many of the standard gastropod features are strongly reduced or absent. A few sea slugs are herbivores and some are carnivores. The carnivorous habit is due to specialisation. Many gastropods have distinct dietary preferences and regularly occur in close association with their food species. Some predatory carnivorous gastropods include: cone shells, Testacella, Daudebardia, turrids, ghost slugs and others. Terrestrial gastropods Studies based on direct observations, fecal and gut analyses, as well as food-choice experiments, have revealed that snails and slugs consume a wide variety of food resources. Their diet spans from living plants at various developmental stages such as pollen, seeds, seedlings, and wood, to decaying plant material like leaf litter. 
Additionally, they feed on fungi, lichens, algae, soil, and even other animals, both living and dead, including their feces. Given this diverse diet, terrestrial gastropods can be classified as herbivores, omnivores, carnivores, and detritivores. However, the majority are microbivores, primarily consuming microbes associated with decaying organic material. Despite their ecological importance, there is a notable lack of research exploring the specific roles that terrestrial gastropods play within soil food webs. Fungivory Many terrestrial gastropod mollusks are known to consume fungi, a behavior observed in various species of snails and slugs across distinct families. Notable examples of fungivore slugs include members of the family Philomycidae, which feed on slime molds (myxomycetes), and the Ariolimacidae, which primarily consume mushrooms (basidiomycetes). Snail families that contain fungivore species include Clausiliidae, Macrocyclidae, and Polygyridae. Mushroom-producing fungi used as a food source by snails and slugs include species from several genera. Some examples are milk-caps (Lactarius spp.), the oyster mushroom (Pleurotus ostreatus), and the penny bun. Additionally, slugs feed on fungi from other genera, such as Agaricus, Pleurocybella, and Russula. Snails have also been reported to feed on penny buns as well as Coprinellus, Aleurodiscus, Armillaria, Grifola , Marasmiellus, Mycena, Pholiota, and Ramaria. As for slime molds, commonly consumed species include Stemonitis axifera and Symphytocarpus flaccidus. Feeding behaviors in slugs exhibit considerable variation. Some species display selectivity, consuming specific parts or developmental stages of fungi. For instance, certain slugs may target fungi only at particular stages of maturity, such as immature fruiting bodies or spore-producing structures. Conversely, other species show little to no selectivity, consuming entire mushrooms regardless of developmental stage. This variability stresses the diverse dietary adaptations among slug species and their ecological roles in fungal consumption. Moreover, by consuming fungi, snails and slugs can also indirectly help in their dispersal by carrying along some of their spores or the fungi themselves. Genetics Gastropods exhibit an important degree of variation in mitochondrial gene organization when compared to other animals. Main events of gene rearrangement occurred at the origin of Patellogastropoda and Heterobranchia, whereas fewer changes occurred between the ancestors of Vetigastropoda (only tRNAs D, C and N) and Caenogastropoda (a large single inversion, and translocations of the tRNAs D and N). Within Heterobranchia, gene order seems relatively conserved, and gene rearrangements are mostly related with transposition of tRNA genes. Geological history and evolution The first gastropods were exclusively marine, with the earliest known representatives appearing in the Late Cambrian (e.g., Chippewaella, Strepsodiscus). However, their only definitive gastropod feature is a coiled shell, which raises the possibility that they may belong to the stem lineage of gastropods, or might not be gastropods at all. Early Cambrian species such as Helcionella, Barskovia, and Scenella are no longer considered gastropods, and the small coiled Aldanella from the same period is probably not even a mollusk. It is not until the Ordovician that true crown-group gastropods appear. By this time, gastropods had diversified into a variety of forms and inhabited a range of aquatic environments. 
Fossil gastropods from the early Paleozoic are often poorly preserved, making identification difficult. However, the Silurian genus Poleumita contains at least 15 identified species. Overall, gastropods were less common in the Paleozoic than bivalves. Most Paleozoic gastropods belong to primitive groups, some of which still exist today. By the Carboniferous period, many gastropod shell shapes found in fossils resemble those of modern species, though most of these early forms are not directly related to living gastropods. It was during the Mesozoic era that the ancestors of many extant gastropods evolved. One of the earliest known terrestrial gastropods is Anthracopupa (or Maturipupa), found in the Carboniferous Coal Measures of Europe. However, land snails and their relatives were rare before the Cretaceous period. In Mesozoic rocks, gastropods become more common in the fossil record, with well-preserved shells. Fossils are found in ancient beds from both freshwater and marine environments. Notable examples include the Purbeck Marble of the Jurassic and the Sussex Marble of the early Cretaceous, both from southern England. These limestones contain abundant remains of the pond snail Viviparus. Cenozoic rocks yield vast numbers of gastropod fossils, many of which are closely related to modern species. The diversity of gastropods increased significantly at the start of this era, alongside that of bivalves. Certain trail-like markings preserved in ancient sedimentary rocks are thought to have been made by gastropods crawling over the soft mud and sand. Although these trace fossils are of debatable origin, some of them do resemble the trails made by living gastropods today. Gastropod fossils may sometimes be confused with ammonites or other shelled cephalopods. An example of this is Bellerophon from the limestones of the Carboniferous period in Europe, the shell of which is planispirally coiled and can be mistaken for the shell of a cephalopod. Gastropods also provide important evidence of faunal changes during the Pleistocene epoch, reflecting the impacts of advancing and retreating ice sheets. Phylogeny A cladogram showing the phylogenic relationships of Gastropoda with example species: Cocculiniformia, Neomphalina and Lower Heterobranchia are not included in the above cladogram. Taxonomy Current classification The present backbone classification of gastropods relies on the results of phylogenomic analyses. Consensus has not been reached yet considering the relationships at the very base of the gastropod tree of life, but otherwise the major groups are known with confidence. Gastropoda Adenogonogastropoda (Angiogastropoda) Apogastropoda Caenogastropoda Heterobranchia Neritimorpha Patellogastropoda Vetigastropoda (including Neomphaliones) History Since Darwin, biological taxonomy has attempted to reflect the phylogeny of organisms, i.e., the tree of life. The classifications used in taxonomy attempt to represent the precise interrelatedness of the various taxa. However, the taxonomy of the Gastropoda is constantly being revised and so the versions shown in various texts can differ in major ways. In the older classification of the gastropods, there were four subclasses: Opisthobranchia (gills to the right and behind the heart). Gymnomorpha (no shell) Prosobranchia (gills in front of the heart). 
Pulmonata (with a lung instead of gills) The taxonomy of the Gastropoda is still under revision, and more and more of the old taxonomy is being abandoned, as the results of DNA studies slowly become clearer. Nevertheless, a few of the older terms such as "opisthobranch" and "prosobranch" are still sometimes used in a descriptive way. New insights based on DNA sequencing have produced some revolutionary taxonomic revisions. In the case of the Gastropoda, the taxonomy is now gradually being rewritten to embody strictly monophyletic groups (only one lineage of gastropods in each group). Integrating new findings into a working taxonomy remains challenging. Consistent ranks within the taxonomy at the level of subclass, superorder, order, and suborder have already been abandoned as unworkable. Ongoing revisions of the higher taxonomic levels are expected in the near future. Convergent evolution, which appears to exist at especially high frequency in gastropods, may account for the observed differences between the older phylogenies, which were based on morphological data, and more recent gene-sequencing studies. In 2004, Brian Simison and David R. Lindberg showed possible diphyletic origins of the Gastropoda based on mitochondrial gene order and amino acid sequence analyses of complete genes. In 2005, Philippe Bouchet and Jean-Pierre Rocroi made sweeping changes in the systematics, resulting in the Bouchet & Rocroi taxonomy, which is a step closer to the evolutionary history of the phylum. The Bouchet & Rocroi classification system is based partly on the older systems of classification, and partly on new cladistic research. In the past, the taxonomy of gastropods was largely based on phenetic morphological characters of the taxa. The recent advances are based more on molecular characters from DNA and RNA research. This has made the taxonomical ranks and their hierarchy controversial. In 2017, Bouchet, Rocroi, and other collaborators published a significantly updated version of the 2005 taxonomy. In the Bouchet et al. taxonomy, the authors used unranked clades for taxa above the rank of superfamily (replacing the ranks suborder, order, superorder and subclass), while using the traditional Linnaean approach for all taxa below the rank of superfamily. Whenever monophyly has not been tested, or is known to be paraphyletic or polyphyletic, the term "group" or "informal group" has been used. The classification of families into subfamilies is often not well resolved. Fixed ranks such as family, genus, and species, however, remain useful for practical classification and are still used in the World Register of Marine Species (WoRMS). Many researchers also continue to use traditional ranks because they are entrenched in the literature and familiar to specialists and non-specialists alike. Ecology and conservation Many gastropod species face threats from habitat destruction, pollution, and climate change. Some species are endangered or have become extinct due to these factors. Conservation efforts often focus on protecting their habitats, especially in freshwater and terrestrial ecosystems. Predators Gastropods are prey to a wide range of organisms depending on the environment. In marine habitats, gastropods are preyed upon by fish, marine birds, marine mammals, crustaceans, and other mollusks such as cephalopods. In terrestrial environments, gastropod predators include insects, arachnids (spiders, harvestmen), birds, and mammals, among others.
Biology and health sciences
Gastropods
Animals
179260
https://en.wikipedia.org/wiki/No-hair%20theorem
No-hair theorem
The no-hair theorem (which, strictly speaking, remains a conjecture) states that all stationary black hole solutions of the Einstein–Maxwell equations of gravitation and electromagnetism in general relativity can be completely characterized by only three independent externally observable classical parameters: mass, angular momentum, and electric charge. Other characteristics (such as geometry and magnetic moment) are uniquely determined by these three parameters, and all other information (for which "hair" is a metaphor) about the matter that formed a black hole or is falling into it "disappears" behind the black-hole event horizon and is therefore permanently inaccessible to external observers after the black hole "settles down" (by emitting gravitational and electromagnetic waves). Physicist John Archibald Wheeler expressed this idea with the phrase "black holes have no hair", which was the origin of the name. In a later interview, Wheeler said that Jacob Bekenstein coined this phrase, recalling: "Richard Feynman objected to the phrase that seemed to me to best symbolize the finding of one of the graduate students: graduate student Jacob Bekenstein had shown that a black hole reveals nothing outside it of what went in, in the way of spinning electric particles. It might show electric charge, yes; mass, yes; but no other features, or as he put it, 'A black hole has no hair'. Richard Feynman thought that was an obscene phrase and he didn't want to use it. But that is a phrase now often used to state this feature of black holes, that they don't indicate any other properties other than a charge and angular momentum and mass." The first version of the no-hair theorem for the simplified case of the uniqueness of the Schwarzschild metric was shown by Werner Israel in 1967. The result was quickly generalized to the cases of charged or spinning black holes. There is still no rigorous mathematical proof of a general no-hair theorem, and mathematicians refer to it as the no-hair conjecture. Even in the case of gravity alone (i.e., zero electric fields), the conjecture has only been partially resolved by results of Stephen Hawking, Brandon Carter, and David C. Robinson, under the additional hypothesis of non-degenerate event horizons and the technical, restrictive and difficult-to-justify assumption of real analyticity of the space-time continuum. Example Suppose two black holes have the same masses, electrical charges, and angular momenta, but the first black hole was made by collapsing ordinary matter whereas the second was made out of antimatter; the conjecture states that they will nevertheless be completely indistinguishable to an observer outside the event horizon. None of the special particle physics pseudo-charges (i.e., the global charges baryonic number, leptonic number, etc., all of which would be different for the originating masses of matter that created the black holes) are conserved in the black hole, or if they are conserved somehow then their values would be unobservable from the outside. Changing the reference frame Every isolated unstable black hole decays rapidly to a stable black hole; and (excepting quantum fluctuations) stable black holes can be completely described (in a Cartesian coordinate system) at any moment in time by these eleven numbers: mass–energy, electric charge, position (three components), linear momentum (three components), and angular momentum (three components).
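As a compact restatement of this counting (the symbols below are introduced here only for illustration and are not part of the original text), the eleven numbers, together with the frame choice described in the next paragraph, can be written as

$$
\bigl(\underbrace{M}_{1},\ \underbrace{Q}_{1},\ \underbrace{\vec{x}}_{3},\ \underbrace{\vec{p}}_{3},\ \underbrace{\vec{J}}_{3}\bigr),
\qquad 1+1+3+3+3 = 11 ,
$$

and choosing a frame with $\vec{x}=\vec{p}=0$ and $\vec{J}=J\hat{z}$ removes $3+3+2=8$ of them, leaving the frame-independent triple $(M,\,J,\,Q)$ of the Kerr–Newman metric.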
These numbers represent the conserved attributes of an object which can be determined from a distance by examining its gravitational and electromagnetic fields. All other variations in the black hole will either escape to infinity or be swallowed up by the black hole. By changing the reference frame one can set the linear momentum and position to zero and orient the spin angular momentum along the positive z axis. This eliminates eight of the eleven numbers, leaving three which are independent of the reference frame: mass, angular momentum magnitude, and electric charge. Thus any black hole that has been isolated for a significant period of time can be described by the Kerr–Newman metric in an appropriately chosen reference frame. Extensions The no-hair theorem was originally formulated for black holes within the context of a four-dimensional spacetime, obeying the Einstein field equation of general relativity with zero cosmological constant, in the presence of electromagnetic fields, or optionally other fields such as scalar fields and massive vector fields (Proca fields, etc.). It has since been extended to include the case where the cosmological constant is positive (which recent observations are tending to support). Magnetic charge, if detected as predicted by some theories, would form the fourth parameter possessed by a classical black hole. Counterexamples Counterexamples in which the theorem fails are known in spacetime dimensions higher than four; in the presence of non-abelian Yang–Mills fields, non-abelian Proca fields, some non-minimally coupled scalar fields, or skyrmions; or in some theories of gravity other than Einstein's general relativity. However, these exceptions are often unstable solutions and/or do not lead to conserved quantum numbers so that "The 'spirit' of the no-hair conjecture, however, seems to be maintained". It has been proposed that "hairy" black holes may be considered to be bound states of hairless black holes and solitons. In 2004, the exact analytical solution of a (3+1)-dimensional spherically symmetric black hole with minimally coupled self-interacting scalar field was derived. This showed that, apart from mass, electrical charge and angular momentum, black holes can carry a finite scalar charge which might be a result of interaction with cosmological scalar fields such as the inflaton. The solution is stable and does not possess any unphysical properties; however, the existence of a scalar field with the desired properties is only speculative. Observational results The results from the first observation of gravitational waves in 2015 provide some experimental evidence consistent with the uniqueness of the no-hair theorem. This observation is consistent with Stephen Hawking's theoretical work on black holes in the 1970s. Soft hair A study by Sasha Haco, Stephen Hawking, Malcolm Perry and Andrew Strominger postulates that black holes might contain "soft hair", giving the black hole more degrees of freedom than previously thought. This hair permeates at a very low-energy state, which is why it didn't come up in previous calculations that postulated the no-hair theorem. This was the subject of Hawking's final paper which was published posthumously.
Physical sciences
Theory of relativity
Physics
179505
https://en.wikipedia.org/wiki/Physical%20property
Physical property
A physical property is any property of a physical system that is measurable. The changes in the physical properties of a system can be used to describe its changes between momentary states. A quantifiable physical property is called physical quantity. Measurable physical quantities are often referred to as observables. Some physical properties are qualitative, such as shininess, brittleness, etc.; some general qualitative properties admit more specific related quantitative properties, such as in opacity, hardness, ductility, viscosity, etc. Physical properties are often characterized as intensive and extensive properties. An intensive property does not depend on the size or extent of the system, nor on the amount of matter in the object, while an extensive property shows an additive relationship. These classifications are in general only valid in cases when smaller subdivisions of the sample do not interact in some physical or chemical process when combined. Properties may also be classified with respect to the directionality of their nature. For example, isotropic properties do not change with the direction of observation, and anisotropic properties do have spatial variance. It may be difficult to determine whether a given property is a material property or not. Color, for example, can be seen and measured; however, what one perceives as color is really an interpretation of the reflective properties of a surface and the light used to illuminate it. In this sense, many ostensibly physical properties are called supervenient. A supervenient property is one which is actual, but is secondary to some underlying reality. This is similar to the way in which objects are supervenient on atomic structure. A cup might have the physical properties of mass, shape, color, temperature, etc., but these properties are supervenient on the underlying atomic structure, which may in turn be supervenient on an underlying quantum structure. Physical properties are contrasted with chemical properties which determine the way a material behaves in a chemical reaction. List of properties The physical properties of an object that are traditionally defined by classical mechanics are often called mechanical properties. Other broad categories, commonly cited, are electrical properties, optical properties, thermal properties, etc. Physical properties include: absorption (physical) absorption (electromagnetic) albedo angular momentum area boiling point brittleness capacitance color concentration density dielectric ductility distribution efficacy elasticity electric charge electrical conductivity electrical impedance electric field electric potential emission flow rate (mass) flow rate (volume) fluidity frequency hardness heat capacity inductance intrinsic impedance intensity irradiance length location luminance luminescence luster malleability magnetic field magnetic flux mass melting point moment momentum opacity permeability permittivity plasticity pressure radiance resistivity reflectivity refractive index solubility specific heat spin strength stiffness temperature tension thermal conductivity (and resistance) velocity viscosity volume wave impedance
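To make the intensive/extensive distinction described earlier in this article concrete, here is a minimal sketch (the sample values are invented for the example): combining two samples of the same material adds the extensive properties (mass, volume), while the intensive property (density) stays the same.

```python
# Two samples of the same material (hypothetical values).
sample_a = {"mass_kg": 2.0, "volume_m3": 0.001}   # extensive properties
sample_b = {"mass_kg": 6.0, "volume_m3": 0.003}

def density(sample):
    """Intensive property: the ratio of two extensive properties."""
    return sample["mass_kg"] / sample["volume_m3"]

combined = {
    "mass_kg": sample_a["mass_kg"] + sample_b["mass_kg"],        # extensive: adds
    "volume_m3": sample_a["volume_m3"] + sample_b["volume_m3"],  # extensive: adds
}

print(density(sample_a), density(sample_b), density(combined))  # 2000.0 2000.0 2000.0 (kg/m^3)
```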
Physical sciences
Physics basics: General
Physics
179716
https://en.wikipedia.org/wiki/Uppsala%20General%20Catalogue
Uppsala General Catalogue
The Uppsala General Catalogue of Galaxies (UGC) is a catalogue of 12,921 galaxies visible from the northern hemisphere. It was first published in 1973. The catalogue includes essentially all galaxies north of declination −02°30′ and to a limiting diameter of 1.0 arcminute or to a limiting apparent magnitude of 14.5. The primary source of data is the blue prints of the Palomar Observatory Sky Survey (POSS). It also includes galaxies smaller than 1.0 arcminute in diameter but brighter than 14.5 magnitude from the Catalogue of Galaxies and of Clusters of Galaxies (CGCG). The catalogue contains descriptions of the galaxies and their surrounding areas, plus conventional system classifications and position angles for flattened galaxies. Galaxy diameters are included, and the classifications and descriptions are given in such a way as to provide as accurate an account as possible of the appearance of the galaxies on the prints. The accuracy of coordinates is only what is necessary for identification purposes. The catalogue was edited by Peter Nilson, at the time a Doctor of Astronomy and a researcher at Uppsala, who had already published some essays about the history of his science; a couple of years later he left his career as a professional astronomer behind and became a full-time writer, novelist and essayist, although the relationship of humans to space and cosmology, and the history of science, remained powerful themes in his later writing. Addendum There is an addendum to the catalogue called Uppsala General Catalogue Addendum, which is abbreviated as UGCA.
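As an illustration of how the selection criteria above combine, here is a minimal sketch (hypothetical field names and catalogue rows, not an official UGC tool) that keeps galaxies north of declination −02°30′ that pass either the 1.0 arcminute diameter limit or the 14.5 magnitude limit.

```python
# Hypothetical galaxy records; the fields are illustrative, not taken from the catalogue files.
galaxies = [
    {"name": "A", "dec_deg": 15.2, "diameter_arcmin": 1.4, "mag": 15.1},
    {"name": "B", "dec_deg": -1.0, "diameter_arcmin": 0.6, "mag": 14.2},
    {"name": "C", "dec_deg": -10.0, "diameter_arcmin": 2.0, "mag": 12.0},
]

DEC_LIMIT_DEG = -(2 + 30 / 60)  # declination -02°30' as decimal degrees

def meets_ugc_criteria(g):
    """North of the declination limit AND (large enough OR bright enough)."""
    in_zone = g["dec_deg"] >= DEC_LIMIT_DEG
    large_enough = g["diameter_arcmin"] >= 1.0
    bright_enough = g["mag"] <= 14.5  # smaller magnitude means brighter
    return in_zone and (large_enough or bright_enough)

print([g["name"] for g in galaxies if meets_ugc_criteria(g)])  # ['A', 'B']
```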
Physical sciences
Surveys and Catalogs
Astronomy
179863
https://en.wikipedia.org/wiki/Climate%20of%20Antarctica
Climate of Antarctica
The climate of Antarctica is the coldest on Earth. The continent is also extremely dry (it is a desert), averaging of precipitation per year. Snow rarely melts on most parts of the continent, and, after being compressed, becomes the glacier ice that makes up the ice sheet. Weather fronts rarely penetrate far into the continent, because of the katabatic winds. Most of Antarctica has an ice-cap climate (Köppen classification EF) with extremely cold and dry weather. Temperature The highest temperature ever recorded on Antarctica was recorded at Signy Research Station, Signy Island on 30 January 1982. The highest temperature on the Antarctic mainland was at the Esperanza Base (Argentina) on 6 February 2020. The lowest air temperature record, the lowest reliably measured temperature on Antarctica was set on 21 July 1983, when a temperature of was observed at Vostok Station. For comparison, this is colder than subliming dry ice (at sea level pressure). The elevation of the location is . Satellite measurements have identified even lower ground temperatures, with having been observed at the cloud-free East Antarctic Plateau on 10 August 2010. The lowest recorded temperature of any location on Earth's surface at was revised with new data in 2018 in nearly 100 locations, ranging from to . This unnamed part of the Antarctic plateau, between Dome A and Dome F, was measured on 10 August 2010, and the temperature was deduced from radiance measured by the Landsat 8 and other satellites. It was discovered during a National Snow and Ice Data Center review of stored data in December 2013 but revised by researchers on 25 June 2018. This temperature is not directly comparable to the –89.2 °C reading quoted above, since it is a skin temperature deduced from satellite-measured upwelling radiance, rather than a thermometer-measured temperature of the air above the ground surface. The mean annual temperature of the interior is . The coast is warmer; on the coast Antarctic average temperatures are around (in the warmest parts of Antarctica) and in the elevated inland they average about in Vostok. Monthly means at McMurdo Station range from in August to in January. At the South Pole, the highest temperature ever recorded was on 25 December 2011. Along the Antarctic Peninsula, temperatures as high as have been recorded, though the summer temperature is below most of the time. Severe low temperatures vary with latitude, elevation, and distance from the ocean. East Antarctica is colder than West Antarctica because of its higher elevation. The Antarctic Peninsula has the most moderate climate. Higher temperatures occur in January along the coast and average slightly below freezing. Precipitation The total precipitation on Antarctica, averaged over the entire continent, is about per year (Vaughan et al., J. Clim., 1999). The actual rates vary widely, from high values over the Peninsula ( a year) to very low values (as little as in the high interior (Bromwich, Reviews of Geophysics, 1988). Areas that receive less than of precipitation per year are classified as deserts. Almost all Antarctic precipitation falls as snow. Rainfall is rare and mainly occurs during the summer in coastal areas and surrounding islands. Note that the quoted precipitation is a measure of its equivalence to water, rather than being the actual depth of snow. The air in Antarctica is also very dry. 
The low temperatures result in a very low absolute humidity, which means that dry skin and cracked lips are a continual problem for scientists and expeditioners working on the continent. Weather condition classification The weather in Antarctica can be highly variable, and the weather conditions can often change dramatically in short periods of time. There are various classifications for describing weather conditions in Antarctica; restrictions given to workers during the different conditions vary by station and nation. Ice cover Nearly all of Antarctica is covered by a sheet of ice that is, on average, at least thick. Antarctica contains 90% of the world's ice and more than 70% of its fresh water. If all the land-ice covering Antarctica were to melt — around of ice — the seas would rise by over . The Antarctic is so cold that even with increases of a few degrees, temperatures would generally remain below the melting point of ice. Higher temperatures are expected to lead to more precipitation, which takes the form of snow. This would increase the amount of ice in Antarctica, offsetting approximately one third of the expected sea level rise from thermal expansion of the oceans. During a recent decade, East Antarctica thickened at an average rate of about per year while West Antarctica showed an overall thinning of per year. For the contribution of Antarctica to present and future sea level change, see sea level rise. Because ice flows, albeit slowly, the ice within the ice sheet is younger than the age of the sheet itself. 1 The total ice volume is different from the sum of the component parts because individual figures have been rounded. Ice shelves About 75% of the coastline of Antarctica is ice shelf. The majority of ice shelf consists of floating ice, and a lesser amount consists of glaciers that move slowly from the land mass into the sea. Ice shelves lose mass through breakup of glacial ice (calving), or basal melting due to warm ocean water under the ice. Melting or breakup of floating shelf ice does not directly affect global sea levels; however, ice shelves have a buttressing effect on the ice flow behind them. If ice shelves break up, the ice flow behind them may accelerate, resulting in increasing melt of the Antarctic ice sheet and an increasing contribution to sea level rise. Known changes in coastline ice around the Antarctic Peninsula: 1936–1989: Wordie Ice Shelf significantly reduced in size. 1995: Ice in the Prince Gustav Channel disintegrated. Parts of the Larsen Ice Shelf broke up in recent decades. 1995: The Larsen A ice shelf disintegrated in January 1995. 2001: of the Larsen B ice shelf disintegrated in February 2001. It had been gradually retreating before the breakup event. 2015: A study concluded that the remaining Larsen B ice-shelf will disintegrate by the end of the decade, based on observations of faster flow and rapid thinning of glaciers in the area. The George VI Ice Shelf, which may be on the brink of instability, has probably existed for approximately 8,000 years, after melting 1,500 years earlier. Warm ocean currents may have been the cause of the melting. Not only are the ice sheets losing mass, they are losing mass at an accelerating rate. Climate change
Physical sciences
Climates
Earth science
179919
https://en.wikipedia.org/wiki/Solenoid
Solenoid
A solenoid () is a type of electromagnet formed by a helical coil of wire whose length is substantially greater than its diameter, which generates a controlled magnetic field. The coil can produce a uniform magnetic field in a volume of space when an electric current is passed through it. André-Marie Ampère coined the term solenoid in 1823, having conceived of the device in 1820. The French term originally created by Ampère is solénoïde, which is a French transliteration of the Greek word σωληνοειδὴς which means tubular. The helical coil of a solenoid does not necessarily need to revolve around a straight-line axis; for example, William Sturgeon's electromagnet of 1824 consisted of a solenoid bent into a horseshoe shape (similarly to an arc spring). Solenoids provide magnetic focusing of electrons in vacuums, notably in television camera tubes such as vidicons and image orthicons. Electrons take helical paths within the magnetic field. These solenoids, focus coils, surround nearly the whole length of the tube. Physics Infinite continuous solenoid An infinite solenoid has infinite length but finite diameter. "Continuous" means that the solenoid is not formed by discrete finite-width coils but by many infinitely thin coils with no space between them; in this abstraction, the solenoid is often viewed as a cylindrical sheet of conductive material. The magnetic field inside an infinitely long solenoid is homogeneous and its strength neither depends on the distance from the axis nor on the solenoid's cross-sectional area. This is a derivation of the magnetic flux density around a solenoid that is long enough so that fringe effects can be ignored. In Figure 1, we immediately know that the flux density vector points in the positive z direction inside the solenoid, and in the negative z direction outside the solenoid. We confirm this by applying the right hand grip rule for the field around a wire. If we wrap our right hand around a wire with the thumb pointing in the direction of the current, the curl of the fingers shows how the field behaves. Since we are dealing with a long solenoid, all of the components of the magnetic field not pointing upwards cancel out by symmetry. Outside, a similar cancellation occurs, and the field is only pointing downwards. Now consider the imaginary loop c that is located inside the solenoid. By Ampère's law, we know that the line integral of B (the magnetic flux density vector) around this loop is zero, since it encloses no electrical currents (it can be also assumed that the circuital electric field passing through the loop is constant under such conditions: a constant or constantly changing current through the solenoid). We have shown above that the field is pointing upwards inside the solenoid, so the horizontal portions of loop c do not contribute anything to the integral. Thus the integral of the up side 1 is equal to the integral of the down side 2. Since we can arbitrarily change the dimensions of the loop and get the same result, the only physical explanation is that the integrands are actually equal, that is, the magnetic field inside the solenoid is radially uniform. Note, though, that nothing prohibits it from varying longitudinally, which in fact, it does. A similar argument can be applied to the loop a to conclude that the field outside the solenoid is radially uniform or constant. 
This last result, which holds strictly true only near the center of the solenoid where the field lines are parallel to its length, is important as it shows that the flux density outside is practically zero since the radii of the field outside the solenoid will tend to infinity. An intuitive argument can also be used to show that the flux density outside the solenoid is actually zero. Magnetic field lines only exist as loops; they cannot diverge from or converge to a point like electric field lines can (see Gauss's law for magnetism). The magnetic field lines follow the longitudinal path of the solenoid inside, so they must go in the opposite direction outside of the solenoid so that the lines can form loops. However, the volume outside the solenoid is much greater than the volume inside, so the density of magnetic field lines outside is greatly reduced. Now recall that the field outside is constant. In order for the total number of field lines to be conserved, the field outside must go to zero as the solenoid gets longer. Of course, if the solenoid is constructed as a wire spiral (as often done in practice), then it emanates an outside field the same way as a single wire, due to the current flowing overall down the length of the solenoid. Applying Ampère's circuital law to the solenoid (see figure on the right) gives us $$B l = \mu_0 N i,$$ where $B$ is the magnetic flux density, $l$ is the length of the solenoid, $\mu_0$ is the magnetic constant, $N$ the number of turns, and $i$ the current. From this we get $$B = \mu_0 \frac{N i}{l}.$$ This equation is valid for a solenoid in free space, which means the permeability of the magnetic path is the same as permeability of free space, μ0. If the solenoid is immersed in a material with relative permeability μr, then the field is increased by that amount: $$B = \mu_0 \mu_r \frac{N i}{l}.$$ In most solenoids, the solenoid is not immersed in a higher permeability material, but rather some portion of the space around the solenoid has the higher permeability material and some is just air (which behaves much like free space). In that scenario, the full effect of the high permeability material is not seen, but there will be an effective (or apparent) permeability μeff such that 1 ≤ μeff ≤ μr. The inclusion of a ferromagnetic core, such as iron, increases the magnitude of the magnetic flux density in the solenoid and raises the effective permeability of the magnetic path. This is expressed by the formula $$B = \mu_0 \mu_{\mathrm{eff}} \frac{N i}{l},$$ where μeff is the effective or apparent permeability of the core. The effective permeability is a function of the geometric properties of the core and its relative permeability. The terms relative permeability (a property of just the material) and effective permeability (a property of the whole structure) are often confused; they can differ by many orders of magnitude. For an open magnetic structure, the relationship between the effective permeability and relative permeability is given as follows: $$\mu_{\mathrm{eff}} = \frac{\mu_r}{1 + k(\mu_r - 1)},$$ where k is the demagnetization factor of the core. Finite continuous solenoid A finite solenoid is a solenoid with finite length. Continuous means that the solenoid is not formed by discrete coils but by a sheet of conductive material. We assume the current is uniformly distributed on the surface of the solenoid, with a surface current density K; in cylindrical coordinates, $\vec{K} = \dfrac{N i}{l}\,\hat{\phi}$. The magnetic field can be found from the vector potential, which for a finite solenoid with radius R and length l is expressed in cylindrical coordinates in terms of the complete elliptic integrals of the first, second, and third kind.
Taking the curl of this vector potential gives the magnetic flux density. On the symmetry axis the radial component vanishes, and (measuring the axial coordinate z from the centre of the coil) the axial field component is $$B_z(z) = \frac{\mu_0 N i}{2 l}\left(\frac{z + l/2}{\sqrt{(z + l/2)^2 + R^2}} - \frac{z - l/2}{\sqrt{(z - l/2)^2 + R^2}}\right).$$ Inside the solenoid, far away from the ends ($R \ll l$ and $|z| \ll l/2$), this tends towards the constant value $B = \mu_0 N i / l$. Short solenoid estimate For the case in which the radius is much larger than the length of the solenoid ($R \gg l$), the magnetic flux density through the centre of the solenoid (in the z direction, parallel to the solenoid's length, where the coil is centered at z = 0) can be estimated as the flux density of a single circular conductor loop: $$B_z \approx \frac{\mu_0 N i R^2}{2\,(R^2 + z^2)^{3/2}}.$$ Irregular solenoids Within the category of finite solenoids, there are those that are sparsely wound with a single pitch, those that are sparsely wound with varying pitches (varied-pitch solenoid), and those with varying radii for different loops (non-cylindrical solenoids). They are called irregular solenoids. They have found applications in different areas, such as sparsely wound solenoids for wireless power transfer, varied-pitch solenoids for magnetic resonance imaging (MRI), and non-cylindrical solenoids for other medical devices. The calculation of the intrinsic inductance and capacitance cannot be done using the methods for conventional, tightly wound solenoids; new calculation methods have been proposed for both the intrinsic inductance and the capacitance. Inductance As shown above, the magnetic flux density within the coil is practically constant and given by $$B = \mu_0 \frac{N i}{l},$$ where μ0 is the magnetic constant, N the number of turns, i the current and l the length of the coil. Ignoring end effects, the total magnetic flux through the coil is obtained by multiplying the flux density B by the cross-section area A: $$\Phi = \mu_0 \frac{N i}{l} A.$$ Combining this with the definition of inductance, $L = \dfrac{N \Phi}{i}$, the inductance of a solenoid follows as $$L = \mu_0 \frac{N^2 A}{l}.$$ A table of inductance for short solenoids of various diameter to length ratios has been calculated by Dellinger, Whittmore, and Ould. This, and the inductance of more complicated shapes, can be derived from Maxwell's equations. For rigid air-core coils, inductance is a function of coil geometry and number of turns, and is independent of current. Similar analysis applies to a solenoid with a magnetic core, but only if the length of the coil is much greater than the product of the relative permeability of the magnetic core and the diameter. That limits the simple analysis to low-permeability cores, or extremely long thin solenoids. The presence of a core can be taken into account in the above equations by replacing the magnetic constant μ0 with μ or μ0μr, where μ represents permeability and μr relative permeability. Note that since the permeability of ferromagnetic materials changes with applied magnetic flux, the inductance of a coil with a ferromagnetic core will generally vary with current.
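Pulling the two headline results above together, here is a minimal numerical sketch (illustrative values only, assuming an air core and ignoring end effects) that evaluates the interior flux density B = μ0Ni/l and the inductance L = μ0N²A/l for a long solenoid.

```python
import math

MU_0 = 4 * math.pi * 1e-7  # magnetic constant, in H/m (equivalently T*m/A)

# Hypothetical air-core solenoid: 500 turns, 0.25 m long, 1 cm radius, 2 A current.
N = 500          # number of turns
l = 0.25         # length in metres
r = 0.01         # coil radius in metres
i = 2.0          # current in amperes

A = math.pi * r**2        # cross-sectional area, m^2
B = MU_0 * N * i / l      # interior flux density, teslas (uniform, end effects ignored)
L = MU_0 * N**2 * A / l   # inductance, henries

print(f"B = {B*1e3:.2f} mT, L = {L*1e6:.1f} uH")  # roughly 5.03 mT and 395 uH
```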
Technology
Components
null
180090
https://en.wikipedia.org/wiki/Jansky
Jansky
The jansky (symbol Jy, plural janskys) is a non-SI unit of spectral flux density, or spectral irradiance, used especially in radio astronomy. It is equivalent to 10⁻²⁶ watts per square metre per hertz. The spectral flux density or monochromatic flux, $S_\nu$, of a source is the integral of the spectral radiance, $B_\nu$, over the source solid angle: $$S_\nu = \iint_{\text{source}} B_\nu(\theta, \phi)\, d\Omega.$$ The unit is named after pioneering US radio astronomer Karl Guthe Jansky and is defined as $$1\ \mathrm{Jy} = 10^{-26}\ \mathrm{W{\cdot}m^{-2}{\cdot}Hz^{-1}}.$$ Since the jansky is obtained by integrating over the whole source solid angle, it is most simply used to describe point sources; for example, the Third Cambridge Catalogue of Radio Sources (3C) reports results in janskys. For extended sources, the surface brightness is often described with units of janskys per solid angle; for example, far-infrared (FIR) maps from the IRAS satellite are in megajanskys per steradian (MJy⋅sr⁻¹). Although extended sources at all wavelengths can be reported with these units, for radio-frequency maps, extended sources have traditionally been described in terms of a brightness temperature; for example the Haslam et al. 408 MHz all-sky continuum survey is reported in terms of a brightness temperature in kelvin. Unit conversions The jansky is not a standard SI unit, so it may be necessary to convert measurements made in janskys to the SI equivalent in terms of watts per square metre per hertz (W·m⁻²·Hz⁻¹). However, other unit conversions are possible with respect to measuring this unit. AB magnitude The flux density in janskys can be converted to a magnitude basis, for suitable assumptions about the spectrum. For instance, converting an AB magnitude to a flux density in microjanskys is straightforward: $$S_\nu\,[\mu\mathrm{Jy}] = 10^{(23.9 - m_{\mathrm{AB}})/2.5}.$$ dBW·m⁻²·Hz⁻¹ The linear flux density in janskys can be converted to a decibel basis, suitable for use in fields of telecommunication and radio engineering. 1 jansky is equal to −260 dBW·m⁻²·Hz⁻¹, or −230 dBm·m⁻²·Hz⁻¹: $$S\,[\mathrm{dBW{\cdot}m^{-2}{\cdot}Hz^{-1}}] = 10\log_{10}\!\bigl(S_\nu\,[\mathrm{Jy}]\bigr) - 260, \qquad S\,[\mathrm{dBm{\cdot}m^{-2}{\cdot}Hz^{-1}}] = 10\log_{10}\!\bigl(S_\nu\,[\mathrm{Jy}]\bigr) - 230.$$ Temperature units The spectral radiance in janskys per steradian can be converted to a brightness temperature, useful in radio and microwave astronomy. Starting with Planck's law, we see $$I_\nu = \frac{2 h \nu^3}{c^2}\,\frac{1}{e^{h\nu/(k_{\mathrm B} T)} - 1}.$$ This can be solved for temperature, giving $$T = \frac{h\nu}{k_{\mathrm B}\,\ln\!\left(1 + \dfrac{2 h \nu^3}{I_\nu c^2}\right)}.$$ In the low-frequency, high-temperature regime, when $h\nu \ll k_{\mathrm B} T$, we can use the asymptotic expression: $$T \approx \frac{I_\nu c^2}{2 k_{\mathrm B} \nu^2} + \frac{h\nu}{2 k_{\mathrm B}}.$$ A less accurate form is $$T \approx \frac{I_\nu c^2}{2 k_{\mathrm B} \nu^2},$$ which can be derived from the Rayleigh–Jeans law $$I_\nu = \frac{2 \nu^2 k_{\mathrm B} T}{c^2}.$$ Usage The flux to which the jansky refers can be in any form of radiant energy. It was created for and is still most frequently used in reference to electromagnetic energy, especially in the context of radio astronomy. The brightest astronomical radio sources have flux densities of the order of 1–100 janskys. For example, the Third Cambridge Catalogue of Radio Sources lists some 300 to 400 radio sources in the Northern Hemisphere brighter than 9 Jy at 159 MHz. This range makes the jansky a suitable unit for radio astronomy. Gravitational waves also carry energy, so their flux density can also be expressed in terms of janskys. Typical signals on Earth are expected to be 10²⁰ Jy or more. However, because of the poor coupling of gravitational waves to matter, such signals are difficult to detect. When measuring broadband continuum emissions, where the energy is roughly evenly distributed across the detector bandwidth, the detected signal will increase in proportion to the bandwidth of the detector (as opposed to signals with bandwidth narrower than the detector bandpass).
To calculate the flux density in janskys, the total power detected (in watts) is divided by the receiver collecting area (in square meters), and then divided by the detector bandwidth (in hertz). The flux density of astronomical sources is many orders of magnitude below 1 W·m⁻²·Hz⁻¹, so the result is multiplied by 10²⁶ to get a more appropriate unit for natural astrophysical phenomena. The millijansky, mJy, was sometimes referred to as a milli-flux unit (mfu) in older astronomical literature. Orders of magnitude Note: Unless noted, all values are as seen from the Earth's surface.
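As a numerical illustration of the conversion just described (hypothetical instrument values, chosen only for the example), dividing the detected power by collecting area and bandwidth and scaling by 10²⁶ gives the flux density in janskys; the same sketch also applies the AB magnitude relation quoted earlier.

```python
def watts_to_jansky(power_w, area_m2, bandwidth_hz):
    """Convert total detected power to a flux density in janskys (1 Jy = 1e-26 W m^-2 Hz^-1)."""
    return power_w / (area_m2 * bandwidth_hz) * 1e26

def ab_mag_to_microjansky(m_ab):
    """AB magnitude to flux density in microjanskys: m_AB = 23.9 corresponds to 1 uJy."""
    return 10 ** ((23.9 - m_ab) / 2.5)

# Hypothetical example: 2e-18 W collected by a 100 m^2 dish over a 1 MHz band.
print(watts_to_jansky(2e-18, 100.0, 1e6))   # 2.0 Jy
print(ab_mag_to_microjansky(23.9))          # 1.0 uJy
print(ab_mag_to_microjansky(21.4))          # 10.0 uJy
```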
Physical sciences
Specific intensity
Basics and measurement
180121
https://en.wikipedia.org/wiki/Medication
Medication
A medication (also called medicament, medicine, pharmaceutical drug, medicinal product, medicinal drug or simply drug) is a drug used to diagnose, cure, treat, or prevent disease. Drug therapy (pharmacotherapy) is an important part of the medical field and relies on the science of pharmacology for continual advancement and on pharmacy for appropriate management. Drugs are classified in many ways. One of the key divisions is by level of control, which distinguishes prescription drugs (those that a pharmacist dispenses only on the medical prescription) from over-the-counter drugs (those that consumers can order for themselves). Medicines may be classified by mode of action, route of administration, biological system affected, or therapeutic effects. The World Health Organization keeps a list of essential medicines. Drug discovery and drug development are complex and expensive endeavors undertaken by pharmaceutical companies, academic scientists, and governments. As a result of this complex path from discovery to commercialization, partnering has become a standard practice for advancing drug candidates through development pipelines. Governments generally regulate what drugs can be marketed, how drugs are marketed, and in some jurisdictions, drug pricing. Controversies have arisen over drug pricing and disposal of used medications. Definition Medication is a medicine or a chemical compound used to treat or cure illness. According to Encyclopædia Britannica, medication is "a substance used in treating a disease or relieving pain". As defined by the National Cancer Institute, dosage forms of medication can include tablets, capsules, liquids, creams, and patches. Medications can be administered in different ways, such as by mouth, by infusion into a vein, or by drops put into the ear or eye. A medication that does not contain an active ingredient and is used in research studies is called a placebo. In Europe, the term is "medicinal product", and it is defined by EU law as: "Any substance or combination of substances presented as having properties for treating or preventing disease in human beings; or" "Any substance or combination of substances which may be used in or administered to human beings either with a view to restoring, correcting, or modifying physiological functions by exerting a pharmacological, immunological or metabolic action or to making a medical diagnosis." In the US, a "drug" is: A substance (other than food) intended to affect the structure or any function of the body. A substance intended for use as a component of a medicine but not a device or a component, part, or accessory of a device. A substance intended for use in the diagnosis, cure, mitigation, treatment, or prevention of disease. A substance recognized by an official pharmacopeia or formulary. Biological products are included within this definition and are generally covered by the same laws and regulations, but differences exist regarding their manufacturing processes (chemical process versus biological process). Usage Drug use among elderly Americans has been studied; in a group of 2,377 people with an average age of 71 surveyed between 2005 and 2006, 84% took at least one prescription drug, 44% took at least one over-the-counter (OTC) drug, and 52% took at least one dietary supplement; in a group of 2245 elderly Americans (average age of 71) surveyed over the period 2010 – 2011, those percentages were 88%, 38%, and 64%. 
Classification One of the key classifications is between traditional small molecule drugs; usually derived from chemical synthesis and biological medical products; which include recombinant proteins, vaccines, blood products used therapeutically (such as IVIG), gene therapy, and cell therapy (for instance, stem cell therapies). Pharmaceuticals or drugs or medicines are classified into various other groups besides their origin on the basis of pharmacological properties like mode of action and their pharmacological action or activity, such as by chemical properties, mode or route of administration, biological system affected, or therapeutic effects. An elaborate and widely used classification system is the Anatomical Therapeutic Chemical Classification System (ATC system). The World Health Organization keeps a list of essential medicines. A sampling of classes of medicine includes: Antipyretics: reducing fever (pyrexia/pyresis) Analgesics: reducing pain (painkillers) Antimalarial drugs: treating malaria Antibiotics: inhibiting germ growth Antiseptics: prevention of germ growth near burns, cuts,and wounds Mood stabilizers: lithium and valproate Hormone replacements: Premarin Oral contraceptives: Enovid, "biphasic" pill, and "triphasic" pill Stimulants: methylphenidate, amphetamine Tranquilizers: meprobamate, chlorpromazine, reserpine, chlordiazepoxide, diazepam, and alprazolam Statins: lovastatin, pravastatin, and simvastatin Pharmaceuticals may also be described as "specialty", independent of other classifications, which is an ill-defined class of drugs that might be difficult to administer, require special handling during administration, require patient monitoring during and immediately after administration, have particular regulatory requirements restricting their use, and are generally expensive relative to other drugs. Types of medicines For the digestive system Lower digestive tract: laxatives, antispasmodics, antidiarrhoeals, bile acid sequestrants, opioids. Upper digestive tract: antacids, reflux suppressants, antiflatulents, antidopaminergics, proton pump inhibitors (PPIs), H2-receptor antagonists, cytoprotectants, prostaglandin analogues. For the cardiovascular system Affecting blood pressure/(antihypertensive drugs): ACE inhibitors, angiotensin receptor blockers, beta-blockers, α blockers, calcium channel blockers, thiazide diuretics, loop diuretics, aldosterone inhibitors. Coagulation: anticoagulants, heparin, antiplatelet drugs, fibrinolytics, anti-hemophilic factors, haemostatic drugs. General: β-receptor blockers ("beta blockers"), calcium channel blockers, diuretics, cardiac glycosides, antiarrhythmics, nitrate, antianginals, vasoconstrictors, vasodilators. HMG-CoA reductase inhibitors (statins) for lowering LDL cholesterol inhibitors: hypolipidaemic agents. For the central nervous system Drugs affecting the central nervous system include psychedelics, hypnotics, anaesthetics, antipsychotics, eugeroics, antidepressants (including tricyclic antidepressants, monoamine oxidase inhibitors, lithium salts, and selective serotonin reuptake inhibitors (SSRIs)), antiemetics, anticonvulsants/antiepileptics, anxiolytics, barbiturates, movement disorder (e.g., Parkinson's disease) drugs, nootropics, stimulants (including amphetamines), benzodiazepines, cyclopyrrolones, dopamine antagonists, antihistamines, cholinergics, anticholinergics, emetics, cannabinoids, and 5-HT (serotonin) antagonists. For pain The main classes of painkillers are NSAIDs, opioids, and local anesthetics. 
For consciousness (anesthetic drugs) Some anesthetics include benzodiazepines and barbiturates. For musculoskeletal disorders The main categories of drugs for musculoskeletal disorders are: NSAIDs (including COX-2 selective inhibitors), muscle relaxants, neuromuscular drugs, and anticholinesterases. For the eye Anti-allergy: mast cell inhibitors. Anti-fungal: imidazoles, polyenes. Anti-glaucoma: adrenergic agonists, beta-blockers, carbonic anhydrase inhibitors/hyperosmotics, cholinergics, miotics, parasympathomimetics, prostaglandin agonists/prostaglandin inhibitors, nitroglycerin. Anti-inflammatory: NSAIDs, corticosteroids. Antibacterial: antibiotics, topical antibiotics, sulfa drugs, aminoglycosides, fluoroquinolones. Antiviral drugs. Diagnostic: topical anesthetics, sympathomimetics, parasympatholytics, mydriatics, cycloplegics. General: adrenergic neurone blocker, astringent. For the ear, nose, and oropharynx Antibiotics, sympathomimetics, antihistamines, anticholinergics, NSAIDs, corticosteroids, antiseptics, local anesthetics, antifungals, and cerumenolytics. For the respiratory system Bronchodilators, antitussives, mucolytics, decongestants, inhaled and systemic corticosteroids, beta2-adrenergic agonists, anticholinergics, mast cell stabilizers, leukotriene antagonists. For endocrine problems Androgens, antiandrogens, estrogens, gonadotropin, corticosteroids, human growth hormone, insulin, antidiabetics (sulfonylureas, biguanides/metformin, thiazolidinediones, insulin), thyroid hormones, antithyroid drugs, calcitonin, diphosphonate, vasopressin analogues. For the reproductive system or urinary system Antifungal, alkalinizing agents, quinolones, antibiotics, cholinergics, anticholinergics, antispasmodics, 5-alpha reductase inhibitor, selective alpha-1 blockers, sildenafils, fertility medications. For contraception Hormonal contraception. Ormeloxifene. Spermicide. For obstetrics and gynecology NSAIDs, anticholinergics, haemostatic drugs, antifibrinolytics, Hormone Replacement Therapy (HRT), bone regulators, beta-receptor agonists, follicle stimulating hormone, luteinising hormone, LHRH, gamolenic acid, gonadotropin release inhibitor, progestogen, dopamine agonists, oestrogen, prostaglandins, gonadorelin, clomiphene, tamoxifen, diethylstilbestrol. For the skin Emollients, anti-pruritics, antifungals, antiseptics, scabicides, pediculicides, tar products, vitamin A derivatives, vitamin D analogues, keratolytics, abrasives, systemic antibiotics, topical antibiotics, hormones, desloughing agents, exudate absorbents, fibrinolytics, proteolytics, sunscreens, antiperspirants, corticosteroids, immune modulators. For infections and infestations Antibiotics, antifungals, antileprotics, antituberculous drugs, antimalarials, anthelmintics, amoebicides, antivirals, antiprotozoals, probiotics, prebiotics, antitoxins, and antivenoms. For the immune system Vaccines, immunoglobulins, immunosuppressants, interferons, and monoclonal antibodies. For allergic disorders Anti-allergics, antihistamines, NSAIDs, corticosteroids. For nutrition Tonics, electrolytes and mineral preparations (including iron preparations and magnesium preparations), parenteral nutrition, vitamins, anti-obesity drugs, anabolic drugs, haematopoietic drugs, food product drugs. For neoplastic disorders Cytotoxic drugs, therapeutic antibodies, sex hormones, aromatase inhibitors, somatostatin inhibitors, recombinant interleukins, G-CSF, erythropoietin. For diagnostics Contrast media. 
For euthanasia A euthanaticum is used for euthanasia and physician-assisted suicide. Euthanasia is not permitted by law in many countries, and consequently, medicines will not be licensed for this use in those countries. Administration A single drug may contain a single active ingredient or multiple active ingredients. Administration is the process by which a patient takes a medicine. There are three major categories of drug administration: enteral (via the human gastrointestinal tract), injection into the body, and other routes (dermal, nasal, ophthalmic, otologic, and urogenital). Oral administration, the most common form of enteral administration, can be performed using various dosage forms, including tablets or capsules and liquids such as syrups or suspensions. Other ways to take medication include buccally (placed inside the cheek), sublingually (placed underneath the tongue), eye and ear drops (dropped into the eye or ear), and transdermally (applied to the skin). Medicines can also be administered in one dose, as a bolus. Administration frequencies are often abbreviated from Latin; for example, "every 8 hours" is written Q8H, from quaque VIII hora. Dosing frequencies are often expressed as the number of times a drug is used per day (e.g., four times a day). A schedule may also include event-related information (e.g., 1 hour before meals, in the morning, at bedtime) complementary to an interval, although equivalent expressions may have different implications (e.g., every 8 hours versus 3 times a day). Drug discovery In the fields of medicine, biotechnology, and pharmacology, drug discovery is the process by which new drugs are discovered. Historically, drugs were discovered by identifying the active ingredient from traditional remedies or by serendipitous discovery. Later, chemical libraries of synthetic small molecules, natural products, or extracts were screened in intact cells or whole organisms to identify substances that have a desirable therapeutic effect, in a process known as classical pharmacology. Since the sequencing of the human genome, which allowed rapid cloning and synthesis of large quantities of purified proteins, it has become common practice to use high-throughput screening of large compound libraries against isolated biological targets which are hypothesized to be disease-modifying, in a process known as reverse pharmacology. Hits from these screens are then tested in cells and then in animals for efficacy. Even more recently, scientists have been able to understand the shape of biological molecules at the atomic level and to use that knowledge to design (see drug design) drug candidates. Modern drug discovery involves the identification of screening hits, medicinal chemistry, and optimization of those hits to increase the affinity, selectivity (to reduce the potential of side effects), efficacy/potency, metabolic stability (to increase the half-life), and oral bioavailability. Once a compound that fulfills all of these requirements has been identified, it will begin the process of drug development prior to clinical trials. One or more of these steps may, but not necessarily, involve computer-aided drug design. Despite advances in technology and understanding of biological systems, drug discovery is still a lengthy, "expensive, difficult, and inefficient process" with a low rate of new therapeutic discovery. In 2010, the research and development cost of each new molecular entity (NME) was approximately US$1.8 billion.
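The Latin dosing abbreviations discussed earlier in this passage follow a small, regular pattern, and the difference between interval-based and per-day expressions can be made concrete. Below is a minimal illustrative sketch in Python; the abbreviations beyond Q8H (QD, BID, TID, QID) are standard shorthand added for illustration, and the table is not a clinical reference.

```python
# Minimal sketch: expanding common Latin dosing abbreviations into dosing
# intervals, to show why "every 8 hours" (Q8H) and "three times a day" (TID)
# both give 3 doses per day but imply different spacing. Illustrative only.

FREQUENCIES = {
    "QD":  {"doses_per_day": 1, "interval_hours": 24},    # quaque die
    "BID": {"doses_per_day": 2, "interval_hours": 12},    # bis in die
    "TID": {"doses_per_day": 3, "interval_hours": None},  # ter in die (waking hours)
    "Q8H": {"doses_per_day": 3, "interval_hours": 8},     # quaque VIII hora (round the clock)
    "QID": {"doses_per_day": 4, "interval_hours": None},  # quater in die
}

def describe(abbrev: str) -> str:
    f = FREQUENCIES[abbrev.upper()]
    spacing = (f"every {f['interval_hours']} h" if f["interval_hours"]
               else "spread over waking hours")
    return f"{abbrev}: {f['doses_per_day']} dose(s) per day, {spacing}"

if __name__ == "__main__":
    for abbr in ("Q8H", "TID"):
        print(describe(abbr))
```

As the output shows, Q8H and TID both amount to three doses per day, but only Q8H implies round-the-clock spacing, which is the distinction the text draws between equivalent-looking expressions.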
Drug discovery is done by pharmaceutical companies, sometimes with research assistance from universities. The "final product" of drug discovery is a patent on the potential drug. The drug requires very expensive Phase I, II, and III clinical trials, and most of them fail. Small companies have a critical role, often then selling the rights to larger companies that have the resources to run the clinical trials. Drug discovery is distinct from drug development: drug discovery is the process of identifying a new medicine, while drug development is the process of bringing a new drug molecule into clinical practice. In its broad definition, drug development encompasses all steps from the basic research process of finding a suitable molecular target to supporting the drug's commercial launch. Development Drug development is the process of bringing a new drug to the market once a lead compound has been identified through the process of drug discovery. It includes pre-clinical research (on microorganisms and animals) and clinical trials (on humans) and may include the step of obtaining regulatory approval to market the drug. Drug development process Discovery: the drug development process starts with discovery, the identification of a new medicine. Development: chemicals, including those extracted from natural products, are formulated into pills, capsules, or syrups for oral use, injections for direct infusion into the blood, or drops for the eyes or ears. Preclinical research: drugs undergo laboratory and animal testing to ensure that they can safely be used in humans. Clinical testing: the drug is tested in people to confirm that it is safe to use. FDA review: the drug application is submitted to the FDA before the drug is launched onto the market. FDA post-market review: the drug is reviewed and monitored by the FDA for safety once it is available to the public. Regulation The regulation of drugs varies by jurisdiction. In some countries, such as the United States, they are regulated at the national level by a single agency. In other jurisdictions, they are regulated at the state level, or at both state and national levels by various bodies, as is the case in Australia. Therapeutic goods regulation is designed mainly to protect the health and safety of the population. It aims at ensuring the safety, quality, and efficacy of the therapeutic goods covered under the scope of the regulation. In most jurisdictions, therapeutic goods must be registered before they are allowed to be marketed. There is usually some degree of restriction on the availability of certain therapeutic goods depending on their risk to consumers. Depending upon the jurisdiction, drugs may be divided into over-the-counter drugs (OTC), which may be available without special restrictions, and prescription drugs, which must be prescribed by a licensed medical practitioner in accordance with medical guidelines due to the risk of adverse effects and contraindications. The precise distinction between OTC and prescription depends on the legal jurisdiction. A third category, "behind-the-counter" drugs, is implemented in some jurisdictions. These do not require a prescription, but must be kept in the dispensary, not visible to the public, and be sold only by a pharmacist or pharmacy technician. Doctors may also prescribe prescription drugs for off-label use – purposes which the drugs were not originally approved for by the regulatory agency. The Classification of Pharmaco-Therapeutic Referrals helps guide the referral process between pharmacists and doctors.
The International Narcotics Control Board of the United Nations enforces a worldwide prohibition of certain drugs. It publishes a lengthy list of chemicals and plants whose trade and consumption (where applicable) are forbidden. OTC drugs are sold without restriction as they are considered safe enough that most people will not hurt themselves accidentally by taking them as instructed. Many countries, such as the United Kingdom, have a third category of "pharmacy medicines", which can be sold only in registered pharmacies by or under the supervision of a pharmacist. Medical errors include over-prescription and polypharmacy, mis-prescription, contraindication, and lack of detail in dosage and administration instructions. In 2000 the definition of a prescription error was studied using a Delphi method conference; the conference was motivated by ambiguity about what a prescription error is and the need for a uniform definition in studies. Drug pricing In many jurisdictions, drug prices are regulated. United Kingdom In the UK, the Pharmaceutical Price Regulation Scheme is intended to ensure that the National Health Service is able to purchase drugs at reasonable prices. The prices are negotiated between the Department of Health, acting with the authority of Northern Ireland and the UK Government, and the representatives of the pharmaceutical industry brands, the Association of the British Pharmaceutical Industry (ABPI). For 2017 the payment percentage set by the PPRS was 4.75%. Canada In Canada, the Patented Medicine Prices Review Board examines drug pricing and determines whether a price is excessive or not. In these circumstances, drug manufacturers must submit a proposed price to the appropriate regulatory agency. Furthermore, "the International Therapeutic Class Comparison Test is responsible for comparing the National Average Transaction Price of the patented drug product under review"; the countries against which prices are compared are France, Germany, Italy, Sweden, Switzerland, the United Kingdom, and the United States. Brazil In Brazil, prices have been regulated since 1999 through legislation covering the Medicamento Genérico (generic drugs). India In India, drug prices are regulated by the National Pharmaceutical Pricing Authority. United States In the United States, drug costs are partially unregulated; instead they are the result of negotiations between drug companies and insurance companies. High prices have been attributed to monopolies given to manufacturers by the government. New drug development costs continue to rise as well. Despite the enormous advances in science and technology, the number of new blockbuster drugs approved by the government per billion dollars spent has halved every 9 years since 1950. Blockbuster drug A blockbuster drug is a drug that generates more than $1 billion in revenue for a pharmaceutical company in a single year. Cimetidine was the first drug ever to reach more than $1 billion a year in sales, thus making it the first blockbuster drug. History Prescription drug history Antibiotics first arrived on the medical scene in 1932 thanks to Gerhard Domagk, and were dubbed the "wonder drugs". The introduction of the sulfa drugs led the mortality rate from pneumonia in the U.S. to drop from 0.2% each year to 0.05% (a quarter as much) by 1939. Antibiotics inhibit the growth or the metabolic activities of bacteria and other microorganisms by a chemical substance of microbial origin.
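The claim above that the number of new drugs approved per billion dollars of R&D spending has halved roughly every nine years since 1950 is simple exponential decay. The sketch below shows the arithmetic in Python; the starting productivity value is a hypothetical figure chosen only for illustration.

```python
# Illustrative arithmetic only: if R&D productivity (approved drugs per
# billion dollars) halves every 9 years, productivity after t years is
# p(t) = p0 * 0.5 ** (t / 9).

def productivity(p0: float, years: float, halving_period: float = 9.0) -> float:
    """Drugs approved per billion dollars after `years`, from a starting value p0."""
    return p0 * 0.5 ** (years / halving_period)

# Hypothetical starting value of 10 approvals per $1 billion in 1950:
p0 = 10.0
for year in (1950, 1977, 2004, 2013):
    print(year, round(productivity(p0, year - 1950), 2))
# 63 years later (2013), 63/9 = 7 halvings -> 10 / 2**7 ≈ 0.08 approvals per $1 billion
```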
Penicillin, introduced a few years later, provided a broader spectrum of activity compared to sulfa drugs and reduced side effects. Streptomycin, found in 1942, proved to be the first drug effective against the cause of tuberculosis and also came to be the best known of a long series of important antibiotics. A second generation of antibiotics was introduced in the 1940s: aureomycin and chloramphenicol. Aureomycin was the best known of the second generation. Lithium was discovered in the 19th century for nervous disorders, for its possible mood-stabilizing or prophylactic effect; it was cheap and easily produced. As lithium fell out of favor in France, valpromide came into play. This anticonvulsant was the origin of the drug that eventually created the mood stabilizer category. Valpromide had distinct psychotropic effects that were of benefit both in the treatment of acute manic states and in the maintenance treatment of manic-depressive illness. Psychotropics can be either sedative or stimulant; sedatives aim at damping down the extremes of behavior, while stimulants aim at restoring normality by increasing tone. Soon arose the notion of a tranquilizer, which was quite different from any sedative or stimulant. The term tranquilizer took over the notions of sedatives and became the dominant term in the West through the 1980s. In Japan, during this time, the term tranquilizer produced the notion of a psyche-stabilizer, and the term mood stabilizer vanished. Premarin (conjugated estrogens, introduced in 1942) and Prempro (a combination estrogen-progestin pill, introduced in 1995) dominated hormone replacement therapy (HRT) during the 1990s. HRT is not a life-saving drug, nor does it cure any disease; it has been prescribed to improve quality of life. Doctors prescribe estrogen for their older female patients both to treat short-term menopausal symptoms and to prevent long-term diseases. In the 1960s and early 1970s, more and more physicians began to prescribe estrogen for their female patients. Between 1991 and 1999, Premarin was listed as the most popular prescription and best-selling drug in America. The first oral contraceptive, Enovid, was approved by the FDA in 1960. Oral contraceptives inhibit ovulation and so prevent conception. Enovid was known to be much more effective than alternatives including the condom and the diaphragm. As early as 1960, oral contraceptives were available in several different strengths from every manufacturer. In the 1980s and 1990s, an increasing number of options arose, including, most recently, a new delivery system for the oral contraceptive via a transdermal patch. In 1982, a new version of "the pill" was introduced, known as the biphasic pill. By 1985, a new triphasic pill was approved. Physicians began to think of "the pill" as an excellent means of birth control for young women. Stimulants such as Ritalin (methylphenidate) came to be pervasive tools for behavior management and modification in young children. Ritalin was first marketed in 1955 for narcolepsy; its potential users were the middle-aged and the elderly. It was not until the 1980s, when it began to be prescribed for hyperactivity in children, that Ritalin came into widespread use. Medical use of methylphenidate is predominantly for symptoms of attention deficit hyperactivity disorder (ADHD). Consumption of methylphenidate in the U.S. outpaced that of all other countries between 1991 and 1999. Significant growth in consumption was also evident in Canada, New Zealand, Australia, and Norway.
Currently, 85% of the world's methylphenidate is consumed in America. The first minor tranquilizer was meprobamate. Only fourteen months after it was made available, meprobamate had become the country's largest-selling prescription drug. By 1957, meprobamate had become the fastest-growing drug in history. The popularity of meprobamate paved the way for Librium and Valium, two minor tranquilizers that belonged to a new chemical class of drugs called the benzodiazepines. These were drugs that worked chiefly as anti-anxiety agents and muscle relaxants. The first benzodiazepine was Librium. Three months after it was approved, Librium had become the most prescribed tranquilizer in the nation. Three years later, Valium hit the shelves and was ten times more effective as a muscle relaxant and anti-convulsant. Valium was the most versatile of the minor tranquilizers. Later came the widespread adoption of major tranquilizers such as chlorpromazine and the drug reserpine. In 1970, sales began to decline for Valium and Librium, but sales of new and improved tranquilizers, such as Xanax, introduced in 1981 for the newly created diagnosis of panic disorder, soared. Mevacor (lovastatin) is the first and most influential statin in the American market. The 1991 launch of Pravachol (pravastatin), the second available in the United States, and the release of Zocor (simvastatin) made Mevacor no longer the only statin on the market. In 1998, Viagra was released as a treatment for erectile dysfunction. Ancient pharmacology Using plants and plant substances to treat all kinds of diseases and medical conditions is believed to date back to prehistoric medicine. The Kahun Gynaecological Papyrus, the oldest known medical text of any kind, dates to about 1800 BC and represents the first documented use of any kind of drug. It and other medical papyri describe Ancient Egyptian medical practices, such as using honey to treat infections and the legs of bee-eaters to treat neck pains. Ancient Babylonian medicine demonstrated the use of medication in the first half of the 2nd millennium BC. Medicinal creams and pills were employed as treatments. On the Indian subcontinent, the Atharvaveda, a sacred text of Hinduism whose core dates from the second millennium BC, although the hymns recorded in it are believed to be older, is the first Indic text dealing with medicine. It describes plant-based drugs to counter diseases. The earliest foundations of ayurveda were built on a synthesis of selected ancient herbal practices, together with a massive addition of theoretical conceptualizations, new nosologies and new therapies dating from about 400 BC onwards. The student of Āyurveda was expected to know ten arts that were indispensable in the preparation and application of his medicines: distillation, operative skills, cooking, horticulture, metallurgy, sugar manufacture, pharmacy, analysis and separation of minerals, compounding of metals, and preparation of alkalis. The Hippocratic Oath for physicians, attributed to fifth century BC Greece, refers to the existence of "deadly drugs", and ancient Greek physicians imported drugs from Egypt and elsewhere. The pharmacopoeia , written between 50 and 70 CE by the Greek physician Pedanius Dioscorides, was widely read for more than 1,500 years. Medieval pharmacology Al-Kindi's ninth century AD book, De Gradibus and Ibn Sina (Avicenna)'s The Canon of Medicine, covers a range of drugs known to the practice of medicine in the medieval Islamic world. 
Medieval medicine of Western Europe saw advances in surgery compared with earlier periods, but few truly effective drugs existed, beyond opium (found in such extremely popular drugs as the "Great Rest" of the Antidotarium Nicolai at the time) and quinine. Folklore cures and potentially poisonous metal-based compounds were popular treatments. Theodoric Borgognoni (1205–1296), one of the most significant surgeons of the medieval period, was responsible for introducing and promoting important surgical advances, including basic antiseptic practice and the use of anaesthetics. Garcia de Orta described some herbal treatments that were used. Modern pharmacology For most of the 19th century, drugs were not highly effective, leading Oliver Wendell Holmes Sr. to famously comment in 1842 that "if all medicines in the world were thrown into the sea, it would be all the better for mankind and all the worse for the fishes". During the First World War, Alexis Carrel and Henry Dakin developed the Carrel-Dakin method of treating wounds by irrigation with Dakin's solution, a germicide which helped prevent gangrene. In the inter-war period, the first anti-bacterial agents, such as the sulpha antibiotics, were developed. The Second World War saw the introduction of widespread and effective antimicrobial therapy with the development and mass production of penicillin antibiotics, made possible by the pressures of the war and the collaboration of British scientists with the American pharmaceutical industry. Medicines commonly used by the late 1920s included aspirin, codeine, and morphine for pain; digitalis, nitroglycerin, and quinine for heart disorders; and insulin for diabetes. Other drugs included antitoxins, a few biological vaccines, and a few synthetic drugs. In the 1930s, antibiotics emerged: first sulfa drugs, then penicillin and other antibiotics. Drugs increasingly became "the center of medical practice". In the 1950s, other drugs emerged, including corticosteroids for inflammation, rauvolfia alkaloids as tranquilizers and antihypertensives, antihistamines for nasal allergies, xanthines for asthma, and typical antipsychotics for psychosis. As of 2007, thousands of approved drugs have been developed. Increasingly, biotechnology is used to discover biopharmaceuticals. Recently, multi-disciplinary approaches have yielded a wealth of new data on the development of novel antibiotics and antibacterials and on the use of biological agents for antibacterial therapy. In the 1950s, new psychiatric drugs, notably the antipsychotic chlorpromazine, were designed in laboratories and slowly came into preferred use. Although often accepted as an advance in some ways, there was some opposition, due to serious adverse effects such as tardive dyskinesia. Patients often opposed psychiatry and refused or stopped taking the drugs when not subject to psychiatric control. Governments have been heavily involved in the regulation of drug development and drug sales. In the U.S., the Elixir Sulfanilamide disaster led to the passage of the 1938 Federal Food, Drug, and Cosmetic Act, which required manufacturers to file new drugs with the Food and Drug Administration (FDA). The 1951 Durham-Humphrey Amendment required certain drugs to be sold by prescription. In 1962, a subsequent amendment required new drugs to be tested for efficacy and safety in clinical trials. Until the 1970s, drug prices were not a major concern for doctors and patients.
As more drugs became prescribed for chronic illnesses, however, costs became burdensome, and by the 1970s nearly every U.S. state required or encouraged the substitution of generic drugs for higher-priced brand names. This also led to the 2006 U.S. law, Medicare Part D, which offers Medicare coverage for drugs. As of 2008, the United States is the leader in medical research, including pharmaceutical development. U.S. drug prices are among the highest in the world, and drug innovation is correspondingly high. In 2000, U.S.-based firms developed 29 of the 75 top-selling drugs; firms from the second-largest market, Japan, developed eight, and the United Kingdom contributed 10. France, which imposes price controls, developed three. Throughout the 1990s, outcomes were similar. Controversies Controversies concerning pharmaceutical drugs include patient access to drugs under development and not yet approved, pricing, and environmental issues. Access to unapproved drugs Governments worldwide have created provisions for granting access to drugs prior to approval for patients who have exhausted all alternative treatment options and do not match clinical trial entry criteria. Often grouped under the labels of compassionate use, expanded access, or named patient supply, these programs are governed by rules which vary by country defining access criteria, data collection, promotion, and control of drug distribution. Within the United States, pre-approval demand is generally met through treatment IND (investigational new drug) applications (INDs), or single-patient INDs. These mechanisms, which fall under the label of expanded access programs, provide access to drugs for groups of patients or individuals residing in the US. Outside the US, Named Patient Programs provide controlled, pre-approval access to drugs in response to requests by physicians on behalf of specific, or "named", patients before those medicines are licensed in the patient's home country. Through these programs, patients are able to access drugs in late-stage clinical trials or approved in other countries for a genuine, unmet medical need, before those drugs have been licensed in the patient's home country. Patients who have not been able to get access to drugs in development have organized and advocated for greater access. In the United States, ACT UP formed in the 1980s, and eventually formed its Treatment Action Group in part to pressure the US government to put more resources into discovering treatments for AIDS and then to speed release of drugs that were under development. The Abigail Alliance was established in November 2001 by Frank Burroughs in memory of his daughter, Abigail. The Alliance seeks broader availability of investigational drugs on behalf of terminally ill patients. In 2013, BioMarin Pharmaceutical was at the center of a high-profile debate regarding expanded access of cancer patients to experimental drugs. Access to medicines and drug pricing Essential medicines, as defined by the World Health Organization (WHO), are "those drugs that satisfy the health care needs of the majority of the population; they should therefore be available at all times in adequate amounts and in appropriate dosage forms, at a price the community can afford." 
Recent studies have found that most of the medicines on the WHO essential medicines list, outside of the field of HIV drugs, are not patented in the developing world, and that lack of widespread access to these medicines arises from issues fundamental to economic development – lack of infrastructure and poverty. Médecins Sans Frontières also runs the Campaign for Access to Essential Medicines, which includes advocacy for greater resources to be devoted to currently untreatable diseases that primarily occur in the developing world. The Access to Medicine Index tracks how well pharmaceutical companies make their products available in the developing world. World Trade Organization negotiations in the 1990s, including the TRIPS Agreement and the Doha Declaration, have centered on issues at the intersection of international trade in pharmaceuticals and intellectual property rights, with developed world nations seeking strong intellectual property rights to protect investments made to develop new drugs, and developing world nations seeking to promote their generic pharmaceuticals industries and their ability to make medicine available to their people via compulsory licenses. Some have raised ethical objections specifically with respect to pharmaceutical patents and the high prices for drugs that they enable their proprietors to charge, which poor people around the world cannot afford. Critics also question the rationale that exclusive patent rights and the resulting high prices are required for pharmaceutical companies to recoup the large investments needed for research and development. One study concluded that marketing expenditures for new drugs often doubled the amount that was allocated for research and development. Other critics claim that patent settlements are costly for consumers, the health care system, and state and federal governments because they delay access to lower-cost generic medicines. Novartis fought a protracted battle with the government of India over the patenting of its drug Gleevec in India, which ended up in the Supreme Court of India in a case known as Novartis v. Union of India & Others. The Supreme Court ruled narrowly against Novartis, but opponents of patenting drugs claimed it as a major victory. Environmental issues Pharmaceutical medications are commonly described as "ubiquitous" in nearly every type of environmental medium (e.g., lakes, rivers, streams, estuaries, seawater, and soil) worldwide. Their chemical components are typically present at relatively low concentrations, in the ng/L to μg/L ranges. The primary avenue for medications reaching the environment is the effluent of wastewater treatment plants, both from industrial plants during production and from municipal plants after consumption. Agricultural pollution is another significant source, derived from the prevalence of antibiotic use in livestock. Scientists generally divide the environmental impacts of a chemical into three primary categories: persistence, bioaccumulation, and toxicity. Since medications are inherently bio-active, most are naturally degradable in the environment; however, they are classified as "pseudopersistent" because they are constantly being replenished from their sources. These Environmentally Persistent Pharmaceutical Pollutants (EPPPs) rarely reach toxic concentrations in the environment; however, they have been known to bioaccumulate in some species.
Their effects have been observed to compound gradually across food webs, rather than becoming acute, leading to their classification by the US Geological Survey as "Ecological Disrupting Compounds."
Biology and health sciences
Drugs and medication
null
180210
https://en.wikipedia.org/wiki/Biogeographic%20realm
Biogeographic realm
A biogeographic realm is the broadest biogeographic division of Earth's land surface, based on distributional patterns of terrestrial organisms. They are subdivided into bioregions, which are further subdivided into ecoregions. A biogeographic realm is also known as "ecozone", although that term may also refer to ecoregions. Description The realms delineate large areas of Earth's surface within which organisms have evolved in relative isolation over long periods of time, separated by geographic features, such as oceans, broad deserts, or high mountain ranges, that constitute natural barriers to migration. As such, biogeographic realm designations are used to indicate general groupings of organisms based on their shared biogeography. Biogeographic realms correspond to the floristic kingdoms of botany or zoogeographic regions of zoology. From 1872, Alfred Russel Wallace developed a system of zoogeographic regions, extending the ornithologist Philip Sclater's system of six regions. Biogeographic realms are characterized by the evolutionary history of the organisms they contain. They are distinct from biomes, also known as major habitat types, which are divisions of the Earth's surface based on life form, or the adaptation of animals, fungi, micro-organisms and plants to climatic, soil, and other conditions. Biomes are characterized by similar climax vegetation. Each realm may include a number of different biomes. A tropical moist broadleaf forest in Central America, for example, may be similar to one in New Guinea in its vegetation type and structure, climate, soils, etc., but these forests are inhabited by animals, fungi, micro-organisms and plants with very different evolutionary histories. The distribution of organisms among the world's biogeographic realms has been influenced by the distribution of landmasses, as shaped by plate tectonics over the geological history of the Earth. Concept history The "biogeographic realms" of Udvardy were defined based on taxonomic composition. The rank corresponds more or less to the floristic kingdoms and zoogeographic regions. The usage of the term "ecozone" is more variable. Beginning in the 1960s, it was used originally in the field of biostratigraphy to denote intervals of geological strata with fossil content demonstrating a specific ecology. In Canadian literature, the term was used by Wiken in macro level land classification, with geographic criteria (see Ecozones of Canada). Later, Schultz would use it with ecological and physiognomical criteria, in a way similar to the concept of biome. In the Global 200/WWF scheme, originally the term "biogeographic realm" in Udvardy sense was used. However, in a scheme of BBC, it was replaced by the term "ecozone". Terrestrial biogeographic realms Udvardy biogeographic realms WWF / Global 200 biogeographic realms The World Wildlife Fund scheme is broadly similar to Miklos Udvardy's system, the chief difference being the delineation of the Australasian realm relative to the Antarctic, Oceanic, and Indomalayan realms. In the WWF system, the Australasia realm includes Australia, Tasmania, the islands of Wallacea, New Guinea, the East Melanesian Islands, New Caledonia, and New Zealand. Udvardy's Australian realm includes only Australia and Tasmania; he places Wallacea in the Indomalayan Realm, New Guinea, New Caledonia, and East Melanesia in the Oceanian Realm, and New Zealand in the Antarctic Realm. The Palearctic and Nearctic are sometimes grouped into the Holarctic realm. 
Morrone biogeographic kingdoms Following the nomenclatural conventions set out in the International Code of Area Nomenclature, Morrone defined the next biogeographic kingdoms (or realms) and regions: Holarctic kingdom Heilprin (1887) Nearctic region Sclater (1858) Palearctic region Sclater (1858) Holotropical kingdom Rapoport (1968) Neotropical region Sclater (1858) Ethiopian region Sclater (1858) Oriental region Wallace (1876) Austral kingdom Engler (1899) Cape region Grisebach (1872) Andean region Engler (1882) Australian region Sclater (1858) Antarctic region Grisebach (1872) Transition zones: Mexican transition zone (Nearctic–Neotropical transition) Saharo-Arabian transition zone (Palearctic–Ethiopian transition) Chinese transition zone (Palearctic–Oriental transition zone transition) Indo-Malayan, Indonesian or Wallace's transition zone (Oriental–Australian transition) South American transition zone (Neotropical–Austral transition) Freshwater biogeographic realms The applicability of Udvardy scheme to most freshwater taxa is unresolved. The drainage basins of the principal oceans and seas of the world are marked by continental divides. The grey areas are endorheic basins that do not drain to the ocean. Marine biogeographic realms According to Briggs and Morrone: According to the WWF scheme:
Biology and health sciences
Ecology
Biology
180211
https://en.wikipedia.org/wiki/Precious%20metal
Precious metal
Precious metals are rare, naturally occurring metallic chemical elements of high economic value. Precious metals, particularly the noble metals, are more corrosion resistant and less chemically reactive than most elements. They are usually ductile and have a high lustre. Historically, precious metals were important as currency but they are now regarded mainly as investment and industrial raw materials. Gold, silver, platinum, and palladium each have an ISO 4217 currency code. The best known precious metals are the precious coinage metals, which are gold and silver. Although both have industrial uses, they are better known for their uses in art, jewelry, and coinage. Other precious metals include the platinum group metals: ruthenium, rhodium, palladium, osmium, iridium, and platinum, of which platinum is the most widely traded. The demand for precious metals is driven not only by their practical use but also by their role as investments and a store of value. Historically, precious metals have commanded much higher prices than common industrial metals. Bullion A metal is deemed to be precious if it is rare. The discovery of new sources of ore or improvements in mining or refining processes may cause the value of a precious metal to diminish. The status of a "precious" metal can also be determined by high demand or market value. Precious metals in bulk form are known as bullion and are traded on commodity markets. Bullion metals may be cast into ingots or minted into coins. The defining attribute of bullion is that it is valued by its mass and purity rather than by a face value as money. Purity and mass The level of purity varies from issue to issue. "Three nines" (99.9%) purity is common. The purest mass-produced bullion coins are in the Canadian Gold Maple Leaf series, which go up to 99.999% purity. A 100% pure bullion is nearly impossible: as the percentage of impurities diminishes, it becomes progressively more difficult to purify the metal further. Historically, coins had a certain amount of weight of alloy, with the purity a local standard. The Krugerrand is the first modern example of measuring in "pure gold": it should contain at least ounces of at least pure gold. Other bullion coins (for example the British Sovereign) show neither the purity nor the fine-gold weight on the coin but are recognized and consistent in their composition. Many coins historically showed a denomination in currency (example: American double eagle: $20). Coinage Many nations mint bullion coins. Although nominally issued as legal tender, these coins' face value as currency is far below their value as bullion. For instance, Canada mints a gold bullion coin (the Gold Maple Leaf) at a face value of $50 containing one troy ounce (31.1035 g) of gold, as of January 2022. The USD to CAD exchange rate averaged 1.129 in July 2009 according to OANDA Historical Exchange Rates. Although the exact moment that the $1,075 figure was determined is unknown, it may be considered a reasonable value for the time. Bullion coins' minting by national governments gives them some numismatic value in addition to their bullion value, as well as certifying their purity. One of the largest bullion coins in the world was the 10,000-dollar Australian Gold Nugget coin minted in Australia, which consists of a full kilogram of 99.9% pure gold. In 2012, the Perth Mint produced a 1-tonne coin of 99.99% pure gold with a face value of $1 million AUD, making it the largest minted coin in the world with a gold value of around $50 million AUD. 
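The gap between a bullion coin's nominal face value and the value of the metal it contains, discussed above for the Gold Maple Leaf, reduces to mass × purity × spot price. A minimal sketch in Python follows; the spot price used is an assumed, illustrative figure, not a quoted market price.

```python
# Minimal sketch: comparing a bullion coin's face value with its melt
# (bullion) value. The Gold Maple Leaf figures (one troy ounce = 31.1035 g,
# $50 face value, four-nines purity) come from the text; the spot price
# below is purely an illustrative assumption.

TROY_OUNCE_G = 31.1035

def melt_value(mass_g: float, purity: float, spot_per_ozt: float) -> float:
    """Value of the contained fine metal at the given spot price per troy ounce."""
    fine_ozt = (mass_g * purity) / TROY_OUNCE_G
    return fine_ozt * spot_per_ozt

face_value = 50.0          # nominal legal-tender value of the coin
assumed_spot = 1800.0      # hypothetical gold price per troy ounce
value = melt_value(31.1035, 0.9999, assumed_spot)
print(f"melt value ≈ {value:,.0f} vs face value {face_value:,.0f}")
# With any realistic spot price the melt value dwarfs the face value, which
# is why bullion is valued by mass and purity rather than denomination.
```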
China has produced coins in very limited quantities (less than 20 pieces minted) that exceed of gold. Austria has minted a coin containing 31 kg of gold (the Vienna Philharmonic Coin minted in 2004 with a face value of 100,000 euro). As a stunt to publicise the 99.999% pure one-ounce Canadian Gold Maple Leaf series, in 2007 the Royal Canadian Mint made a 100 kg 99.999% gold coin, with a face value of $1 million, and now manufactures them to order, but at a substantial premium over the market value of the gold. The Reserve Bank of Zimbabwe mints the gold Mosi-oa-Tunya (coin) which is recognized as legal tender at the market value for its gold content. Economic use Gold and silver, and sometimes other precious metals, are often seen as defensive assets against both inflation and economic downturn. Silver coins have become popular with collectors due to their relative affordability, and, unlike most gold and platinum issues which are valued based upon the markets, silver issues are more often valued as collectibles, at far higher than their bullion value. Industrial use Platinum and palladium are key catalysts in hydrogenation reactions and emission-reducing catalytic converters, while gold is used in oxidation reactions and nanotechnology due to its stability. Platinum group metals(PGMs) have been used in the production of sulfuric and nitric acid for centuries. Additionally, gold and silver nanoparticles are used in biosensors and solar cells, underscoring their value in sustainable technologies. Cultural and artistic use Precious metals such as gold, silver, and platinum have been used for millennia to create objects of cultural and artistic significance. In jewelry, they are a cornerstone for crafting wedding bands, engagement rings, and ceremonial adornments, often symbolizing love, commitment, and social status. Beyond jewelry, these metals are employed in fine art, including sculptures, decorative artifacts, and religious icons, showcasing their versatility and aesthetic appeal. Wedding bands, in particular, remain a significant driver of demand for gold and platinum, blending economic and cultural value. As modern tastes evolve, the use of recycled metals and innovative designs has brought a sustainable dimension to their cultural and artistic applications, reflecting contemporary values while maintaining their historical significance. Aluminium Aluminium is now commonplace but was considered to be a precious metal until the late 1800s. Although aluminium is the third most abundant element and the most abundant metal in the Earth's crust, it was at first found to be exceedingly difficult to extract the metal from its various non-metallic ores. The great expense of refining the metal made the small available quantity of pure aluminium more valuable than gold. Bars of aluminium were exhibited at the Exposition Universelle of 1855, and Napoleon III's most important guests were given aluminium cutlery, while those less worthy dined with mere silver. In 1884, the pyramidal capstone of the Washington Monument was cast of 100 ounces of pure aluminium. By that time, aluminium was as expensive as silver. The statue of Anteros atop the Shaftesbury Memorial Fountain (1885–1893) in London's Piccadilly Circus is also of cast aluminium. Over time, however, the price of the metal has dropped. The dawn of commercial electric generation in 1882 and the invention of the Hall–Héroult process in 1886 caused the price of aluminium to drop substantially over a short period of time. 
Rough world market price ($/kg)
Physical sciences
d-Block
Chemistry
180234
https://en.wikipedia.org/wiki/Kilowatt-hour
Kilowatt-hour
A kilowatt-hour (unit symbol: kW⋅h or kW h; commonly written as kWh) is a non-SI unit of energy equal to 3.6 megajoules (MJ) in SI units, which is the energy delivered by one kilowatt of power for one hour. Kilowatt-hours are a common billing unit for electrical energy supplied by electric utilities. Metric prefixes are used for multiples and submultiples of the basic unit, the watt-hour (3.6 kJ). Definition The kilowatt-hour is a composite unit of energy equal to one kilowatt (kW) sustained for (multiplied by) one hour. The International System of Units (SI) unit of energy meanwhile is the joule (symbol J). Because a watt is by definition one joule per second, and because there are 3,600 seconds in an hour, one kWh equals 3,600 kilojoules or 3.6 MJ. Unit representations A widely used representation of the kilowatt-hour is kWh, derived from its component units, kilowatt and hour. It is commonly used in billing for delivered energy to consumers by electric utility companies, and in commercial, educational, and scientific publications, and in the media. It is also the usual unit representation in electrical power engineering. This common representation, however, does not comply with the style guide of the International System of Units (SI). Other representations of the unit may be encountered: kW⋅h and kW h are less commonly used, but they are consistent with the SI. The SI brochure states that in forming a compound unit symbol, "Multiplication must be indicated by a space or a half-high (centred) dot (⋅), since otherwise some prefixes could be misinterpreted as a unit symbol." This is supported by a standard issued jointly by an international (IEEE) and national (ASTM) organization, and by a major style guide. However, the IEEE/ASTM standard allows kWh (but does not mention other multiples of the watt-hour). One guide published by NIST specifically recommends against kWh "to avoid possible confusion". In 2014, the United States official fuel-economy window sticker for electric vehicles used the abbreviation kW-hrs. Variations in capitalization are sometimes encountered: KWh, KWH, kwh, etc., which are inconsistent with the International System of Units. The notation kW/h for the kilowatt-hour is incorrect, as it denotes kilowatt per hour. The hour is a unit of time listed among the non-SI units accepted by the International Bureau of Weights and Measures for use with the SI. An electric heater consuming 1,000 watts (1 kilowatt) operating for one hour uses one kilowatt-hour of energy. A television consuming 100 watts operating continuously for 10 hours uses one kilowatt-hour. A 40-watt electric appliance operating continuously for 25 hours uses one kilowatt-hour. Electricity sales Electrical energy is typically sold to consumers in kilowatt-hours. The cost of running an electrical device is calculated by multiplying the device's power consumption in kilowatts by the operating time in hours, and by the price per kilowatt-hour. The unit price of electricity charged by utility companies may depend on the customer's consumption profile over time. Prices vary considerably by locality. In the United States prices in different states can vary by a factor of three. While smaller customer loads are usually billed only for energy, transmission services, and the rated capacity, larger consumers also pay for peak power consumption, the greatest power recorded in a fairly short time, such as 15 minutes. This compensates the power company for maintaining the infrastructure needed to provide peak power. 
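The billing arithmetic described above (energy in kilowatt-hours equals power in kilowatts times hours, and cost equals energy times the unit price) can be written out directly. The sketch below uses Python; the tariff is an assumed illustrative value, not a real rate.

```python
# Minimal sketch of the billing arithmetic:
# energy (kWh) = power (kW) x time (h); cost = energy x price per kWh.

def energy_kwh(power_watts: float, hours: float) -> float:
    return (power_watts / 1000.0) * hours

def cost(power_watts: float, hours: float, price_per_kwh: float) -> float:
    return energy_kwh(power_watts, hours) * price_per_kwh

ASSUMED_PRICE = 0.15  # currency units per kWh (hypothetical tariff)

print(energy_kwh(1000, 1))           # electric heater: 1.0 kWh
print(energy_kwh(100, 10))           # television:      1.0 kWh
print(energy_kwh(40, 25))            # 40 W appliance:  1.0 kWh
print(cost(1000, 1, ASSUMED_PRICE))  # 0.15 per hour of heater use
```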
These charges are billed as demand charges. Industrial users may also have extra charges according to the power factor of their load. Major energy production or consumption is often expressed as terawatt-hours (TWh) for a given period that is often a calendar year or financial year. A 365-day year equals 8,760 hours, so over a period of one year, power of one gigawatt equates to 8.76 terawatt-hours of energy. Conversely, one terawatt-hour is equal to a sustained power of about 114 megawatts for a period of one year. Examples In 2020, the average household in the United States consumed 893 kWh per month. Raising the temperature of 1 litre of water from room temperature to the boiling point with an electric kettle takes about 0.1 kWh. A 12-watt LED lamp lit constantly uses about 0.3 kWh per 24 hours and about 9 kWh per month. In terms of human power, a healthy adult male manual laborer performs work equal to about half a kilowatt-hour over an eight-hour day. Conversions To convert a quantity measured in a unit in the left column to the units in the top row, multiply by the factor in the cell where the row and column intersect. Watt-hour multiples All the SI prefixes are commonly applied to the watt-hour: a kilowatt-hour (kWh) is 1,000 Wh; a megawatt-hour (MWh) is 1 million Wh; a milliwatt-hour (mWh) is 0.001 Wh; and so on. The kilowatt-hour is commonly used by electrical energy providers for purposes of billing, since the monthly energy consumption of a typical residential customer ranges from a few hundred to a few thousand kilowatt-hours. Megawatt-hours (MWh), gigawatt-hours (GWh), and terawatt-hours (TWh) are often used for metering larger amounts of electrical energy to industrial customers and in power generation. The terawatt-hour and petawatt-hour (PWh) units are large enough to conveniently express the annual electricity generation for whole countries and the world energy consumption. Distinction between kWh (energy) and kW (power) A kilowatt is a unit of power (rate of flow of energy per unit of time). A kilowatt-hour is a unit of energy. Kilowatt per hour would be a rate of change of power flow with time. Work is the amount of energy transferred to a system; power is the rate of delivery of energy. Energy is measured in joules, or watt-seconds. Power is measured in watts, or joules per second. For example, a battery stores energy. When the battery delivers its energy, it does so at a certain power, that is, the rate of delivery of the energy. The higher the power, the quicker the battery's stored energy is delivered. A higher power output will cause the battery's stored energy to be depleted in a shorter time period. Annualized power Electric energy production and consumption are sometimes reported on a yearly basis, in units such as megawatt-hours per year (MWh/yr), gigawatt-hours per year (GWh/yr), or terawatt-hours per year (TWh/yr). These units have dimensions of energy divided by time and thus are units of power. They can be converted to SI power units by dividing by the number of hours in a year, about 8,760. Thus, 1 GWh/yr = 1,000,000 kWh/yr ÷ 8,760 h/yr ≈ 114 kW. Misuse of watts per hour Many compound units for various kinds of rates explicitly mention units of time to indicate a change over time. For example: miles per hour, kilometres per hour, dollars per hour. Power units, such as kW, already measure the rate of energy per unit time (kW = kJ/s). Kilowatt-hours are a product of power and time, not a rate of change of power with time. Watts per hour (W/h) is a unit of a change of power per hour, i.e.
an acceleration in the delivery of energy. It is used to measure the daily variation of demand (e.g., the slope of the duck curve) or the ramp-up behavior of power plants. For example, a power plant that ramps its output from 0 MW to 1 MW in 15 minutes has a ramp-up rate of 4 MW/h. Other uses of terms such as watts per hour are likely to be errors. Other related energy units Several other units related to the kilowatt-hour are commonly used to indicate power or energy capacity or use in specific application areas. Average annual energy production or consumption can be expressed in kilowatt-hours per year. This is used with loads or output that vary during the year but whose annual totals are similar from one year to the next. For example, it is useful to compare the energy efficiency of household appliances whose power consumption varies with time or the season of the year. Another use is to measure the energy produced by a distributed power source. One kilowatt-hour per year equals about 114.08 milliwatts applied constantly during one year. The energy content of a battery is usually expressed indirectly by its capacity in ampere-hours; to convert ampere-hours (Ah) to watt-hours (Wh), the ampere-hour value must be multiplied by the voltage of the power source. This value is approximate, since the battery voltage is not constant during its discharge, and because higher discharge rates reduce the total amount of energy that the battery can provide. In the case of devices that output a different voltage than the battery, it is the battery voltage (typically 3.7 V for Li-ion) that must be used to calculate the energy in watt-hours, rather than the device output voltage (for example, usually 5.0 V for USB portable chargers). This results in a 500 mA USB device running for about 3.7 hours on a 2,500 mAh battery, not five hours. The Board of Trade unit (B.T.U.) is an obsolete UK synonym for the kilowatt-hour. The term derives from the name of the Board of Trade, which regulated the electricity industry until 1942, when the Ministry of Power took over. It is distinct from the British thermal unit (BTU), which is 1055 J. In India, the kilowatt-hour is often simply called a unit of energy. A million units, designated MU, is a gigawatt-hour, and a BU (billion units) is a terawatt-hour.
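The battery example above follows directly from the ampere-hour-to-watt-hour conversion: energy is computed at the battery's own voltage, and run time is that energy divided by the load power. A minimal Python sketch of the arithmetic, ignoring converter losses and discharge-rate effects (which is why the article calls the figure approximate):

```python
# Minimal sketch of the battery run-time arithmetic described above.
# Energy must be computed at the battery's own voltage (about 3.7 V for
# Li-ion), not at the device's output voltage (5 V for USB).

def battery_energy_wh(capacity_mah: float, battery_voltage: float) -> float:
    return (capacity_mah / 1000.0) * battery_voltage

def runtime_hours(capacity_mah: float, battery_voltage: float,
                  load_current_a: float, load_voltage: float) -> float:
    load_power_w = load_current_a * load_voltage
    return battery_energy_wh(capacity_mah, battery_voltage) / load_power_w

energy = battery_energy_wh(2500, 3.7)        # 9.25 Wh stored
hours = runtime_hours(2500, 3.7, 0.5, 5.0)   # ~3.7 h, not 2500 mAh / 500 mA = 5 h
print(energy, round(hours, 1))
```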
Physical sciences
Energy
Basics and measurement
180251
https://en.wikipedia.org/wiki/SPICE
SPICE
SPICE (Simulation Program with Integrated Circuit Emphasis) is a general-purpose, open-source analog electronic circuit simulator. It is a program used in integrated circuit and board-level design to check the integrity of circuit designs and to predict circuit behavior. Introduction Unlike board-level designs composed of discrete parts, it is not practical to breadboard integrated circuits before manufacture. Further, the high costs of photolithographic masks and other manufacturing prerequisites make it essential to design the circuit to be as close to perfect as possible before the integrated circuit is first built. Simulating the circuit with SPICE is the industry-standard way to verify circuit operation at the transistor level before committing to manufacturing an integrated circuit. The SPICE simulators help to predict the behavior of the IC under different operating conditions, such as different voltage and current levels, temperature variations, and noise. Board-level circuit designs can often be breadboarded for testing. Even with a breadboard, some circuit properties may not be accurate compared to the final printed wiring board, such as parasitic resistances and capacitances, whose effects can often be estimated more accurately using simulation. Also, designers may want more information about the circuit than is available from a single mock-up. For instance, circuit performance is affected by component manufacturing tolerances. In these cases it is common to use SPICE to perform Monte Carlo simulations of the effect of component variations on performance, a task which is impractical using calculations by hand for a circuit of any appreciable complexity. Circuit simulation programs, of which SPICE and derivatives are the most prominent, take a text netlist describing the circuit elements (transistors, resistors, capacitors, etc.) and their connections, and translate this description into equations to be solved. The general equations produced are nonlinear differential algebraic equations which are solved using implicit integration methods, Newton's method and sparse matrix techniques. Origins SPICE was developed at the Electronics Research Laboratory of the University of California, Berkeley by Laurence Nagel with direction from his research advisor, Prof. Donald Pederson. SPICE1 is largely a derivative of the CANCER program, which Nagel had worked on under Prof. Ronald Rohrer. CANCER is an acronym for "Computer Analysis of Nonlinear Circuits, Excluding Radiation". At these times many circuit simulators were developed under contracts with the United States Department of Defense that needed the ability to evaluate the radiation hardness of a circuit. When Nagel's original advisor, Prof. Rohrer, left Berkeley, Prof. Pederson became his advisor. Pederson insisted that CANCER, a proprietary program, be rewritten enough that restrictions could be removed and the program could be put in the public domain. SPICE1 was first presented at a conference in 1973. SPICE1 is coded in FORTRAN and to construct the circuit equations uses nodal analysis, which has limitations in representing inductors, floating voltage sources and the various forms of controlled sources. SPICE1 has relatively few circuit elements available and uses a fixed-timestep transient analysis. The real popularity of SPICE started with SPICE2 in 1975. 
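As described above, a circuit simulator turns a netlist into nonlinear equations and solves them with Newton's method. The sketch below is an illustration of that idea, not Berkeley SPICE code: a one-node circuit (5 V source, 1 kΩ resistor, diode to ground) reduced to a single KCL equation f(v) = 0 and solved by Newton iteration. The diode parameters are typical textbook values chosen for illustration.

```python
import math

# Hedged illustration, not SPICE source code: a one-node "netlist"
# (5 V source -> 1 kΩ resistor -> diode to ground) reduced to a single
# nonlinear KCL equation f(v) = 0 and solved with Newton's method, the
# same basic approach SPICE applies to full systems of circuit equations.

VS, R = 5.0, 1e3          # source voltage and series resistance
IS, VT = 1e-14, 0.02585   # diode saturation current and thermal voltage

def f(v):                  # net current into the node: resistor in, diode out
    return (VS - v) / R - IS * math.expm1(v / VT)

def dfdv(v):
    return -1.0 / R - (IS / VT) * math.exp(v / VT)

v = 0.6                    # initial guess near a typical diode drop
for _ in range(100):
    step = f(v) / dfdv(v)
    v -= step
    if abs(step) < 1e-12:
        break

print(f"diode voltage ≈ {v:.4f} V, current ≈ {(VS - v) / R * 1000:.3f} mA")
```

Production simulators add step limiting and convergence aids around this basic iteration, which is one reason hand-rolled solvers like this toy are far less robust than SPICE itself.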
SPICE2, also coded in FORTRAN, is a much-improved program with more circuit elements, variable timestep transient analysis using either the trapezoidal (second order Adams-Moulton method) or the Gear integration method (also known as BDF), equation formulation via modified nodal analysis (avoiding the limitations of nodal analysis), and an innovative FORTRAN-based memory allocation system. Ellis Cohen led development from version 2B to the industry standard SPICE 2G6, the last FORTRAN version, released in 1983. SPICE3 was developed by Thomas Quarles (with A. Richard Newton as advisor) in 1989. It is written in C, uses the same netlist syntax, and added X Window System plotting. As an early public domain software program with source code available, SPICE was widely distributed and used. Its ubiquity became such that "to SPICE a circuit" remains synonymous with circuit simulation. SPICE source code was from the beginning distributed by UC Berkeley for a nominal charge (to cover the cost of magnetic tape). The license originally included distribution restrictions for countries not considered friendly to the US, but the source code is currently covered by the BSD license. The birth of SPICE was named an IEEE Milestone in 2011; the entry mentions that SPICE "evolved to become the worldwide standard integrated circuit simulator". Nagel was awarded the 2019 IEEE Donald O. Pederson Award in Solid-State Circuits for the development of SPICE. Successors Open-source successors No newer versions of Berkeley SPICE have been released after version 3f5 in 1993. Since then, the open-source or academic continuations of SPICE include: XSPICE, developed at Georgia Tech, which added mixed analog/digital "code models" for behavioral simulation; CIDER (previously CODECS), developed by UC Berkeley and Oregon State University, which added semiconductor device simulation; Ngspice, based on SPICE 3f5; WRspice, a C++ re-write of the original spice3f5 code. Other open-source simulators not developed by academics are QUCS, QUCS-S, Xyce, and Qucsator. Commercial versions and spinoffs Berkeley SPICE inspired and served as a basis for many other circuit simulation programs, in academia, in industry, and in commercial products. The first commercial version of SPICE is ISPICE, an interactive version on a timeshare service, National CSS. The most prominent commercial versions of SPICE include HSPICE (originally commercialized by Ashawna and Kim Hailey of Meta Software, but now owned by Synopsys) and PSPICE (now owned by Cadence Design Systems). The integrated circuit industry adopted SPICE quickly, and until commercial versions became well developed many IC design houses had proprietary versions of SPICE. Today a few IC manufacturers, typically the larger companies, have groups continuing to develop SPICE-based circuit simulation programs. Among these are ADICE and LTspice at Analog Devices, QSPICE at Qorvo, MCSPICE, followed by Mica at Freescale Semiconductor, now NXP Semiconductors, and TINA-TI at Texas Instruments. Both LTspice and TINA-TI come bundled with models from their respective company. Other companies maintain internal circuit simulators which are not directly based upon SPICE, among them PowerSpice at IBM, TITAN at Infineon Technologies, Lynx at Intel Corporation, and Pstar at NXP Semiconductors also. Program features and structure SPICE became popular because it contained the analyses and models needed to design integrated circuits of the time, and was robust enough and fast enough to be practical to use. 
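Modified nodal analysis, mentioned above as SPICE2's equation formulation, augments the node-voltage unknowns with a branch current for each voltage source, which is what lets ideal voltage sources be handled directly. A hedged, hand-stamped example for a trivial two-resistor divider follows (Python with NumPy); a real simulator assembles the same kind of matrix automatically from the netlist.

```python
import numpy as np

# Hedged illustration of modified nodal analysis (MNA): node voltages plus
# an extra unknown for the current through the voltage source.
# Circuit: 5 V source at node 1, R1 = 1 kΩ from node 1 to node 2,
# R2 = 2 kΩ from node 2 to ground.

R1, R2, V1 = 1e3, 2e3, 5.0

# Unknown vector x = [v1, v2, i_V1]
A = np.array([
    [ 1/R1, -1/R1,        1.0],   # KCL at node 1 (includes source branch current)
    [-1/R1,  1/R1 + 1/R2, 0.0],   # KCL at node 2
    [ 1.0,   0.0,         0.0],   # branch equation: v1 = V1
])
b = np.array([0.0, 0.0, V1])

v1, v2, i_src = np.linalg.solve(A, b)
print(f"v1 = {v1:.3f} V, v2 = {v2:.3f} V, I(V1) = {i_src * 1e3:.3f} mA")
# v2 = 5 * R2 / (R1 + R2) ≈ 3.333 V; I(V1) comes out negative by the usual
# SPICE convention, since current flows out of the source's positive terminal.
```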
Precursors to SPICE often had a single purpose: The BIAS program, for example, did simulation of bipolar transistor circuit operating points; the SLIC program did only small-signal analyses. SPICE combined operating point solutions, transient analysis, and various small-signal analyses with the circuit elements and device models needed to successfully simulate many circuits. Analyses SPICE2 includes these analyses: AC analysis (linear small-signal frequency domain analysis) DC analysis (nonlinear quiescent point calculation) DC transfer curve analysis (a sequence of nonlinear operating points calculated while sweeping an input voltage or current, or a circuit parameter) Noise analysis (a small signal analysis done using an adjoint matrix technique which sums uncorrelated noise currents at a chosen output point) Transfer function analysis (a small-signal input/output gain and impedance calculation) Transient analysis (time-domain large-signal solution of nonlinear differential algebraic equations) Since SPICE is generally used to model circuits with nonlinear elements, the small signal analyses are necessarily preceded by a quiescent point calculation at which the circuit is linearized. SPICE2 also contains code for other small-signal analyses: sensitivity analysis, pole-zero analysis, and small-signal distortion analysis. Analysis at various temperatures is done by automatically updating semiconductor model parameters for temperature, allowing the circuit to be simulated at temperature extremes. Other circuit simulators have since added many analyses beyond those in SPICE2 to address changing industry requirements. Parametric sweeps were added to analyze circuit performance with changing manufacturing tolerances or operating conditions. Loop gain and stability calculations were added for analog circuits. Harmonic balance or time-domain steady state analyses were added for RF and switched-capacitor circuit design. However, a public-domain circuit simulator containing the modern analyses and features needed to become a successor in popularity to SPICE has not yet emerged. It is very important to use appropriate analyses with carefully chosen parameters. For example, application of linear analysis to nonlinear circuits should be justified separately. Also, application of transient analysis with default simulation parameters can lead to qualitatively wrong conclusions on circuit dynamics. Device models SPICE2 includes many semiconductor device compact models: three levels of MOSFET model, a combined Ebers–Moll and Gummel–Poon bipolar model, a JFET model, and a model for a junction diode. In addition, it had many other elements: resistors, capacitors, inductors (including coupling), independent voltage and current sources, ideal transmission lines, active components and voltage and current controlled sources. SPICE3 added more sophisticated MOSFET models, which were needed due to advances in semiconductor technology. In particular, the BSIM family of models were added, which were also developed at UC Berkeley. Commercial and industrial SPICE simulators have added many other device models as technology advanced and earlier models became inadequate. To attempt standardization of these models so that a set of model parameters may be used in different simulators, an industry working group was formed, the Compact Model Council, to choose, maintain and promote the use of standard models. The standard models today include BSIM3, BSIM4, BSIMSOI, PSP, HICUM, and MEXTRAM. 
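As a concrete example of the small-signal AC analysis listed above, the snippet below sweeps a first-order RC low-pass filter across frequency and prints gain and phase from its transfer function H(jw) = 1/(1 + jwRC). This is a hand calculation for a circuit simple enough to solve directly, not a use of SPICE itself; the component values (roughly a 1 kHz corner) are chosen only for illustration.

```python
import cmath, math

# Small-signal frequency sweep of an RC low-pass filter, the kind of
# linear AC analysis a simulator performs after linearizing the circuit
# at its quiescent point. Illustrative values: R = 1 kOhm, C = 159 nF
# give a corner frequency near 1 kHz.
R, C = 1e3, 159e-9

for f in (10, 100, 1_000, 10_000, 100_000):
    w = 2 * math.pi * f
    h = 1 / (1 + 1j * w * R * C)          # transfer function H(jw)
    gain_db = 20 * math.log10(abs(h))
    phase_deg = math.degrees(cmath.phase(h))
    print(f"{f:>7} Hz: {gain_db:7.2f} dB  {phase_deg:7.2f} deg")
```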
Spice can use device models from foundry PDKs. Input and output: Netlists, schematic capture and plotting SPICE2 takes a text netlist as input and produces line-printer listings as output, which fits with the computing environment in 1975. These listings are either columns of numbers corresponding to calculated outputs (typically voltages or currents), or line-printer character "plots". SPICE3 retains the netlist for circuit description, but allows analyses to be controlled from a command-line interface similar to the C shell. SPICE3 also added basic X plotting, as UNIX and engineering workstations became common. Vendors and various free software projects have added schematic capture frontends to SPICE, allowing a schematic diagram of the circuit to be drawn and the netlist to be automatically generated and transferred to various SPICE backends. Also, graphical user interfaces were added for selecting the simulations to be done and manipulating the voltage and current output vectors. In addition, very capable graphing utilities have been added to see waveforms and graphs of parametric dependencies. Several free versions of these extended programs are available. SPICE usage beyond electronic simulation As SPICE generally solves non-linear differential algebraic equations, it may be applied to simulating beyond the electrical realm. Most prominent are thermal simulations, as thermal systems may be described by lumped circuit elements mapping onto the electronic SPICE elements (heat capacity → capacitance, thermal conductance/resistance → conductance/resistance, temperature → voltage, heat flow or heat generated → current ). As thermal and electronic systems are closely linked by power dissipation and cooling systems, electro-thermal simulation today is supported by semiconductor device manufacturers offering (transistor) models with both electrical and thermal nodes. So one may obtain electrical power dissipation, resulting in self-heating causing parameter variations, and cooling system efficiency in a single simulation run. SPICE may very well simulate the electronics part of a motor drive. However it will equally well describe the electro-mechanical model of the motor. Again this is achieved by mapping mechanical onto the electrical elements (torque → voltage, angular velocity → current, coefficient of viscous friction → resistance, moment of inertia → inductance). So again the final model consists of only SPICE compatible lumped circuit elements, but one gains mechanical together with electrical data during simulation. Electromagnetic modeling is accessible to a SPICE simulator via the PEEC (partial element equivalent circuit) method. Maxwell's equations have been mapped, RLC, Skin effect, dielectric or magnetic materials and incident or radiated fields have been modelled. However, as of 2019, SPICE cannot be used to "simulate photonics and electronics together in a photonic circuit simulator", and thus it is not yet considered as a test simulator for photonic integrated circuits. Micro-fluidic circuits have been modelled with SPICE by creating a pneumatic FET. SPICE has been applied to model the interface between biological and electronic systems, e.g. as a design tools for synthetic biology and for the virtual prototyping of biosensors and lab-on-chip. SPICE has been applied in operations research to evaluate perturbed supply chains.
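The thermal analogy described in this section (heat capacity → capacitance, thermal resistance → resistance, temperature → voltage, heat flow → current) can be made concrete with a one-node lumped model. The sketch below computes the junction temperature of a device dissipating constant power into a single thermal resistance and capacitance, which under the mapping is just an RC charging circuit; all numbers are invented for illustration, and real electro-thermal device models from vendors are considerably more detailed.

```python
import math

# One-node electro-thermal analogy: a device dissipating P watts into a
# junction-to-ambient thermal resistance R_TH (K/W) and thermal
# capacitance C_TH (J/K). Mapping heat flow -> current and temperature
# -> voltage, this is an RC charging circuit, so the temperature rise is
# P*R_TH*(1 - exp(-t/(R_TH*C_TH))). Made-up values for illustration.
P, R_TH, C_TH = 2.0, 50.0, 0.1      # W, K/W, J/K -> thermal tau = 5 s
T_AMBIENT = 25.0                    # degrees C

for t in (0.0, 1.0, 5.0, 15.0, 60.0):
    rise = P * R_TH * (1.0 - math.exp(-t / (R_TH * C_TH)))
    print(f"t = {t:5.1f} s: junction at {T_AMBIENT + rise:6.2f} C")
```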
Technology
Electronics: General
null
180279
https://en.wikipedia.org/wiki/Ounce
Ounce
The ounce () is any of several different units of mass, weight, or volume and is derived almost unchanged from the , an Ancient Roman unit of measurement. The avoirdupois ounce (exactly ) is avoirdupois pound; this is the United States customary and British imperial ounce. It is primarily used in the United States. Although the avoirdupois ounce is the mass measure used for most purposes, the 'troy ounce' of exactly is used instead for the mass of precious metals such as gold, silver, platinum, palladium, rhodium, etc. The term 'ounce' is also used in other contexts: The ounce-force is a measure of force (see below). The fluid ounce is a measure of volume. Historically, a variety of different ounces measuring mass or volume were used in different jurisdictions by different trades and at different times in history. Etymology Ounce derives from the Ancient Roman (meaning: a twelfth), a unit in the Ancient Roman units of measurement weighing about 27.4 grams or 96.7% of an avoirdupois ounce, that was one-twelfth () of the Roman pound (). This in turn comes from Latin ('one'), and thus originally meant simply 'unit'. The term uncia was borrowed twice: first into Old English as or from an unattested Vulgar Latin form with ts for c before i (palatalization), which survives in modern English as inch, and a second time into Middle English through Anglo-Norman and Middle French (), yielding English ounce. The abbreviation oz came later from the Italian cognate , pronounced (now , pronounced ). Definitions Historically, in different parts of the world, at different points in time, and for different applications, the ounce (or its translation) has referred to broadly similar but still slightly different standards of mass. Currently in use International avoirdupois ounce The international avoirdupois ounce (abbreviated oz) is defined as exactly 28.349523125 g under the international yard and pound agreement of 1959, signed by the United States and countries of the Commonwealth of Nations. In the avoirdupois system, sixteen ounces make up an avoirdupois pound, and the avoirdupois pound is defined as 7000 grains; one avoirdupois ounce is therefore equal to 437.5 grains. The ounce is still a standard unit in the United States. In the United Kingdom it ceased to be an independent unit of measure in 2000, but may still be seen as a general indicator of portion sizes in burger and steak restaurants. International troy ounce A troy ounce (abbreviated oz t) is equal to 480 grains. Consequently, the international troy ounce is equal to exactly 31.1034768 grams. There are 12 troy ounces in the now obsolete troy pound. Today, the troy ounce is used only to express the mass of precious metals such as gold, platinum, palladium, rhodium or silver. Bullion coins are the most common products produced and marketed in troy ounces, but precious metal bars also exist in gram and kilogram (kg) sizes. (A kilogram bullion bar contains .) For historical measurement of gold, a fine ounce is a troy ounce of pure gold content in a gold bar, computed as fineness multiplied by gross weight a standard ounce is a troy ounce of 22 carat gold, 91.66% pure (an 11 to 1 proportion of gold to alloy material) Metric ounces Some countries have redefined their ounces in the metric system. For example, the German apothecaries' ounce of 30 grams is very close to the previously widespread Nuremberg ounce, but the divisions and multiples come out in metric. In 1820, the Dutch redefined their ounce (in Dutch, ons) as 100 grams. 
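The grain-based definitions above can be checked with a few lines of arithmetic. The sketch below uses the exact post-1959 values quoted in this article; the grain figure of 0.06479891 g is not stated here but follows from the 7000-grain avoirdupois pound of 453.59237 g.

```python
# Bookkeeping for the ounce definitions quoted above (exact values).
GRAIN_G = 0.06479891          # one grain in grams (453.59237 g / 7000)
AVDP_OZ_G = 28.349523125      # international avoirdupois ounce in grams
TROY_OZ_G = 31.1034768        # international troy ounce in grams

print(437.5 * GRAIN_G)        # 28.349523125 -> the avoirdupois ounce
print(480 * GRAIN_G)          # 31.1034768   -> the troy ounce
print(16 * AVDP_OZ_G)         # 453.59237    -> the avoirdupois pound in grams
print(TROY_OZ_G / AVDP_OZ_G)  # ~1.097: a troy ounce is about 10% heavier
```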
In 1937 the IJkwet of the Netherlands officially abolished the term, but it is still commonly used. Dutch amendments to the metric system, such as an ons or 100 grams, has been inherited, adopted, and taught in Indonesia beginning in elementary school. It is also listed as standard usage in Indonesia's national dictionary, the Kamus Besar Bahasa Indonesia, and the government's official elementary-school curriculum. Historical Apothecaries' ounce The apothecaries' ounce (abbreviated ℥) equivalent to the troy ounce, was formerly used by apothecaries, and is thus obsolete. Maria Theresa ounce "Maria Theresa ounce" was once introduced in Ethiopia and some European countries, which was equal to the weight of one Maria Theresa thaler, or 28.0668 g. Both the weight and the value are the definition of one birr, still in use in present-day Ethiopia and formerly in Eritrea. Spanish ounce The Spanish pound () was 460 g. The Spanish ounce (Spanish ) was of a pound, i.e. 28.75 g. It was further subdivided into 16 (each 1.8 grams). For pharmaceutical use, the Greek was used, subdividing the Spanish ounce into 8 (3.6 grams), due to being equivalent to of an avoirdupois ounce. In either case, it could be further subdivided into grains, each one 49.9 milligrams. Tower ounce The Tower ounce of was a fraction of the tower pound used in the English mints, the principal one being in the Tower of London. It dates back to the Anglo-Saxon coinage weight standard. It was abolished in favour of the Troy ounce by Henry VIII in 1527. Ounce-force An ounce-force is of a pound-force, or about . It is defined as the force exerted by a mass of one avoirdupois ounce under standard gravity (at the surface of the earth, its weight). The "ounce" in "ounce-force" is equivalent to an avoirdupois ounce; ounce-force is a measurement of force using avoirdupois ounces. It is customarily not identified or differentiated. The term has limited use in engineering calculations to simplify unit conversions between mass, force, and acceleration systems of calculations. Fluid ounce A fluid ounce (abbreviated fl oz, fl. oz. or oz. fl.) is a unit of volume. An imperial fluid ounce is defined in British law as exactly 28.4130625 millilitres, while a US customary fluid ounce is exactly 29.5735295625 mL, and a US food labelling fluid ounce is 30 mL. The fluid ounce is sometimes referred to simply as an "ounce" in contexts where its use is implicit, such as bartending. Other uses Fabric weight Ounces are also used to express the "weight", or more accurately the areal density, of a textile fabric in North America, Asia, or the UK, as in "16 oz denim". The number refers to the weight in ounces of a given amount of fabric, either a yard of a given width, or a square yard, where the depth of the fabric is a fabric-specific constant. Copper layer thickness of a printed circuit board The most common unit of measure for the copper thickness on a printed circuit board (PCB) is ounces (oz), as in mass. It is the resulting thickness when the mass of copper is pressed flat and spread evenly over a one-square-foot area. 1 oz will roughly equal 34.7 μm.
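For the three present-day fluid-ounce definitions given in this section, a short conversion makes the differences tangible. The snippet below scales each to a nominal 12 fl oz serving; the serving size is arbitrary and chosen only for illustration, while the per-ounce values are the exact definitions quoted above.

```python
# The three current fluid-ounce definitions, in millilitres, applied to
# a nominal 12 fl oz serving (serving size chosen only for illustration).
IMPERIAL_ML = 28.4130625
US_CUSTOMARY_ML = 29.5735295625
US_FOOD_LABEL_ML = 30.0

for name, ml in (("imperial", IMPERIAL_ML),
                 ("US customary", US_CUSTOMARY_ML),
                 ("US food labelling", US_FOOD_LABEL_ML)):
    print(f"12 fl oz ({name}) = {12 * ml:.3f} mL")
```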
Physical sciences
Mass
null
180545
https://en.wikipedia.org/wiki/Daikon
Daikon
Daikon or mooli, Raphanus sativus var. longipinnatus, is a mild-flavored winter radish usually characterized by fast-growing leaves and a long, white, root. Originally native to continental East Asia, daikon is harvested and consumed throughout the region, as well as in South Asia, and is available internationally. In some locations, daikon is planted for its ability to break up compacted soils and recover nutrients and is not harvested. Names In culinary contexts, daikon () or are the most common names in all forms of English. Historical ties to South Asia permit mooli () as a general synonym in English. The generic terms white radish, winter radish, Oriental radish, long white radish, and other terms are also used. Other synonyms usually vary by region or describe regional varieties of the vegetable. When it is necessary to distinguish the usual Japanese form from others, it is sometimes known as Japanese radish. The vegetable's Chinese names are still uncommon in English. In most forms of Chinese cuisine, it is usually known as bái luóbo (white radish). Although in Cantonese and Malaysian cuisine, it is encountered as lobak or lo pak, which are Cantonese pronunciations of the general Chinese term for "radish" or "carrot" (). In the cuisines of Hokkien and Teochew-speaking areas such as Singapore, Thailand, and Taiwan, it is also known as chai tow or chai tau (). Any of these may be referred to as "radish," with the regional variety implied by context. In English-speaking countries, it is also sometimes marketed as icicle radish. In mainland China and Singapore, the calque white carrot or misnomer carrot is sometimes used, owing to the similarity of the vegetables' names in Mandarin and Hokkien. This variant inspired the title for a popular guidebook on Singaporean street food, There's No Carrot in Carrot Cake, which refers to chai tow kway, a kind of cake made from daikon. In North America, it is primarily grown not for food but as a fallow crop, with the roots left unharvested to prevent soil compaction; the leaves (if harvested) are used as animal fodder. The official general name used by the United States Department of Agriculture is oilseed radish, but this is only used in non-culinary contexts. Other English terms employed when daikon is used as animal feed or as a soil ripper are "forage radish", "fodder radish", and "tillage radish". In Hong Kong, the misnomer turnip is also used. This name lends its name to the dish "turnip cake". Varieties Several nonwhite varieties occur. The Cantonese lobak, lo pak, etc., sometimes refer to the usual Chinese form but is also applied to a form of daikon with a light green coloration of the top area of the root around the leaves. The Korean radish, also called mu, has a similar pale green shade halfway down from the top and are generally shorter, stouter, and sturdier, with denser flesh and softer leaves. Both are often spicier than the long white radishes. The heirloom watermelon radish is another Chinese variety of daikon with a dull green exterior but a bright rose or fuchsia-colored center. Its Chinese name xīnlǐměi luóbó) is sometimes irregularly romanized as the shinrimei radish and sometimes translated as the "beauty heart," "beautiful heart inside," or "roseheart" radish. Cultivation The Chinese and Indian varieties tolerate higher temperatures than the Japanese ones. These varieties also grow well at lower elevations in East Africa. If moisture is abundant, it can grow quickly; otherwise, the flesh becomes overly tough and pungent. 
The variety Long White Icicle is available as seed in Britain and will grow very successfully in Southern England, producing roots resembling a parsnip by midsummer in good garden soil in an average year. The roots can be stored for weeks without the leaves if lifted and kept in a cool, dry place. If left in the ground, the texture tends to become woody, but the storage life of whole untreated roots is not long. Certain varieties of daikon can be grown as a winter cover crop and green manure. These varieties are often named "tillage radish" because the plant grows a huge, penetrating root that effectively performs deep cultivation. The roots bring nutrients lower in the soil profile up into the higher reaches and are good nutrient scavengers, so they are good partners with legumes instead of grasses; if harsh winters occur, the root will decompose while in the soil, releasing early nitrogen stores in the spring. Culinary uses Japan In Japan, many types of pickles are made with daikon roots, including takuan and bettarazuke. Daikon roots can be served raw, in salads, or as sashimi'''s , which is prepared by meticulous . is frequently used as a garnish, often mixed into various dippings such as ponzu, a soy sauce and citrus juice condiment. The pink spicy is daikon grated with chili pepper. Simmered dishes are also popular such as oden. Daikon that has been shredded and dried (a common method of preserving food in Japan) is called . Daikon radish sprouts () are used raw for salad or garnishing sashimi. Daikon leaves are frequently eaten as a green vegetable. They are thorny when raw, so softening methods such as pickling and stir frying are common. The daikon leaf is one of the Festival of Seven Herbs, where it is called suzushiro. China In Chinese cuisine, turnip cake and chai tow kway are made with daikon. The variety called mooli has a high water content, and some cookbooks recommend salting (or sweetening, depending on the region and context) and draining it before it is cooked. Sometimes, mooli is used as a medium for elaborately carved garnishes. More commonly, daikon is referred as bailuobo (白蘿蔔) in Mandarin or lobak in Cantonese. Bailuobo is used in various dishes for its unique and mild flavour after being boiled and cooked. For soups, bailuobo can be seen in daikon and pork rib soup (白蘿蔔排骨湯), daikon and tomato soup (白蘿蔔番茄湯), daikon and tofu soup (白蘿蔔豆腐湯), etc. Delicacies such as "shredded daikon" (白蘿蔔絲) and "cut daikon" (白蘿蔔塊) are popular domestic dishes too. Similar to Japanese cuisine, there are many types of pickles (in Mandarin Chinese: 咸菜 xiáncài / 榨菜 zhàcài) made with daikon, for example, "sour-sweet cut daikon" (酸甜白蘿蔔塊), "spicy daikon" (麻辣白蘿蔔), daikon zhacai (白蘿蔔榨菜), etc. India In North India, daikon is a popular ingredient used to make sabzi, stuffed paranthas, pakodas, salads, pickles, and as garnish. The plant's leaves are used to make dal and kadhi, among other dishes. In South India, daikon is the principal ingredient in a variety of sambar, in which roundels of the radish are boiled with onions, tamarind pulp, lentils, and a special spice powder. When cooked, it can release a very strong odor. This soup, called mullangi sambar (, ; literally, "radish sambar") is very popular and is often mixed with rice. Vietnam In Vietnamese cuisine, sweet and sour pickled daikon and carrots ( or đồ chua) are a common condiment in bánh mì sandwiches. Philippines In the Philippines, the sour stew sinigang may include daikon. Daikon is known locally as labanos. 
Pakistan In Pakistani cuisine, the young leaves of the daikon plant are boiled and flash-fried with a mixture of heated oil, garlic, ginger, red chili, and various spices. The radish is eaten as a fresh salad, often seasoned with either salt and pepper or chaat masala. In Punjab province, daikon is used to stuff pan-fried breads known as paratha. Daikon's seed pods called moongray in local languages, are also eaten as a stir-fried dish across the country. Bangladesh In Bangladesh, fresh daikon is often finely grated and mixed with fresh chili, coriander, flaked steamed fish, lime juice, and salt. This light, refreshing preparation served alongside meals is known as mulo bhorta''. Taiwan In Taiwanese cuisine, both the root and the stems/leaves of the daikon are consumed. South Korea In South Korea, daikon radish is often used in kimchi, a traditional fermented dish. Kimchi is most commonly eaten as a side dish with rice, among other dishes. It is most commonly made with daikon radish, carrots, scallions, and other easily fermented vegetables. Gallery Nutrition Raw daikon is 95% water, 4% carbohydrates, and less than 1% each of protein and fat (table). In a reference amount of , raw daikon supplies 18 calories and is a rich source (20% or more of the Daily Value, DV) of vitamin C (27% DV), with no other micronutrients in significant content (table). Agricultural use Tillage radish leaves behind a cavity in the soil when the large taproot decays, making it easier for the following year's crops, such as potatoes, to bore deeper into the soil. Potatoes grown in a rotation with tillage radish do not experience growth restrictions associated with having a shallow hardpan soil, as the tillage radish can break the hardpan, making the transfer of water and other important nutrients much easier for the root system. Nutrient retention is another important feature of tillage radish. The large taproot is used to retain macro- and micro-nutrients that would otherwise have the potential to be lost to leaching during the time when the field would otherwise be left empty. The nutrients from the root become readily available for the following year's crop upon the decay of the radish, which can boost yields and reduce fertilizer costs. Daikons are also used as a forage worldwide. As a forage, they also have the side benefit of weed suppression. Although used elsewhere for much longer, daikon as a forage is a recent introduction in Massachusetts field practice. Other use Daikon is used in preparing metal surfaces for chemical patination, for example, under the Rokushō process.
Biology and health sciences
Brassicales
null
180638
https://en.wikipedia.org/wiki/Baikonur%20Cosmodrome
Baikonur Cosmodrome
The Baikonur Cosmodrome is a spaceport operated by Russia within Kazakhstan. Located in the Kazakh city of Baikonur, it is the largest operational space launch facility in terms of area. All Russian crewed spaceflights are launched from Baikonur. Situated in the Kazakh Steppe, some above sea level, it is to the east of the Aral Sea and north of the Syr Darya. It is close to Töretam, a station on the Trans-Aral Railway. Russia, as the official successor state to the Soviet Union, has retained control over the facility since 1991; it originally assumed this role through the post-Soviet Commonwealth of Independent States (CIS), but ratified an agreement with Kazakhstan in 2005 that allowed it to lease the spaceport until 2050. It is jointly managed by Roscosmos and the Russian Aerospace Forces. In 1955, the Soviet Ministry of Defence issued a decree and founded the Baikonur Cosmodrome. It was originally built as the chief base of operations for the Soviet space program. The Cosmodrome served as the launching point for Sputnik 1 and Vostok 1. The launchpad used for both missions was renamed "Gagarin's Start" in honour of Soviet cosmonaut Yuri Gagarin, who piloted Vostok 1 and became the first human in outer space. Under the current Russian management, Baikonur remains a busy spaceport, with numerous commercial, military, and scientific missions being launched annually. History Soviet era The Soviet government issued Scientific Research Test Range No. 5 (NIIP-5; ) on 12 February 1955. It was actually founded on 2 June 1955, originally a test center for the world's first intercontinental ballistic missile (ICBM), the R-7 Semyorka. NIIP-5 was soon expanded to include launch facilities for space flights. The site was selected by a commission led by General Vasily Voznyuk, influenced by Sergey Korolyov, the Chief Designer of the R-7 ICBM, and soon the man behind the Soviet space program. It had to be surrounded by plains, as the radio control system of the rocket required (at the time) receiving uninterrupted signals from ground stations hundreds of kilometres away. Additionally, the missile trajectory had to be away from populated areas. Also, it is advantageous to place space launch sites closer to the equator, as the surface of the Earth has higher rotational speed in such areas. Taking these constraints into consideration, the commission chose Tyuratam, a village in the heart of the Kazakh Steppe. The expense of constructing the launch facilities and the several hundred kilometres of new road and train lines made the Cosmodrome one of the most costly infrastructure projects undertaken by the Soviet Union. A supporting town was built around the facility to provide housing, schools, and infrastructure for workers. It was raised to city status in 1966 and named Leninsk (). The American U-2 high-altitude reconnaissance plane found and photographed the Tyuratam missile test range for the first time on 5 August 1957. In April of 1975, in preparation for the Apollo-Soyuz Test Project, the first NASA astronauts were allowed to tour the cosmodrome. Upon their return to the United States, the crews commented that on their evening flight to Moscow they had seen lights on launch pads and related complexes for more than 15 minutes, and according to astronaut Thomas Stafford, "that makes Cape Kennedy look very small." 
Name According to most sources, the name Baikonur was deliberately chosen in 1961 (around the time of Gagarin's flight) to misdirect the Western Bloc to a place northeast of the launch center, the small mining town and railway station of Baikonur near Jezkazgan. Leninsk, the closed city built to support the cosmodrome, was renamed Baikonur on 20 December 1995 by Boris Yeltsin. According to NASA's history of the Apollo-Soyuz Test Project, the name Baikonur was not chosen to misdirect, but was the name of the Tyuratam region before the establishment of the cosmodrome. Environmental impact Russian scientist Afanasiy Ilich Tobonov researched mass animal deaths in the 1990s and concluded that the mass deaths of birds and wildlife in the Sakha Republic were noted only along the flight paths of space rockets launched from the Baikonur cosmodrome. Dead wildlife and livestock were usually incinerated, and the participants in these incinerations, including Tobonov himself, his brothers and inhabitants of his native village of Eliptyan, commonly died from stroke or cancer. In 1997, the Ministry of Defense of the Russian Federation changed the flight path and removed the ejected rocket stages near Nyurbinsky District, Russia. Data collected in the scientific literature indicate adverse effects of rocket launches on the environment and the health of the local population. UDMH, a fuel used in some Russian rocket engines, is highly toxic and is one of the causes of acid rain and of cancers among people living near the cosmodrome. Valery Yakovlev, head of the laboratory of ecosystem research of the State scientific-production union of applied ecology "Kazmechanobr", notes: "Scientists have established the extreme character of the destructive influence of the 'Baikonur' space center on environment and population of the region: 11 000 tons of space scrap metal, polluted by especially toxic UDMH is still laying on the falling grounds". Scrap recovery is part of the local economy. Importance Many historic flights lifted off from Baikonur: the first operational ICBM; the first man-made satellite, Sputnik 1, on 4 October 1957; the first spacecraft to travel close to the Moon, Luna 1, on 2 January 1959; the first crewed orbital flight, by Yuri Gagarin, on 12 April 1961; and the flight of the first woman in space, Valentina Tereshkova, in 1963. Fourteen cosmonauts from 13 other nations, including Czechoslovakia, East Germany, India and France, have also launched from Baikonur under the Interkosmos program. In 1960, a prototype R-16 ICBM exploded before launch, killing over 100 people. Baikonur is also the site from which Venera 9 and Mars 3 were launched. Post-Soviet era Following the dissolution of the Soviet Union in 1991, the Russian space program continued to operate from Baikonur under the auspices of the Commonwealth of Independent States. Russia wanted to sign a 99-year lease for Baikonur, but agreed to a US$115 million annual lease of the site for 20 years with an option for a 10-year extension. On 8 June 2005, the Russian Federation Council ratified an agreement between Russia and Kazakhstan extending Russia's rent term of the spaceport until 2050. The rent price, which has remained fixed since then, is the source of a long-running dispute between the two countries. In an attempt to reduce its dependency on Baikonur, Russia built the Vostochny Cosmodrome in Amur Oblast. 
Baikonur has been a major part of Russia's contribution to the International Space Station (ISS), as it is the only spaceport from which Russian missions to the ISS are launched. It is primarily the border's position (but to a lesser extent Baikonur's position at about the 46th parallel north) that led to the 51.6° orbital inclination of the ISS; the lowest inclination that can be reached by Soyuz boosters launched from Baikonur without flying over China. With the conclusion of NASA's Space Shuttle program in 2011, Baikonur became the sole launch site used for crewed missions to the ISS until the launch of Crew Dragon Demo-2 in 2020. In 2019, Gagarin's Start hosted three crewed launches, in March, July and September, before being shut down for modernisation for the new Soyuz-2 rocket with a planned first launch in 2023. The final launch from Gagarin's Start took place 25 September 2019. Gagarin's Start failed to receive funding (in part due to Russian invasion of Ukraine) to modernize it for the slightly larger Soyuz-2 rocket. In 2023, it was announced that the Russian and Kazakhstan authorities plan to deactivate the site as a space launch pad and turn it into a museum (in part for tourism purposes). On 7 March 2023, the Kazakh government seized control of the Baiterek launch complex, one of the launch sites at Baikonur Cosmodrome, banning numerous Russian officials from leaving the country and preventing the liquidation of assets by Roscosmos. One of the reasons for the seizure was due to Russia failing to pay a $29.7 million debt to the Kazakh government. The seizure comes after Russia's relations with Kazakhstan became tense due to its ongoing invasion of Ukraine. Features Baikonur is fully equipped with facilities for launching both crewed and uncrewed spacecraft. It has supported several generations of Russian spacecraft: Soyuz, Proton, Tsyklon, Dnepr, Zenit and Buran. Downrange from the launchpad, spent launch equipment is dropped directly on the ground in the Russian far east where it is salvaged by the workers and the local population. 
List of launchpads Pad 1/5 (Gagarin's Start) (1957–2019): R-7, Vostok, Voskhod, Molniya, Soyuz – Pad 31/6: R-7A, Vostok, Voskhod, Molniya, Soyuz, Soyuz-2 – Pad 41/3: R-16 (Destroyed in 1960 explosion) – Pad 41/4 : R-16 (1961–67) – Pad 41/15: R-16, Kosmos 3 (1963–68) – Pad 45/1: Zenit-2, Zenit-2M, Zenit-3M – Pad 45/2 (Destroyed in 1990 explosion): Zenit 2 – Pad 51: R-9 (1961–62) – Pad 60/6: R-16 (1963–66) – Pad 60/7: R-16 (1963–67) – Pad 60/8: R-16 (1962–66) – Pad 67/21: Tsyklon, R-36M, R-36O, MR-UR-100 Sotka (1963–72) – Pad 67/22: Tsyklon, R-36, R-36O (1964–66) – Pad 69: Tsyklon-2 Pad 70 (Destroyed in 1963 explosion): R-9 – Pad 75: R-9 – Pad 80/17: Tsyklon (1965) – Pad 81/23 (81L) (inactive >2004): Proton-K – Pad 81/24 (81P): Proton-K, Proton-M – Pad 90/19 (90L) (inactive >1997): UR-200, Tsyklon-2 – Pad 90/20 (90R) (inactive >2006): UR-200, Tsyklon-2 – Pad 101: R-36M (1973–76) – Pad 102: R-36M (1978) – Pad 103: R-36M (1973–77) – Pad 104: R-36M (1972–74) – Pad 105: R-36M (1974–77) – Pad 106: R-36M (1974–83) – Pad 107: R-36 – Pad 108: R-36 – Pad 109/95: R-36M, Dnepr – Pad 110/37 (110L) (inactive >1988): N-1, Energia-Buran – Pad 110/38 (110R) (inactive >1969): N-1 – Pad 130: UR-100 (1965) – Pad 131: UR-100N, UR-100, Rokot (1965–90) – Pad 132: UR-100NU (2001–02) – Pad 140/18: R-36 (1965–78) – Pad 141: R-36 – Pad 142/34: R-36 (three silo complex) – Pad 160: R-36O – Pad 161/35: Tsyklon (1967–73) – Pad 162/36: Tsyklon (1966–75) – Pad 163: R-36O – Pad 164: R-36O – Pad 165: R-36O – Pad 170: UR-MR-100 (1976–79) – Pad 171: UR-100, UR-100N – Pad 172: UR-MR-100 (1978–81) – Pad 173: UR-MR-100 (1972–78) – Pad 174: UR-100, UR-100K – Pad 175/2: UR-100NU, Rokot, Strela – Pad 175/59: Rokot, Strela – Pad 176: UR-100 – Pad 177: UR-MR-100, UR-MR-100U (1973–78) – Pad 178: UR-100 – Pad 179: UR-100 – Pad 181: UR-MR-100U (1978–79) – Pad 191/66: R-36O (1969–71) – Pad 192: R-36O – Pad 193: R-36O – Pad 194: R-36O – Pad 195: R-36O – Pad 196: R-36O – Pad 200/39 (200L): Proton-M/Proton-K – Pad 200/40 (200R): Proton-K (inactive >1991) – Pad 241: R-36O – Pad 242: R-36O – Pad 243: R-36O – Pad 244: R-36O – Pad 245: R-36O – Pad 246: R-36O – Pad 250 (inactive >1987): Energia – Buran facilities As part of the Buran programme, several facilities were adapted or newly built for the Buran-class space shuttle orbiters: Site 110 – Used for the launch of the Buran-class orbiters. Like the assembly and processing hall at Site 112, the launch complex was originally constructed for the Soviet lunar landing program and later converted for the Energia-Buran program. Site 112 – Used for orbiter maintenance and to mate the orbiters to their Energia launchers (thus fulfilling a role similar to the VAB at KSC). The main hangar at the site, called MIK RN or MIK 112, was originally built for the assembly of the N1 Moon rocket. After cancellation of the N-1 program in 1974, the facilities at Site 112 were converted for the Energia-Buran program. It was here that orbiter 1K was stored after the end of the Buran program and was destroyed when the hangar roof collapsed in 2002. Site 251 – Used as Buran orbiter landing facility, also known as Yubileyniy Airfield (and fulfilling a role similar to the SLF at KSC). It features one runway, called 06/24, which is long and wide, paved with "Grade 600" high quality reinforced concrete. 
At the edge of the runway were two special mate–demate devices; PUA-100 was designed to lift Buran orbiters and complete Energia stages onto the Antonov An-225 Mriya carrier aircraft and the smaller PKU-50 was used with the Myasishchev VM-T Atlant and incomplete orbiters or segments of the Energia core stage. After arrival on one of the transport aircraft, an orbiter was loaded onto a transporter, which would carry the orbiter to the processing building at Site 254. A purpose-built orbiter landing control facility, housed in a large multi-store office building, was located near the runway. Yubileyniy Airfield was also used to receive heavy transport planes carrying elements of the Energia-Buran system. After the end of the Buran program, Site 251 was abandoned but later reopened as a commercial cargo airport. Besides serving Baikonur, Kazakh authorities also use it for passenger and charter flights from Russia. Site 254 – Built to service the Buran-class orbiters between flights (thus fulfilling a role similar to the OPF at KSC). Constructed in the 1980s as a special four-bay building, it also featured a large processing area flanked by several floors of test rooms. After cancellation of the Buran program it was adapted for pre-launch operations of the Soyuz and Progress spacecraft. Intra-site railway All Baikonur's logistics are based on its own intra-site gauge railway network, which is the largest industrial railway on the planet. The railway is used for all stages of launch preparation, and all spacecraft are transported to the launchpads by the special Schnabel cars. Once part of the Soviet Railroad Troops, the Baikonur Railway is now served by a dedicated civilian state company. There are several rail links connecting the Baikonur Railway to the public railway of Kazakhstan and the rest of the world. On-site airports The Baikonur Cosmodrome has two on-site multi-purpose airports, serving both the personnel transportation needs and the logistics of space launches (including the delivery of the spacecraft by planes). There are scheduled passenger services from Moscow to the smaller Krayniy Airport , which however are not accessible to the public. The larger Yubileyniy Airport (Юбилейный аэропорт) was where the Buran orbiter was transported to Baikonur on the back of the Antonov An-225 Mriya cargo aircraft. ICBM testing Although Baikonur has always been known around the world as the launch site of Soviet and Russian space missions, from its outset in 1955 and until the collapse of the USSR in 1991 the primary purpose of this center was to test liquid-fueled ballistic missiles. The official (and secret) name of the center was State Test Range No. 5 or 5 GIK. It remained under the control of the Soviet and Russian Ministry of Defense until the second half of the 1990s, when the Russian civilian space agency and its industrial contractors started taking over individual facilities. In 2006, the head of Roscosmos, Anatoly Perminov, said that the last Russian military personnel would be removed from the Baikonur facility by 2007. However, on 22 October 2008, an SS-19 Stiletto missile was test-fired from Baikonur, indicating this may not be the case. Future projects On 22 December 2004, Kazakhstan and Russia signed a contract establishing the "Russia–Kazakhstan Baiterek JV" joint venture, in which each country holds a 50% stake. The goal of the project was the construction of the Bayterek ("poplar tree") space launch complex, to facilitate operations of the Russian Angara rocket launcher. 
This was anticipated to allow launches with a payload of 26 tons to low Earth orbit, compared to 20 tons using the Proton system. An additional benefit would be that the Angara uses kerosene as fuel and oxygen as the oxidiser, which is less hazardous to the environment than the toxic fuels used by older boosters. The total expenditure on the Kazakh side was expected to be US$223 million over 19 years. As of 2010, the project was stalling due to insufficient funding, but it was thought that the project still had good chances to succeed because it would allow both parties – Russia and Kazakhstan – to continue the joint use of Baikonur even after the construction of Vostochny Cosmodrome. As of 2017, the first launch of the Baiterek Rocket and Space Complex was expected to occur in 2025. Baikonur Museum The Baikonur Cosmodrome has a small museum, next to two small cottages, once residences of the rocket engineer Sergei Korolev and the first cosmonaut, Yuri Gagarin. Both cottages are part of the museum complex and have been preserved. The museum is home to a collection of space artefacts. A restored test article from the Soviet Buran programme sits next to the museum entrance. The only completed orbiter, which flew a single orbital test mission in 1988, was destroyed in a hangar collapse in 2002. For a complete list of surviving Buran vehicles and artefacts, see Buran programme § List of vehicles. The museum also houses photographs related to the cosmodrome's history, including images of all cosmonauts. Every crew of every expedition launched from Baikonur leaves behind a signed crew photograph that is displayed behind the glass. Baikonur's museum holds many objects related to Gagarin, including the ground control panel from his flight, his uniforms, and soil from his landing site, preserved in a silver container. One of the museum rooms also holds an older version of the Soyuz descent capsule. In 2021, the Baikonur space complex was named as one of the top 10 tourist destinations in Kazakhstan. In 2023, a plan was announced to add the Gagarin's Start launch complex to the museum complex at Baikonur.
Technology
Programs and launch sites
null
180735
https://en.wikipedia.org/wiki/Cart
Cart
A cart or dray (Australia and New Zealand) is a vehicle designed for transport, using two wheels and normally pulled by draught animals such as horses, donkeys, mules and oxen, or even smaller animals such as goats or large dogs. A handcart is pulled or pushed by one or more people. Over time, the word "cart" has expanded to mean nearly any small conveyance, including shopping carts, golf carts, go-karts, and UTVs, without regard to number of wheels, load carried, or means of propulsion. History The history of the cart is closely tied to the history of the wheel. Carts have been mentioned in literature as far back as the second millennium B.C. The first people to use the cart may have been Mesopotamians or early Eastern Europeans, such as the Yamnaya Culture (See history of the wheel for more information). Handcarts pushed by humans have been used around the world. Carts were often used for judicial punishments, both to transport the condemned – a public humiliation in itself (in Ancient Rome defeated leaders were often carried in the victorious general's triumph) – and even, in England until its substitution by the whipping post under Queen Elizabeth I, to tie the condemned to the cart-tail (the back part of a cart) and administer him or her a public whipping. Tumbrils were commonly associated with the French Revolution as a mobile stage elevating the condemned on the way to the guillotine: this was simply a continuation of earlier practice when they were used as the removable support in the gallows, before Albert Pierrepoint calculated the precise drop needed for instant severance of the spinal column. Human-powered carts Of the cart types not animal-drawn, perhaps the most common example today is the shopping cart (British English: shopping trolley), which has also come to have a metaphorical meaning in relation to online purchases (here, British English uses the metaphor of the shopping basket). Shopping carts first made their appearance in Oklahoma City in 1937. In golf, both manual push or pull and electric golf trolleys are designed to carry a golfer's bag, clubs and other equipment. Also, the golf cart, car, or buggy, is a powered vehicle that carries golfers and their equipment around a golf course faster and with less effort than walking. A Porter's trolley is a type of small, hand-propelled wheeled platform. This can also be called a baggage cart. Autocarts are a type of small, hand-propelled wheeled utility carts having a pivoting base for collapsible storage in vehicles. They eliminate the need for plastic or paper shopping bags and are also used by tradespersons to carry tools, equipment or supplies. A soap-box cart (also known as a billy cart, go-cart, trolley etc.) is a popular children's construction project on wheels, usually pedaled, but also intended for a test race. Similar, but more sophisticated are modern-day pedal cart toys used in general recreation and racing. The term "go-kart" (also shortened as "kart", an alternative spelling of "cart"), has existed since 1959, and refers to a tiny race car with a frame and two-stroke engine. The old term go-cart originally meant a sedan chair or an infant walker. Other carts: Rickshaw: Transport for humans. Pushcart: a cart that is pushed by one or more persons. AV cart: a cart used to traditionally used to transport audiovisual equipment such as televisions. In more recent years, has been used as a standing desk, especially in school administration. 
Baggage cart: pushed by travelers to carry individual luggage Serving cart: also known as pushcart or go-cart, is a handcart used for serving: Food cart: a mobile kitchen that is set up on the street to facilitate the sale and marketing of street food to people from the local pedestrian traffic. Food service cart: also named serving trolley, for serving the food in a restaurant Pastry cart: for serving pastry Tea cart: also named teacart or Chai Cart, tea trolley and tea wagon, for serving tea or other drinks Animal-powered carts Larger carts may be drawn by animals, such as horses, mules, and oxen. They have been in continuous use since the invention of the wheel, in the 4th millennium BC. Carts may be named for the animal that pulls them, such as horsecart or oxcart. In modern times, horsecarts are used in competition while draft horse showing. A dogcart, however, is usually a cart designed to carry hunting dogs: an open cart with two cross-seats back to back; the dogs could be penned between the rear-facing seat and the back end. The term "cart" (synonymous in this sense with chair) is also used for various kinds of lightweight, two-wheeled carriages, some of them sprung carts (or spring carts), especially those used as open pleasure or sporting vehicles. They could be drawn by a horse, pony or dog. Examples include: Cocking cart: short-bodied, high, two-wheeled, seat for a groom behind the box; for tandem driving Dogcart: light, usually one horse, commonly two-wheeled and high, two transverse seats set back to back Donkey cart: underslung axle, two lengthwise seats; also called pony cart, tub-cart Float: a dropped axle to give an especially low load bed, for carrying heavy or unstable items such as milk churns. The name survives today as a milkfloat. Governess cart: light, two-wheeled, entered from the rear, body partly or wholly of wickerwork, seat for two persons along each side; also called governess car, tub-cart Ralli car: light, two-wheeled, horse-drawn, for two persons facing forward, or four, two facing forward and two rearward. The seat is adjustable fore-and-aft to keep the vehicle balanced for two or four people. Stolkjaerre: two-wheeled, front seat for two, rear seat for the driver; used in Norway Tax cart: spring cart, formerly subject to a small tax in England; also called taxed cart Whitechapel cart: spring cart, light, two-wheeled, especially for family or light delivery service The builder of a cart may be known as a cartwright; the surname "Carter" also derives from the occupation of transporting goods by cart or wagon. Carts have many different shapes, but the basic idea of transporting material (or maintaining a collection of materials in a portable fashion) remains. Carts may have a pair of shafts, one along each side of the draught animal that supports the forward-balanced load in the cart. The shafts are supported by a saddle on the horse. Alternatively (and normally where the animals are oxen or buffalo), the cart may have a single pole between a pair of animals. The draught traces attach to the axle of the vehicle or to the shafts. The traces are attached to a collar (on horses), to a yoke (on other heavy draught animals) or to a harness on dogs or other light animals. Traces are made from a range of materials depending on the load and frequency of use. Heavy draught traces are made from iron or steel chain. Lighter traces are often leather and sometimes hemp rope, but plaited horse-hair and other similar decorative materials can be used. 
The dray is often associated with the transport of barrels.
Technology
Animal-powered transport
null
180870
https://en.wikipedia.org/wiki/Color%20charge
Color charge
Color charge is a property of quarks and gluons that is related to the particles' strong interactions in the theory of quantum chromodynamics (QCD). Like electric charge, it determines how quarks and gluons interact through the strong force; however, rather than there being only positive and negative charges, there are three "charges", commonly called red, green, and blue. Additionally, there are three "anti-colors", commonly called anti-red, anti-green, and anti-blue. Unlike electric charge, color charge is never observed in nature: in all cases, red, green, and blue (or anti-red, anti-green, and anti-blue) or any color and its anti-color combine to form a "color-neutral" system. For example, the three quarks making up any baryon universally have three different color charges, and the two quarks making up any meson universally have opposite color charge. The "color charge" of quarks and gluons is completely unrelated to the everyday meaning of color, which refers to the frequency of photons, the particles that mediate a different fundamental force, electromagnetism. The term color and the labels red, green, and blue became popular simply because of the loose but convenient analogy to the primary colors. History Shortly after the existence of quarks was proposed by Murray Gell-Mann and George Zweig in 1964, color charge was implicitly introduced the same year by Oscar W. Greenberg. In 1965, Moo-Young Han and Yoichiro Nambu explicitly introduced color as a gauge symmetry. Han and Nambu initially designated this degree of freedom by the group SU(3), but it was referred to in later papers as "the three-triplet model". One feature of the model (which was originally preferred by Han and Nambu) was that it permitted integrally charged quarks, as well as the fractionally charged quarks initially proposed by Zweig and Gell-Mann. Somewhat later, in the early 1970s, Gell-Mann, in several conference talks, coined the name color to describe the internal degree of freedom of the three-triplet model, and advocated a new field theory, designated as quantum chromodynamics (QCD) to describe the interaction of quarks and gluons within hadrons. In Gell-Mann's QCD, each quark and gluon has fractional electric charge, and carries what came to be called color charge in the space of the color degree of freedom. Red, green, and blue In quantum chromodynamics (QCD), a quark's color can take one of three values or charges: red, green, and blue. An antiquark can take one of three anticolors: called antired, antigreen, and antiblue (represented as cyan, magenta, and yellow, respectively). Gluons are mixtures of two colors, such as red and antigreen, which constitutes their color charge. QCD considers eight gluons of the possible nine color–anticolor combinations to be unique; see eight gluon colors for an explanation. All three colors mixed together, all three anticolors mixed together, or a combination of a color and its anticolor is "colorless" or "white" and has a net color charge of zero. Due to a property of the strong interaction called color confinement, free particles must have a color charge of zero. A baryon is composed of three quarks, which must be one each of red, green, and blue colors; likewise an antibaryon is composed of three antiquarks, one each of antired, antigreen and antiblue. A meson is made from one quark and one antiquark; the quark can be any color, and the antiquark has the matching anticolor. 
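In the standard notation of QCD (not written out in this article), the "color-neutral" combinations described above can be displayed explicitly. The expressions below are the conventional singlet forms, given here as a reconstruction, with r, g, b the three colors and a bar denoting the corresponding anticolor.

```latex
% Colour-singlet ("white") combinations, standard normalisation:
% a meson pairs a colour with its anticolour, a baryon antisymmetrises
% over all three colours (epsilon is the totally antisymmetric symbol).
\text{meson:}\quad \frac{1}{\sqrt{3}}\bigl(r\bar{r}+g\bar{g}+b\bar{b}\bigr)
\qquad
\text{baryon:}\quad \frac{1}{\sqrt{6}}\,\varepsilon_{ijk}\,q_i q_j q_k
```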
The following illustrates the coupling constants for color-charged particles: Field lines from color charges Analogous to an electric field and electric charges, the strong force acting between color charges can be depicted using field lines. However, the color field lines do not arc outwards from one charge to another as much, because they are pulled together tightly by gluons (within 1 fm). This effect confines quarks within hadrons. Coupling constant and charge In a quantum field theory, a coupling constant and a charge are different but related notions. The coupling constant sets the magnitude of the force of interaction; for example, in quantum electrodynamics, the fine-structure constant is a coupling constant. The charge in a gauge theory has to do with the way a particle transforms under the gauge symmetry; i.e., its representation under the gauge group. For example, the electron has charge −1 and the positron has charge +1, implying that the gauge transformation has opposite effects on them in some sense. Specifically, if a local gauge transformation is applied in electrodynamics, then one finds (using tensor index notation): where is the photon field, and is the electron field with (a bar over denotes its antiparticle — the positron). Since QCD is a non-abelian theory, the representations, and hence the color charges, are more complicated. They are dealt with in the next section. Quark and gluon fields In QCD the gauge group is the non-abelian group SU(3). The running coupling is usually denoted by . Each flavour of quark belongs to the fundamental representation (3) and contains a triplet of fields together denoted by . The antiquark field belongs to the complex conjugate representation (3*) and also contains a triplet of fields. We can write  and  The gluon contains an octet of fields (see gluon field), and belongs to the adjoint representation (8), and can be written using the Gell-Mann matrices as (there is an implied summation over a = 1, 2, ... 8). All other particles belong to the trivial representation (1) of color SU(3). The color charge of each of these fields is fully specified by the representations. Quarks have a color charge of red, green or blue and antiquarks have a color charge of antired, antigreen or antiblue. Gluons have a combination of two color charges (one of red, green, or blue and one of antired, antigreen, or antiblue) in a superposition of states that are given by the Gell-Mann matrices. All other particles have zero color charge. The gluons corresponding to and are sometimes described as having "zero charge" (as in the figure). Formally, these states are written as and While "colorless" in the sense that they consist of matched color-anticolor pairs, which places them in the centre of a weight diagram alongside the truly colorless singlet state, they still participate in strong interactions - in particular, those in which quarks interact without changing color. Mathematically speaking, the color charge of a particle is the value of a certain quadratic Casimir operator in the representation of the particle. In the simple language introduced previously, the three indices "1", "2" and "3" in the quark triplet above are usually identified with the three colors. The colorful language misses the following point. A gauge transformation in color SU(3) can be written as , where is a matrix that belongs to the group SU(3). Thus, after gauge transformation, the new colors are linear combinations of the old colors. 
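Several formulas referred to in this section are elided from the text. In standard notation, and offered here as a reconstruction rather than the article's own equations: the gluon octet is expanded on the Gell-Mann matrices, and the two gluons "sometimes described as having zero charge" are the states built from the diagonal matrices λ3 and λ8.

```latex
% Gluon field written on the Gell-Mann matrices (implied sum over a = 1..8):
A_\mu = A_\mu^{a}\,\frac{\lambda^{a}}{2}

% The two "zero charge" gluon states, associated with the diagonal
% matrices \lambda_3 and \lambda_8:
\frac{1}{\sqrt{2}}\bigl(r\bar{r}-g\bar{g}\bigr)
\qquad\text{and}\qquad
\frac{1}{\sqrt{6}}\bigl(r\bar{r}+g\bar{g}-2\,b\bar{b}\bigr)
```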
In short, the simplified language introduced before is not gauge invariant. Color charge is conserved, but the book-keeping involved in this is more complicated than just adding up the charges, as is done in quantum electrodynamics. One simple way of doing this is to look at the interaction vertex in QCD and replace it by a color-line representation. The meaning is the following. Let represent the th component of a quark field (loosely called the th color). The color of a gluon is similarly given by , which corresponds to the particular Gell-Mann matrix it is associated with. This matrix has indices and . These are the color labels on the gluon. At the interaction vertex one has . The color-line representation tracks these indices. Color charge conservation means that the ends of these color lines must be either in the initial or final state, equivalently, that no lines break in the middle of a diagram. Since gluons carry color charge, two gluons can also interact. A typical interaction vertex (called the three gluon vertex) for gluons involves g + g → g. This is shown here, along with its color-line representation. The color-line diagrams can be restated in terms of conservation laws of color; however, as noted before, this is not a gauge invariant language. Note that in a typical non-abelian gauge theory the gauge boson carries the charge of the theory, and hence has interactions of this kind; for example, the W boson in the electroweak theory. In the electroweak theory, the W also carries electric charge, and hence interacts with a photon.
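The quark–gluon interaction vertex discussed above (its formula is elided in the text) has a standard form. The expression below is the usual QCD interaction term, written here as a reconstruction, with g the strong coupling, T^a = λ^a/2 the color generators, and i, j the quark color indices that the color-line representation tracks.

```latex
% Quark-gluon interaction term of the QCD Lagrangian (standard form);
% the colour indices i, j are the ones followed by the colour lines.
g\,\bar{q}_{i}\,\gamma^{\mu}\,(T^{a})_{ij}\,q_{j}\,A^{a}_{\mu},
\qquad T^{a}=\tfrac{\lambda^{a}}{2}
```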
Physical sciences
Quantum numbers
Physics
180930
https://en.wikipedia.org/wiki/Geology%20of%20the%20Appalachians
Geology of the Appalachians
The geology of the Appalachians dates back more than 1.2 billion years to the Mesoproterozoic era when two continental cratons collided to form the supercontinent Rodinia, 500 million years prior to the development of the range during the formation of Pangea. The rocks exposed in today's Appalachian Mountains reveal elongate belts of folded and thrust faulted marine sedimentary rocks, volcanic rocks, and slivers of ancient ocean floor—strong evidences that these rocks were deformed during plate collision. The birth of the Appalachian ranges marks the first of several mountain building plate collisions that culminated in the construction of Pangea with the Appalachians and neighboring Anti-Atlas mountains (now in Morocco) near the center. These mountain ranges likely once reached elevations similar to those of the Alps and the Rocky Mountains before they were eroded. Geological history Overview The Appalachian Mountains formed through a series of mountain-building events over the last 1.2 billion years: The Grenville orogeny began 1250 million years ago (Ma) and lasted for 270 million years. The Taconic orogeny began 450 Ma and lasted for 10 million years. The Acadian orogeny began 375 Ma and lasted 50 million years. The Alleghanian orogeny began 325 Ma and lasted 65 million years. Proterozoic era Grenville orogeny The first mountain-building tectonic plate collision that initiated the construction of what are today the Appalachian Mountains occurred during the Mesoproterozoic era at least one billion years ago when the pre-North-American craton called Laurentia collided with other continental segments, notably Amazonia. All the other cratons of the Earth also collided at about this time to form the supercontinent Rodinia, which was surrounded by one single ocean. Mountain-building referred to as the Grenville orogeny occurred along the boundaries of the cratons. The present Appalachian Mountains have at least two areas which are made from rock formations that were formed during this orogeny: the Blue Ridge Mountains and the Adirondacks. Breakup of Rodinia After the Grenville orogeny, the direction of the continental drift reversed, and Rodinia began to break up. The mountains formed during the Grenvillian era underwent erosion from weathering, glaciation, and other natural processes, resulting in the leveling of the landscape. The eroded sediments from these mountains contributed to the formation of sedimentary basins and valleys. For example, in what is now the southern United States, the Ocoee basin was formed. Seawater filled the basin. Rivers from the surrounding countryside carried clay, silt, sand, and gravel to the basin, much as rivers today carry sediment from the midcontinent region to the Gulf of Mexico. The sediment spread out in layers on the basin floor. The basin continued to subside, and over a long period of time, probably millions of years, a great thickness of sediment accumulated. Eventually, the tectonic forces pulling the two continents apart became so strong that the Iapetus Ocean formed off the eastern coast of the Laurentian margin. The rocks of the Valley and Ridge province formed over millions of years, in the Iapetus. Shells and other hard parts of ancient marine plants and animals accumulated to form limey deposits that later became limestone. This is the same process by which limestone forms in modern oceans. 
The weathering of limestone exposed at the land surface produces the lime-rich soils that are so prevalent in the fertile farmland of the Valley and Ridge province. During this continental break-up, around 600 million to 560 million years ago, volcanic activity was present along the tectonic margins. There is evidence of this activity in today's Blue Ridge Mountains. Mount Rogers, Whitetop Mountain, and Pine Mountain are all the result of volcanic activity that occurred around this time. Evidence of subsurface activity (dikes and sills intruding into the overlying rock) is present in the Blue Ridge as well. For instance, mafic rocks have been found along the Fries Fault in the central Blue Ridge area of Montgomery County, Virginia. Paleozoic era During the earliest part of the Paleozoic, the continent that would later become North America straddled the equator. The Appalachian region was a passive plate margin, not unlike today's Atlantic Coastal Plain province. During this interval, the region was periodically submerged beneath shallow seas. Thick layers of sediment and carbonate rock were deposited on the shallow sea bottom when the region was submerged. When seas receded, terrestrial sedimentary deposits and erosion dominated. During the middle Ordovician (about 458-470 million years ago), a change in plate motions set the stage for the first Paleozoic mountain building event (Taconic orogeny) in North America. The once quiet Appalachian passive margin changed to a very active plate boundary when a neighboring oceanic crust, the Iapetus, collided with and began sinking beneath the North American craton. With the creation of this new subduction zone, the early Appalachians were born. Volcanoes grew along the continental margin, coincident with the initiation of subduction. Thrust faulting uplifted and warped older sedimentary rock laid down on the passive margin. As mountains rose, erosion began to wear them down. Streams carried rock debris downslope to be deposited in nearby lowlands. Mountain building continued periodically throughout the next 250 million years (the Caledonian, Acadian, Ouachita, Hercynian, and Alleghanian orogenies). Continent after continent was thrust and sutured onto the North American craton as Pangea began to take shape. Microplates, smaller bits of crust too small to be called continents, were swept in one by one to be welded to the growing mass. By about 300 million years ago (the Pennsylvanian period), Africa was approaching the North American craton. The collisional belt spread into the Ozark-Ouachita region and through the Marathon Mountains area of Texas. Continental collisions raised the Appalachian-Ouachita chain to a lofty mountain range on the scale of the present-day Himalayas. The massive bulk of Pangea was completed near the end of the Paleozoic era (the Permian period) when Africa (Gondwana) plowed into the continental agglomeration, with the Appalachian-Ouachita mountains near the middle. Mesozoic era and later Pangea began to break up about 220 million years ago, in the early Mesozoic (late Triassic period). As Pangea rifted apart a new passive tectonic margin was born, and the forces that created the Appalachian, Ouachita, and Marathon Mountains were stilled. Weathering and erosion prevailed, and the mountains began to wear away. By the end of the Mesozoic, the Appalachian Mountains had been eroded to an almost-flat plain. It was not until the region was uplifted during the Cenozoic era that the distinctive topography of the present formed. 
Uplift rejuvenated the streams, which rapidly responded by cutting downward into the ancient bedrock. Some streams flowed along weak layers that define the folds and faults created many millions of years earlier. Other streams downcut so rapidly that they cut right across the resistant folded rocks of the mountain core, carving canyons across rock layers and geologic structures. The ridges of the Appalachian Mountain core represent erosion-resistant rock that remained after the rock above and beside it was eroded away. Physiographic provinces The geographic boundaries of the Appalachian Mountains follow a definition that accounts for all the land mass in the United States and Canada used by the US Geological Survey and the Geologic Survey of Canada using the science of physiography. The US uses the term Appalachian Highlands, and Canada uses the term Appalachian Uplands, to define contiguous regions that have similar geology, topography, history, and native plant and animal communities. (The Appalachian Mountains are not synonymous with the Appalachian Plateau, which is one of the provinces of the Appalachian Highlands). Appalachian Basin The Appalachian Basin is a foreland basin containing Paleozoic sedimentary rocks of early Cambrian through early Permian age. From north to south, the Appalachian Basin province crosses New York, Pennsylvania, eastern Ohio, West Virginia, western Maryland, eastern Kentucky, western Virginia, eastern Tennessee, northwestern Georgia, and northeastern Alabama. The northern end of the Appalachian Basin extends offshore into Lakes Erie and Ontario as far as the United States–Canada border. The province covers an area of about and is long from northeast to southwest and between wide from northwest to southeast. The northwestern flank of the basin is a broad homocline that dips gently southeastward off the Cincinnati Arch. A complexly thrust faulted and folded terrane (Appalachian Fold and Thrust Belt or Eastern Overthrust Belt), formed at the end of the Paleozoic by the Alleghanian orogeny, characterizes the eastern flank of the basin. Metamorphic and igneous rocks of the Blue Ridge Thrust Belt that bounds the eastern part of the Appalachian Basin Province were thrust westward more than over lower Paleozoic sedimentary rocks. Coal, oil, and gas production The Appalachian Basin is one of the most important coal producing regions in the U.S. and one of the largest in the world. Bituminous coal has been mined throughout the last three centuries. Currently, the coal primarily is used within the eastern U.S. or exported for electrical power generation, but some of it is suitable for metallurgical uses. Economically important coal beds were deposited primarily during Pennsylvanian time in a southeastward-thickening foreland basin. Coal and associated rocks form a clastic wedge that thickens from north to south, from Pennsylvania into southeast West Virginia and southwestern Virginia. Discovery of oil in 1859 in the Drake Well, Venango County, Pennsylvania, marked the beginning of the oil and gas industry in the Appalachian Basin. The discovery well opened a prolific trend of oil and gas fields, producing from upper Devonian, Mississippian, and Pennsylvanian sandstone reservoirs that extend from southern New York, across western Pennsylvania, central West Virginia, and eastern Ohio, to eastern Kentucky. 
A second major trend of oil and gas production in the Appalachian Basin began with the discovery in 1885 of oil and gas in lower Silurian Clinton sandstone reservoirs in Knox County, Ohio. By the late 1880s and early 1900s, the trend extended both north and south across east-central Ohio and included several counties in western New York where gas was discovered in lower Silurian Medina Group sandstones. About 1900, large oil reserves were discovered in Silurian and Devonian carbonate reservoirs in east-central Kentucky. Important gas discoveries from the lower Devonian Oriskany Sandstone in Guernsey County, Ohio, in 1924; Schuyler County, New York, in 1930; and Kanawha County, West Virginia, in 1936 opened a major gas-producing trend across parts of New York, Pennsylvania, Maryland, Ohio, West Virginia, Kentucky, and Virginia. Another drilling boom occurred in the 1960s in Morrow County, Ohio, where oil was discovered in the Upper Cambrian part of the Knox Dolomite. Crystalline Appalachians The Blue Ridge, Piedmont, Adirondack, and New England Provinces are collectively known as the Crystalline Appalachians because they consist of Precambrian and Cambrian igneous and metamorphic rocks. The Blue Ridge Thrust Belt Province underlies parts of eight states from central Alabama to southern Pennsylvania. Along its western margin, the Blue Ridge is thrust over the folded and faulted margin of the Appalachian basin, so that a broad segment of Paleozoic strata extends eastward for tens of miles, buried beneath these subhorizontal crystalline thrust sheets. At the surface, the Blue Ridge consists of a mountainous to hilly region, the main component of which are the Blue Ridge Mountains that extend from Georgia to Pennsylvania. Surface rocks consist mainly of a core of moderate-to high-rank crystalline metamorphic or igneous rocks which, because of their superior resistance to weathering and erosion, commonly rise above the adjacent areas of low-grade metamorphic and sedimentary rock. The province is bounded on the north and west by the Paleozoic strata of the Appalachian Basin and on the south by Cretaceous and younger sedimentary rocks of the Gulf Coastal Plain. It is bounded on the east by metamorphic and sedimentary rocks of the Piedmont Province. The Adirondack and New England Provinces include sedimentary, meta-sedimentary, and plutonic igneous rocks, mainly of Cambrian and Ordovician age, similar lithologically to rocks in the Blue Ridge and Piedmont Provinces to the south. The uplifted, nearly-circular Adirondack Mountains consist of a core of ancient Precambrian rocks that are surrounded by upturned Cambrian and Ordovician sedimentary rocks.
Physical sciences
Geologic features
Earth science
181146
https://en.wikipedia.org/wiki/Attack%20aircraft
Attack aircraft
An attack aircraft, strike aircraft, or attack bomber is a tactical military aircraft that has a primary role of carrying out airstrikes with greater precision than bombers, and is prepared to encounter strong low-level air defenses while pressing the attack. This class of aircraft is designed mostly for close air support and naval air-to-surface missions, overlapping the tactical bomber mission. Designs dedicated to non-naval roles are often known as ground-attack aircraft. Fighter aircraft often carry out the attack role, although they would not be considered attack aircraft per se; fighter-bomber conversions of those same aircraft would be considered part of the class. Strike fighters, which have effectively replaced the fighter-bomber and light bomber concepts, also differ little from the broad concept of an attack aircraft. The dedicated attack aircraft as a separate class existed primarily during and after World War II. The precise implementation varied from country to country, and was handled by a wide variety of designs. In the United States and Britain, attack aircraft were generally light bombers or medium bombers, sometimes carrying heavier forward-firing weapons like the North American B-25G Mitchell and de Havilland Mosquito Tsetse. In Germany and the USSR, where they were known as Schlachtflugzeug ("battle aircraft") or sturmovik ("storm trooper") respectively, this role was carried out by purpose-designed and heavily armored aircraft such as the Henschel Hs 129 and Ilyushin Il-2. The Germans and Soviets also used light bombers in this role: cannon-armed versions of the Junkers Ju 87 Stuka greatly outnumbered the Hs 129, while the Petlyakov Pe-2 was used for this role in spite of not being specifically designed for it. In the latter part of World War II, the fighter-bomber began to take over many attack roles, a transition that continued in the post-war era. Jet-powered examples were relatively rare but not unknown, such as the Blackburn Buccaneer. The U.S. Navy continued to introduce new aircraft in their A-series, but these were mostly similar to light and medium bombers. The need for a separate attack aircraft category was greatly diminished by the introduction of precision-guided munitions which allowed almost any aircraft to carry out this role while remaining safe at high altitude. Attack helicopters also have overtaken many remaining roles that could only be carried out at lower altitudes. Since the 1960s, only two dedicated attack aircraft designs have been widely introduced, the American Fairchild Republic A-10 Thunderbolt II and the Soviet/Russian Sukhoi Su-25 Frogfoot. A variety of light attack aircraft has also been introduced in the post-World War II era, usually based on adapted trainers or other light fixed-wing aircraft. These have been used in counter-insurgency operations. Definition and designations United States definition and designations U.S. attack aircraft are currently identified by the prefix A-, as in "A-6 Intruder" and "A-10 Thunderbolt II". However, until the end of World War II the A- designation was shared between attack planes and light bombers for USAAF aircraft (as opposed to B- prefix for medium or heavy bombers). The US Navy used a separate designation system and at the time preferred to call similar aircraft scout bombers (SB) or torpedo bombers (TB or BT). For example, Douglas SBD Dauntless scout bomber was designated A-24 when used by the USAAF. 
It was not until 1946 that the US Navy and US Marine Corps started using the "attack" (A) designation, renaming the BT2D Skyraider and BTM Mauler to, respectively, the AD Skyraider and AM Mauler. As with many aircraft classifications, the definition of attack aircraft is somewhat vague and has tended to change over time. Current U.S. military doctrine defines it as an aircraft which most likely performs an attack mission, more than any other kind of mission. Attack mission means, in turn, specifically tactical air-to-ground action—in other words, neither air-to-air action nor strategic bombing is considered an attack mission. In United States Navy vocabulary, the alternative designation for the same activity is a strike mission. Attack missions are principally divided into two categories: air interdiction and close air support. In the last several decades, the rise of the ubiquitous multi-role fighter has created some confusion about the difference between attack and fighter aircraft. According to the current U.S. designation system, an attack aircraft (A) is designed primarily for air-to-surface missions, also known as "attack missions" ("Attack: Aircraft designed to find, attack, and destroy land or sea targets"; AFI 16-401(I), Designating and Naming Defense Military Aerospace Vehicles, 2005), while the fighter category (F) incorporates not only aircraft designed primarily for air-to-air combat, but additionally multipurpose aircraft also designed for ground-attack missions: "F – Fighter: aircraft designed to intercept and destroy other aircraft or missiles. This includes multipurpose aircraft also designed for ground support missions such as interdiction and close air support." To mention one example among many, the F-111 "Aardvark" was designated F despite having only minimal air-to-air capabilities. Only a single aircraft in the USAF's current inventory bears a simple, unmixed "A" designation: the A-10 Thunderbolt II. Other designations British designations have included FB for fighter-bomber and more recently "G" for "Ground-attack", as in Harrier GR1 (meaning "Ground-attack/Reconnaissance, Mark 1"). Imperial Japanese Navy designations used "B" for carrier attack bombers such as the Nakajima B5N Type 97 bomber, although these aircraft were mostly used for torpedo attack and level bombing. They also used "D" to specifically designate carrier dive bombers like the Yokosuka D4Y Suisei (Francillon 1970, pp. 50–51). However, by the end of World War II the IJN had introduced the Aichi B7A Ryusei, which could perform both torpedo bombing and dive bombing, rendering the "D" designation redundant. The NATO reporting names for Soviet/Russian ground-attack aircraft at first started with "B", categorizing them as bombers, as in the case of the Il-10 'Beast'. Later, however, they were usually classified as fighters ("F"), possibly because (beginning with the Sukhoi Su-7) they were similar in size and visual appearance to Soviet fighters, or were simply derivatives of such. In the PLAAF, ground-attack aircraft are given the designation "Q". So far this has only been given to the Nanchang Q-5. History World War I The attack aircraft as a role was defined by its use during World War I, in support of ground forces on battlefields. Battlefield support is generally divided into close air support and battlefield air interdiction, the first requiring strict and the latter only general cooperation with friendly surface forces.
Such aircraft also attacked targets in rear areas. Such missions required flying where light anti-aircraft fire was expected and operating at low altitudes to precisely identify targets. Other roles, including those of light bombers, medium bombers, dive bombers, reconnaissance, fighters, fighter-bombers, could and did perform air strikes on battlefields. All these types could significantly damage ground targets from a low level flight, either by bombing, machine guns, or both. Attack aircraft came to diverge from bombers and fighters. While bombers could be used on a battlefield, their slower speeds made them extremely vulnerable to ground fire, as did the lighter construction of fighters. The survivability of attack aircraft was guaranteed by their speed/power, protection (i.e. armor panels) and strength of construction; Germany was the first country to produce dedicated ground-attack aircraft (designated CL-class and J-class). They were put into use in autumn 1917, during World War I. Most notable was the Junkers J.I, which pioneered the idea of an armored "bathtub", that was both fuselage structure and protection for engine and crew. The British experimented with the Sopwith TF series (termed "trench fighters"), although these did not see combat. The last battles of 1918 on the Western Front demonstrated that ground-attacking aircraft were a valuable component of all-arms tactics. Close support ground strafing (machine-gunning) and tactical bombing of infantry (especially when moving between trenches and along roads), machine gun posts, artillery, and supply formations was a part of the Allied armies' strength in holding German attacks and supporting Allied counter-attacks and offensives. Admittedly, the cost to the Allies was high, with the Royal Flying Corps sustaining a loss rate approaching 30% among ground-attack aircraft. 1919–1939 After World War I, it was widely believed that using aircraft against tactical targets was of little use other than in harassing and undermining enemy morale; attacking combatants was generally much more dangerous to aircrews than their targets, a problem that was continually becoming more acute with the ongoing refinement of anti-aircraft weapons. Within the range of types serving attack roles, dive bombers were increasingly being seen as more effective than aircraft designed for strafing with machine guns or cannons. Nevertheless, during the 1920s, the US military, in particular, procured specialized "Attack" aircraft and formed dedicated units, that were trained primarily for that role. The US Army Engineering Division became involved in designing ground attack aircraft. The 1920 Boeing GA-1 was an armored twin-engine triplane for ground strafing with eight machine guns and about a ton of armor plate, and the 1922 Aeromarine PG-1 was a combined pursuit (fighter) and ground attack design with a 37mm gun. The United States Marine Corps Aviation applied close air support tactics in the Banana Wars. While they did not pioneer dive bombing tactics, Marine aviators were the first to include it in their doctrine during the United States occupation of Haiti and Nicaragua. The United States Army Air Corps was notable for its creation of a separate "A-" designation for attack types, distinct from and alongside "B-" for bomber types and "P-" for pursuit (later replaced by "F-" for fighter) aircraft. The first designated attack type to be operational with the USAAC was the Curtiss A-2 Falcon. 
Nevertheless, such aircraft, including the A-2's replacement, the Curtiss A-12 Shrike, were unarmored and highly vulnerable to AA fire. The British Royal Air Force focused primarily on strategic bombing, rather than ground attack. However, like most air arms of the period it did operate attack aircraft, known as Army Cooperation aircraft in RAF parlance, which included the Hawker Hector, Westland Lysander and others. Aviation played a role in the Brazilian Constitutionalist Revolution of 1932, although both sides had few aircraft. The federal government had approximately 58 aircraft divided between the Navy and the Army, as the Air Force at this time did not constitute an independent branch. In contrast, the rebels had only two Potez 25 planes and two Waco CSOs, plus a small number of private aircraft. During the 1930s, Nazi Germany had begun to field a class of Schlacht ("battle") aircraft, such as the Henschel Hs 123. Moreover, the experiences of the German Condor Legion during the Spanish Civil War, against an enemy with few fighter aircraft, changed ideas about ground attack. Though it was equipped with generally unsuitable designs such as the Henschel Hs 123 and cannon-armed versions of the Heinkel He 112, its armament and pilots proved that such aircraft were a very effective weapon, even without bombs. This led to some support within the Luftwaffe for the creation of an aircraft dedicated to this role, resulting in tenders for a new "attack aircraft". The tenders led to the introduction (in 1942) of a unique single-seat, twin-engine attack aircraft, the slow-moving but heavily armored and formidably armed Henschel Hs 129 Panzerknacker ("Safecracker"/"Tank Cracker"). In Japan, the Imperial Japanese Navy had developed the Aichi D3A dive bomber (based on the Heinkel He 70) and the Mitsubishi B5M light attack bomber. Both, like their US counterparts, were lightly armored types, and were critically reliant on surprise attacks and the absence of significant fighter or AA opposition. During the Winter War, the Soviet Air Forces used the Polikarpov R-5SSS and Polikarpov R-ZSh as attack aircraft. Perhaps the most notable attack type to emerge during the late 1930s was the Soviet Ilyushin Il-2 Sturmovik, which became the most-produced military aircraft type in history. As World War II approached, the concept of an attack aircraft was not well defined, and various air services used many different names for widely differing types, all performing similar roles (sometimes in tandem with non-attack duties such as bombing, air-to-air combat and reconnaissance). Army co-operation This was the British concept of a light aircraft mixing all the roles that required extensive communication with land forces: reconnaissance, liaison, artillery spotting, aerial supply, and, last but not least, occasional strikes on the battlefield (Hallion 2010, p. 152). The concept was similar to the front-line aircraft used in World War I, which were called the CL class in the German Empire. Eventually the RAF's experience showed types such as the Westland Lysander to be unacceptably vulnerable, and they were replaced by faster fighter types for photo-reconnaissance and by light aircraft for artillery spotting. Light bomber During the inter-war period, the British flew the Fairey Battle, a light bomber which originated in a 1932 specification. Designs in 1938 for a replacement were adapted as a target tug. The last British specification issued for a light bomber was B.20/40, described as a "Close Army Support Bomber" capable of dive bombing and photo-reconnaissance.
However, the specification was dropped before an aircraft went into production. Dive bomber In some air services, dive bombers did not equip ground-attack units, but were treated as a separate class. In Nazi Germany, the Luftwaffe distinguished between the Stuka (Sturzkampf-, "dive bombing") units, equipped with Junkers Ju 87 from Schlacht ("battle") units, using strafing/low-level bombing types such as the Henschel Hs 123). Fighter-bomber Although not a synonymous class with ground-attack aircraft, fighter-bombers were usually used for the role, and proved to excel at it, even when they were only lightly armored. The Royal Air Force and United States Army Air Forces relegated obsolescent fighters to this role, while cutting-edge fighters would serve as interceptors and establish air superiority. The United States Navy, in distinction to the USAAF, preferred the older term "Scout-Bomber", under a "SB-" designation, such as the Curtiss SB2C Helldiver. World War II The Junkers Ju 87s of the German Luftwaffe became virtually synonymous with close air support during the early months of World War II. The British Commonwealth's Desert Air Force, led by Arthur Tedder, became the first Allied tactical formation to emphasize the attack role, usually in the form of single-engine Hawker Hurricane and Curtiss P-40 fighter-bombers or specialized "tank-busters", such as the Hurricane Mk IID, armed with two 40 mm Vickers S guns (notably No. 6 Squadron RAF). At around the same time, a massive invasion by Axis forces had forced the Soviet air forces to quickly expand their army support capacity, such as the Ilyushin Il-2 Sturmovik. The women pilots known as the "Night Witches" utilised an obsolescent, wooden light trainer biplane type, the Polikarpov Po-2 and small anti-personnel bombs in "harassment bombing" attacks that proved difficult to counter. Wartime experience showed that poorly armored and/or lightly built, pre-war types were unacceptably vulnerable, especially to fighters. Nevertheless, skilled crews could be highly successful in those types, such as the leading Stuka ace, Hans-Ulrich Rudel, who claimed 500 tanks, a battleship, a cruiser, and two destroyers in 2,300 combat missions. The Bristol Beaufighter, based on an obsolescent RAF bomber, became a versatile twin-engine attack aircraft and served in almost every theatre of the war, in the maritime strike and ground attack roles as well as that of night fighter. Conversely, some mid-war attack types emerged as adaptations of fighters, including several versions of the German Focke-Wulf Fw 190, the British Hawker Typhoon and the US Republic P-47 Thunderbolt. The Typhoon, which was disappointing as a fighter, due to poor high altitude performance, was very fast at low altitudes and thus became the RAF's premier ground attack fighter. It was armed with four 20mm cannon, augmented first with bombs, then rockets. Likewise the P-47 was designed and intended for use as a high altitude bomber escort, but gradually found that role filled by the North American P-51 Mustang (because of its much longer range and greater maneuverability). The P-47 was also heavier and more robust than the P-51 and regarded therefore, as an "energy fighter": ideal for high-speed dive-and-climb tactics, including strafing attacks. Its armament of eight 0.50 caliber machine guns was effective against Axis infantry and light vehicles in both Europe and the Pacific. 
While machine guns and cannon were initially sufficient, the evolution of well-armored tanks required heavier weapons. To augment bombs, high-explosive rockets were introduced, although these unguided projectiles were still "barely adequate" because of their inaccuracy. For the British RP3, one hit per sortie was considered acceptable. However, even a near miss with rockets could cause damage or injuries to "soft targets," and patrols by Allied rocket-armed aircraft over Normandy disrupted or even completely paralyzed German road traffic. They also affected morale, because even the prospect of a rocket attack was unnerving. The ultimate development of the cannon-armed light attack aircraft was the small production run in 1944 of the Henschel Hs 129B-3, armed with a modified PAK 40 75 mm anti-tank gun. This weapon, the Bordkanone BK 7,5, was the most powerful forward-firing weapon fitted to a production military aircraft during World War II. The only other aircraft to be factory-equipped with similar guns were the 1,420 maritime strike variants of the North American B-25G/H Mitchell, which mounted either an M4 cannon or the light-weight T13E1 or M5 versions of the same gun. These weapons, however, were hand-loaded, had shorter barrels and/or a lower muzzle velocity than the BK 7,5 and, therefore, poorer armor penetration, accuracy and rate of fire. (Except for versions of the Piaggio P.108 armed with a 102 mm anti-ship cannon, the BK 7,5 was unsurpassed as an aircraft-fitted gun until 1971, when the four-engine Lockheed AC-130E Spectre, equipped with a 105 mm M102 howitzer, entered service with the US Air Force.) Post-World War II In the immediate post-war era, piston-engined ground-attack aircraft remained useful since all of the early jets lacked endurance due to the fuel consumption rates of their jet engines. The higher-powered piston-engine types that had arrived too late for World War II were still capable of holding their own against the jets, as they were able to both out-accelerate and out-maneuver them. The Royal Navy Hawker Sea Fury fighters and the U.S. Vought F4U Corsair and Douglas A-1 Skyraider were operated during the Korean War, while the latter continued to be used throughout the Vietnam War. Many post-World War II era air forces have been reluctant to adopt fixed-wing jet aircraft developed specifically for ground attack. Although close air support and interdiction remain crucial to the modern battlefield, attack aircraft are less glamorous than fighters, while air force pilots and military planners have a certain well-cultivated contempt for "mud-movers". More practically, the cost of operating a specialized ground-attack aircraft is harder to justify when compared with multirole combat aircraft. Jet attack aircraft designed and employed during the Cold War era included the carrier-based nuclear strike Douglas A-3 Skywarrior and North American A-5 Vigilante, while the Grumman A-6 Intruder, F-105 Thunderchief, F-111, F-117 Nighthawk, LTV A-7 Corsair II, Sukhoi Su-25, A-10 Thunderbolt II, Panavia Tornado, AMX, Dassault Étendard, Super Étendard and others were designed specifically for ground-attack, strike, close support and anti-armor work, with little or no air-to-air capability. Ground attack has increasingly become a task of converted trainers, like the BAE Systems Hawk or Aero L-39 Albatros, and many trainers are built with this task in mind, like the CASA C-101 or the Aermacchi MB-339.
Such counter-insurgency aircraft are popular with air forces which cannot afford to purchase more expensive multirole aircraft, or do not wish to risk the few such aircraft they have on light ground attack missions. A proliferation of low intensity conflicts in the post-World War II era has also expanded need for these types of aircraft to conduct counter-insurgency and light ground attack operations. A primary distinction of post-World War II aviation between the U.S. Army and the U.S. Air Force was that latter had generally been allocated all fixed-wing aircraft, while helicopters were under control of the former; this was governed by the 1948 Key West Agreement. The Army, wishing to have its own resources to support its troops in combat and faced with a lack of Air Force enthusiasm for the ground-attack role, developed the dedicated attack helicopter. Recent history On 17 January 1991, Task Force Normandy began its attack on two Iraqi anti-aircraft missile sites. TF Normandy, under the command of LTC Richard A. "Dick" Cody, consisted of nine AH-64 Apaches, one UH-60 Black Hawk and four Air Force MH-53J Pave Low helicopters. The purpose of this mission was to create a safe corridor through the Iraqi air defense system. The attack was a huge success and cleared the way for the beginning of the Allied bombing campaign of Operation Desert Storm. One concern involving the Apache arose when a unit of these helicopters was very slow to deploy during U.S. military involvement in Kosovo. According to the Army Times, the Army is shifting its doctrine to favor ground-attack aircraft over attack helicopters for deep strike attack missions because ground-attack helicopters have proved to be highly vulnerable to small-arms fire; the U.S. Marine Corps has noted similar problems. In the late 1960s the United States Air Force requested a dedicated close air support (CAS) plane that became the Fairchild Republic A-10 Thunderbolt II. The A-10 was originally conceived as an anti-armor weapon (the A-X program requirements specifically called for an aircraft mounting a large rotary cannon to destroy massed Warsaw Pact armored forces) with limited secondary capability in the interdiction and tactical bombing roles. Today it remains the only dedicated fixed-wing ground-attack aircraft in any U.S. military service. Overall U.S. experience in the Gulf War, Kosovo War, Afghanistan War, and Iraq War has resulted in renewed interest in such aircraft. The U.S. Air Force is currently researching a replacement for the A-10 and started the OA-X program to procure a light attack aircraft. The Soviets' similar Sukhoi Su-25 (Frogfoot) found success in the "flying artillery" role with many air forces. The UK has completely retired the BAE Harrier II in 2011, and the Panavia Tornado dedicated attack-reconnaissance aircraft in 2019. It obtained the F-35 in 2018 and it retains its fleet of Eurofighter Typhoon multirole fighters.
Technology
Military aviation
null
181158
https://en.wikipedia.org/wiki/Coma%20Berenices
Coma Berenices
Coma Berenices is an ancient asterism in the northern sky, which has been defined as one of the 88 modern constellations. It is in the direction of the fourth galactic quadrant, between Leo and Boötes, and it is visible in both hemispheres. Its name means "Berenice's Hair" in Latin and refers to Queen Berenice II of Egypt, who sacrificed her long hair as a votive offering. It was introduced to Western astronomy during the third century BC by Conon of Samos and was further corroborated as a constellation by Gerardus Mercator and Tycho Brahe. It is the only modern constellation named after a historic person. The constellation's major stars are Alpha, Beta, and Gamma Comae Berenices. They form a half square, along the diagonal of which run Berenice's imaginary tresses, formed by the Coma Star Cluster. The constellation's brightest star is Beta Comae Berenices, a 4.2-magnitude main sequence star similar to the Sun. Coma Berenices contains the North Galactic Pole and one of the richest-known galaxy clusters, the Coma Cluster, part of the Coma Supercluster. Galaxy Malin 1, in the constellation, is the first-known giant low-surface-brightness galaxy. Supernova SN 1940B was the first scientifically observed (underway) type II supernova. FK Comae Berenices is the prototype of an eponymous class of variable stars. The constellation is the radiant of one meteor shower, Coma Berenicids, which has one of the fastest meteor speeds, up to . History Coma Berenices has been recognized as an asterism since the Hellenistic period (or much earlier, according to some authors), and is the only modern constellation named for an historic figure. It was introduced to Western astronomy during the third century BC by Conon of Samos, the court astronomer of Egyptian ruler Ptolemy III Euergetes, to honour Ptolemy's consort, Berenice II. Berenice vowed to sacrifice her long hair as a votive offering if Ptolemy returned safely from battle during the Third Syrian War. Modern scholars are uncertain if Berenice made the sacrifice before or after Ptolemy's return; it was suggested that it happened after Ptolemy's return (around March–June or May 245 BC), when Conon presented the asterism jointly with scholar and poet Callimachus during a public evening ceremony. In Callimachus' poem, Aetia (composed around that time), Berenice dedicated her tresses "to all the gods". In Poem 66, the Latin translation by the Roman poet Catullus, and in Hyginus' De Astronomica, she dedicated her tresses to Aphrodite and placed them in the temple of Arsinoe II (identified after Berenice's death with Aphrodite) at Zephyrium. According to De astronomica, by the next morning the tresses had disappeared. Conon proposed that Aphrodite had placed the tresses in the sky as an acknowledgement of Berenice's sacrifice. Callimachus called the asterism plokamos Berenikēs or bostrukhon Berenikēs in Greek, translated into Latin as "Coma Berenices" by Catullus. Hipparchus and Geminus also recognized it as a distinct constellation. Eratosthenes called it "Berenice's Hair" and "Ariadne's Hair", considering it part of the constellation Leo. Similarly, Ptolemy did not include it among his 48 constellations in the Almagest; considering it part of Leo and calling it Plokamos. Coma Berenices became popular during the 16th century. In 1515, a set of gores by Johannes Schöner labelled the asterism Trica, "hair". In 1536 it appeared on a celestial globe by Caspar Vopel, who is credited with the asterism's designation as a constellation. 
That year, it also appeared on a celestial map by Petrus Apianus as "Crines Berenices". In 1551, Coma Berenices appeared on a celestial globe by Gerardus Mercator with five Latin and Greek names: Cincinnus, caesaries, πλόκαμος, Berenicis crinis and Trica. Mercator's reputation as a cartographer ensured the constellation's inclusion on Dutch sky globes beginning in 1589. Tycho Brahe, also credited with Coma's designation as a constellation, included it in his 1602 star catalogue. Brahe recorded fourteen stars in the constellation; Johannes Hevelius increased its number to twenty-one, and John Flamsteed to forty-three. Coma Berenices also appeared in Johann Bayer's 1603 Uranometria, and a few other 17th-century celestial maps followed suit. Coma Berenices and the now-obsolete Antinous are considered the first post-Ptolemaic constellations depicted on a celestial globe. With Antinous, Coma Berenices exemplified a trend in astronomy in which globe- and map-makers continued to rely on the ancients for data. This trend ended at the turn of the 16th century with observations of the southern sky and the work of Tycho Brahe. Before the 18th century Coma Berenices was known in English by several names, including "Berenice's Bush" and "Berenice's periwig". The earliest-known English name, "Berenices haire", dates to 1601. By 1702 the constellation was known as Coma Berenices, and appears as such in the 1731 Universal Etymological English Dictionary. Non-Western astronomy Coma Berenices was known to the Akkadians as Ḫegala. In Babylonian astronomy a star, known as ḪÉ.GÁL-a-a (translated as "which is before it") or MÚL.ḪÉ.GÁL-a-a, is tentatively considered part of Coma Berenices. It was also argued that Coma Berenices appears in Egyptian Ramesside star clocks as sb3w ꜥš3w, meaning "many stars". In Arabic astronomy Coma Berenices was known as Al-Dafira الضفيرة ("braid"), Al-Hulba الهلبة and Al-Thu'aba الذؤابة (both meaning "tuft"), the latter two are translations of the Ptolemaic Plokamos, forming the tuft of the constellation Leo and including most of the Flamsteed-designated stars (particularly 12, 13, 14, 16, 17, 18 and 21 Comae Berenices). Al-Sufi included it in Leo. Ulugh Beg, however, regarded Al-Dafira as consisting of two stars, 7 and 23 Comae Berenices. The North American Pawnee people depicted Coma Berenices as ten faint stars on a tanned elk-skin star map dated to at least the 17th century. In the South American Kalina mythology, the constellation was known as ombatapo (face). The constellation was also recognized by several Polynesian peoples. The people of Tonga had four names for Coma Berenices: Fatana-lua, Fata-olunga, Fata-lalo and Kapakau-o-Tafahi. The Boorong people called the constellation Tourt-chinboiong-gherra, and saw it as a small flock of birds drinking rainwater from a puddle in the crotch of a tree. The people of the Pukapuka atoll may have called it Te Yiku-o-te-kiole, although sometimes this name is associated with Ursa Major. Characteristics Coma Berenices is bordered by Boötes to the east, Canes Venatici to the north, Leo to the west and Virgo to the south. Covering 386.5 square degrees and 0.937% of the night sky, it ranks 42nd of the 88 constellations by area. The three-letter abbreviation for the constellation, as adopted by the International Astronomical Union in 1922, is "Com". The official constellation boundaries, as set by Belgian astronomer Eugène Delporte in 1930, are defined by a polygon of 12 segments (illustrated in infobox). 
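As a quick arithmetic check of the quoted area (an illustrative snippet added here, not part of the article), the whole celestial sphere covers 129600/π ≈ 41,253 square degrees, so 386.5 square degrees is indeed about 0.937% of the sky:

# Check that 386.5 square degrees is about 0.937% of the celestial sphere.
import math

whole_sky_sq_deg = 129600 / math.pi   # total solid angle of the sky, ~41,252.96 square degrees
coma_sq_deg = 386.5                   # area of Coma Berenices quoted above
print(round(100 * coma_sq_deg / whole_sky_sq_deg, 3))   # ~0.937 (percent of the night sky)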
In the equatorial coordinate system, the right ascension coordinates of these borders lie between and , and the declination coordinates are between +13.30° and +33.31°. Coma Berenices is wholly visible to observers north of latitude 56°S, and the constellation's midnight culmination occurs on 2 April. Features Although it is not large, Coma Berenices contains one galactic supercluster, two galactic clusters, one star cluster and eight Messier objects (including several globular clusters). These objects can be seen with minimal obscuration by dust because the constellation is not in the direction of the galactic plane. Because of that, there are few open clusters (except for the Coma Berenices Cluster, which dominates the northern part of the constellation), diffuse nebulae or planetary nebulae. Coma Berenices contains the North Galactic Pole at right ascension and declination (epoch J2000.0). Stars Brightest stars Coma Berenices is not particularly bright, as none of its stars are brighter than fourth magnitude, although there are 66 stars brighter than or equal to apparent magnitude 6.5. The constellation's brightest star is Beta Comae Berenices (43 Comae Berenices in Flamsteed designation, occasionally known as Al-Dafira), at magnitude 4.2 and with a high proper motion. In Coma Berenices' northeastern region, it is 29.95 ± 0.10 light-years from Earth. A solar analog, it is a yellow-hued F-type main-sequence star with a spectral class of F9.5V B. Beta Comae Berenices is around 36% brighter and 15% more massive than the Sun, with a radius around 10% larger. The second-brightest star in Coma Berenices is the 4.3-magnitude, bluish Alpha Comae Berenices (42 Comae Berenices), with the proper name Diadem, in the southeastern part of the constellation. Despite its Alpha Bayer designation, the star is dimmer than Beta Comae Berenices, being one of the cases where the designation does not correspond to the brightest star. It is a double star, with spectral classes of F5V and F6V. The star system is 58.1 ± 0.9 light-years from Earth. Gamma Comae Berenices (15 Comae Berenices) is an orange-hued giant star with a magnitude of 4.4 and a spectral class of K1III C. In the southwestern part of the constellation, it is 169 ± 2 light-years from Earth. Estimated to be around 1.79 times as massive as the Sun, it has expanded to around 10 times the Sun's radius. It is the brightest star in the Coma Star Cluster. With Alpha Comae Berenices and Beta Comae Berenices, Gamma Comae Berenices forms a 45-degree isosceles triangle from which Berenice's imaginary tresses hang. Star systems The star systems of Coma Berenices include binary, double and triple stars. 21 Comae Berenices (proper name Kissin) is a close binary with nearly equal components and an orbital period of 26 years. The system is 272 ± 3 light-years away. The Coma Cluster contains at least eight spectroscopic binaries, and the constellation has seven eclipsing binaries: CC, DD, EK, RW, RZ, SS and UX Comae Berenices. There are over thirty double stars in Coma Berenices, including 24 Comae Berenices with contrasting colors. Its primary is an orange-hued giant star with a magnitude of 5.0, 610 light-years from Earth, and its secondary is a blue-white-hued star with a magnitude of 6.6. Triple stars include 12 Comae Berenices, 17 Comae Berenices, KR Comae Berenices and Struve 1639. Variable stars Over 200 variable stars are known in Coma Berenices, although many are obscure. Alpha Comae Berenices is a possible Algol variable.
FK Comae Berenices, which varies from magnitude 8.14 to 8.33 over a period of 2.4 days, is the prototype for the FK Comae Berenices class of variable stars and the star in which the "flip-flop phenomenon" was discovered. FS Comae Berenices is a semi-regular variable, a red giant with a period of about two months whose magnitude varies between 6.1 and 5.3. R Comae Berenices is a Mira variable with a maximum magnitude of almost 7. There are 123 RR Lyrae variables in the constellation, with many in the M53 cluster. One of these stars, TU Comae Berenices, may have a binary system. The M100 galaxy contains about twenty Cepheid variables, which were observed by the Hubble Space Telescope. Coma Berenices also contains Alpha2 Canum Venaticorum variables, such as 13 Comae Berenices and AI Comae Berenices. In 2019 scientists at Aryabhatta Research Institute of Observational Sciences announced the discovery of 28 new variable stars in Coma Berenices' globular cluster NGC 4147. Supernovae A number of supernovae have been discovered in Coma Berenices. Four (SN 1940B, SN 1969H, SN 1987E and SN 1999gs) were in the NGC 4725 galaxy, and another four were discovered in the M99 galaxy (NGC 4254): SN 1967H, SN 1972Q, SN 1986I and SN 2014L. Five were discovered in the M100 galaxy (NGC 4321): SN 1901B, SN 1914A, SN 1959E, SN 1979C and SN 2006X. SN 1940B, discovered on 5 May 1940, was the first observed type II supernova. SN 2005ap, discovered on 3 March 2005, is the second-brightest-known supernova to date with a peak absolute magnitude of about −22.7. Due to its great distance from Earth (4.7 billion light-years), it was not visible to the naked eye and was discovered telescopically. SN 1979C, discovered in 1979, retained its original X-ray brightness for 25 years despite fading in visible light. Other stars Coma Berenices also contains the neutron star RBS 1223 and the pulsar PSR B1237+25. RBS 1223 is a member of the Magnificent Seven, a group of young neutron stars. In 1975, the first extra-solar source of extreme ultraviolet, the white dwarf HZ 43, was discovered in Coma Berenices. In 1995, there was a very rare outburst of the WZ Sagittae-type dwarf nova AL Comae Berenices. A June 2003 outburst from GO Comae Berenices, an SU Ursae Majoris-type dwarf nova, was photometrically observed. Exoplanets Coma Berenices has seven known exoplanets. One, HD 108874 b, has Earth-like insolation. WASP-56 is a sun-like star of spectral type G6 and apparent magnitude 11.48 with a planet 0.6 the mass of Jupiter that has a period of 4.6 days. Star clusters Coma Star Cluster The Coma Star Cluster represents Berenice's sacrificed tresses and as a naked eye object has been known since antiquity, appearing in Ptolemy's Almagest. It doesn't have a Messier or NGC designation, but is in the Melotte catalogue of open clusters (designated Melotte 111) and is also catalogued as Collinder 256. It is a large, diffuse open cluster of about 50 stars ranging between magnitudes five and ten, including several of Coma Berenices' stars which are visible to the naked eye. The cluster is spread over a huge region (more than five degrees across) near Gamma Comae Berenices. It has such a large apparent size because it is relatively close, only 280 light-years or 86 parsecs away. Globular clusters M53 (NGC 5024) is a globular cluster which was discovered independently by Johann Elert Bode in 1775 and Charles Messier in February 1777; William Herschel was the first to resolve it into stars. 
The magnitude-7.7 cluster is 56,000 light-years from Earth. Only 1° away is NGC 5053, a globular cluster with a sparser nucleus of stars. Its total luminosity is the equivalent of about 16,000 suns, one of the lowest luminosities of any globular cluster. It was discovered by William Herschel in 1784. NGC 4147 is a somewhat dimmer globular cluster, with a much-smaller apparent size and an apparent magnitude of 10.7. Galaxies Coma Supercluster The Coma Supercluster, itself part of the Coma Filament, contains the Coma and Leo Cluster of galaxies. The Coma Cluster (Abell 1656) is 230 to 300 million light-years away. It is one of the largest-known clusters, with at least 10,000 galaxies (mainly elliptical, with a few spiral galaxies). Due to its distance from Earth, most of the galaxies are visible only through large telescopes. Its brightest members are NGC 4874 and NGC 4889, both with a magnitude of 13; most others are magnitude 15 or dimmer. NGC 4889 is a giant elliptical galaxy with one of the largest-known black holes (21 billion solar masses), and NGC 4921 is the cluster's brightest spiral galaxy. After observing the Coma Cluster, astronomer Fritz Zwicky first postulated the existence of dark matter during the 1930s. The massive galaxy Dragonfly 44 discovered in 2015 was found to consist almost entirely of dark matter. Its mass is very similar to that of the Milky Way, but it emits only 1% of the light emitted by the Milky Way. NGC 4676, sometimes called the Mice Galaxies, is a pair of interacting galaxies 300 million light-years from Earth. Its progenitor galaxies were spiral, and astronomers estimate that they had their closest approach about 160 million years ago. That approach triggered large regions of star formation in both galaxies, with long "tails" of dust, stars and gas. The two progenitor galaxies are predicted to interact significantly at least one more time before they merge into a larger, probably-elliptical galaxy. Virgo Cluster Coma Berenices contains the northern portion of the Virgo Cluster (also known as the Coma–Virgo Cluster), about 60 million light-years away. The portion includes six Messier galaxies. M85 (NGC 4382), considered elliptical or lenticular, is one of the cluster's brighter members at magnitude nine. M85 is interacting with the spiral galaxy NGC 4394 and the elliptical galaxy MCG-3-32-38. However, it is relatively isolated from the rest of the cluster. M88 (NGC 4501) is a multi-arm spiral galaxy seen at about 30° from edge-on. It has a highly-regular shape with well-developed, symmetrical arms. Among the first galaxies recognized as spiral, it has a supermassive black hole in its center. M91 (NGC 4548), a barred spiral galaxy with a bright, diffuse nucleus, is the faintest object in Messier's catalog at magnitude 10.2. M98 (NGC 4192), a bright, elongated spiral galaxy seen nearly edge-on, appears elliptical because of its unusual angle. The magnitude-10 galaxy has no redshift. M99 (NGC 4254) is a spiral galaxy seen face-on. Like M98 it is of magnitude-10 and has an unusually long arm on its west side. Four supernovae have been observed in the galaxy. M100 (NGC 4321), a magnitude-nine spiral galaxy seen face-on, is one of the cluster's brightest. Photographs reveal a brilliant core, two prominent spiral arms, an array of secondary arms and several dust lanes. Other galaxies M64 (NGC 4826) is known as the Black Eye Galaxy because of the prominent dark dust lane in front of the galaxy's bright nucleus. 
Also known as the Sleeping Beauty and Evil Eye galaxy, it is about 17.3 million light-years away. Recent studies indicate that the interstellar gas in the galaxy's outer regions rotates in the opposite direction from that in the inner regions, leading astronomers to believe that at least one satellite galaxy collided with it less than a billion years ago. All other evidence of the smaller galaxy has been assimilated. At the interface between the clockwise- and counterclockwise-rotating regions are many new nebulae and young stars. NGC 4314 is a face-on barred spiral galaxy at a distance of 40 million light-years. It is unique for its region of intense star formation, creating a ring around its nucleus which was discovered by the Hubble Space Telescope. The galaxy's prodigious star formation began five million years ago, in a region with a diameter of 1,000 light-years. The core's structure is also unique because the galaxy has spiral arms which feed gas into the bar. NGC 4414 is an unbarred spiral flocculent galaxy about 62 million light-years away. It is one of the closest flocculent spiral galaxies. NGC 4565 is an edge-on spiral galaxy which appears superimposed on the Virgo Cluster. NGC 4565 has been nicknamed the Needle Galaxy because when seen in full, it appears as a narrow streak of light. Like many edge-on spiral galaxies, it has a prominent dust lane and a central bulge. NGC 4565 has at least two satellite galaxies, and one of them is interacting with it. NGC 4651, about the size of the Milky Way, has tidal stellar streams gravitationally stripped from a smaller, satellite galaxy. It is about 62 million light-years away. It is located on the outskirts of the cluster, and is also known as the Umbrella Galaxy. Unlike the other spiral galaxies in the cluster, NGC 4651 is rich in neutral hydrogen, which also extends beyond the optical disk. Its star formation is typical for a galaxy of its type. Spiral galaxy Malin 1 discovered in 1986 is the first-known giant low-surface-brightness galaxy. With UGC 1382, it is also one of the largest low-surface-brightness galaxies. In 2006 a dwarf galaxy, also named Coma Berenices, was discovered in the constellation from data obtained by the Sloan Digital Sky Survey. The galaxy is a faint satellite of the Milky Way. It is one of the faintest satellites of the Milky Way - its integrated luminosity is about times that of the Sun (absolute visible magnitude of about −4.1), which is lower than many globular clusters. A high mass to light ratio may mean that the satellite has large amounts of dark matter. Quasars HS 1216+5032 is a bright, gravitationally lensed pair of quasars. W Comae Berenices (or ON 231), a blazar in the constellation's northwest, was originally designated a variable star and later found to be a BL Lacertae object. As of 2009, it had the most intense gamma ray spectrum of the sixty known gamma-ray blazars. Gamma-ray bursts Some gamma-ray bursts occurred in Coma Berenices, particularly GRB 050509B on 9 May 2005 and GRB 080607 on 7 June 2008. GRB 050509B, which lasted only 0.03 second, became the first short burst with a detected afterglow. Meteor shower The Coma Berenicids meteor shower peaks around 18 January. Despite the shower's low intensity (averaging one or two meteors per hour) its meteors are some of the fastest, with speeds up to . In culture Since Callimachus' poem, Coma Berenices has been occasionally featured in culture. 
Alexander Pope alludes to the legend in the ending of The Rape of the Lock, in which the titular hair is placed among the stars. (The poem would go on to provide the names of some of the moons of Uranus.) In 1886, Spanish artist Luis Ricardo Falero created a mezzotint print personifying Coma Berenices alongside Virgo and Leo. In 1892, the Russian poet Afanasy Fet made the constellation the subject of his short poem, composed for the Countess Natalya Sollogub. The Swedish poet Gunnar Ekelöf wrote the lines "Your friend the comet combed his hair with the Leonids / Berenice let her hair hang down from the sky" in a 1933 poem. American writer and folksinger Richard Fariña mentions Coma Berenices in his 1966 novel Been Down So Long It Looks Like Up To Me, sardonically writing about content typical of upper-level astronomy coursework at Cornell: "It's the advanced courses give you trouble. Relativity principles, spiral nebula in Coma Berenices, that kind of hassle". The Bolivian poet Pedro Shimose makes Coma Berenices the home address of his "Señorita NGC 4565" in his poem "Carta a una estrella que vive en otra constelación" ("Letter to a star who lives in another constellation"), included in his 1967 collection "Sardonia". The Irish poet W. B. Yeats, in his poem "Her Dream", refers to "Berenice's burning hair" being "nailed upon the night". Francisco Guerrero, a 20th-century Spanish composer, wrote an orchestral work on the constellation in 1996. In 1999, Irish artist Alice Maher made a series of four oversize drawings, entitled Coma Berenices, of entwining black hair coils.
Physical sciences
Other
Astronomy
181160
https://en.wikipedia.org/wiki/Nimitz-class%20aircraft%20carrier
Nimitz-class aircraft carrier
The Nimitz class is a class of ten nuclear-powered aircraft carriers in service with the United States Navy. The lead ship of the class is named after World War II United States Pacific Fleet commander Fleet Admiral Chester W. Nimitz, who was the last living U.S. Navy officer to hold the rank. With an overall length of and a full-load displacement of over , the Nimitz-class ships were the largest warships built and in service until entered the fleet in 2017. Instead of the gas turbines or diesel–electric systems used for propulsion on many modern warships, the carriers use two A4W pressurized water reactors. The reactors produce steam to drive steam turbines which drive four propeller shafts and can produce a maximum speed of over and a maximum power of around . As a result of nuclear power, the ships are capable of operating for over 20 years without refueling and are predicted to have a service life of over 50 years. They are categorized as nuclear-powered aircraft carriers and are numbered with consecutive hull numbers from CVN-68 to CVN-77. All ten carriers were constructed by Newport News Shipbuilding Company in Virginia. , the lead ship of the class, was commissioned on 3 May 1975, and , the tenth and last of the class, was commissioned on 10 January 2009. Since the 1970s, Nimitz-class carriers have participated in many conflicts and operations across the world, including Operation Eagle Claw in Iran, the Gulf War, and more recently in Iraq and Afghanistan. The angled flight decks of the carriers use a CATOBAR arrangement to operate aircraft, with steam catapults and arrestor wires for launch and recovery. As well as speeding up flight deck operations, this allows for a much wider variety of aircraft than with the STOVL arrangement used on smaller carriers. An embarked carrier air wing comprising around 64 aircraft is normally deployed on board. The air wings' strike fighters are primarily F/A-18E and F/A-18F Super Hornets. In addition to their aircraft, the vessels carry short-range defensive weaponry for anti-aircraft warfare and missile defense. The unit cost was about US$8.5 billion in FY 2012 dollars, equal to US$ billion in . Description The Nimitz-class aircraft carriers have a length of overall and at the waterline, with a beam of overall and at the waterline; the individual ships have slight variations in their dimensions. They were initially designed with a full-load displacement of and a draft of , but the ships would be delivered several thousand tons heavier, particularly for later members of the class. As the vessels were overhauled and installed more equipment, loaded displacement would climb to exceed . For example, currently displaces at full load. The ships' nominal complement comprises: 3,000–3,200; 1,500 (air wing); and 500 (other). Design The Nimitz-class aircraft carriers were ordered to supplement the aircraft carriers of the and es, maintaining the strength and capability of the U.S. Navy after the older carriers were decommissioned. The ships were designed to be improvements on previous U.S. aircraft carriers, particularly the Enterprise and supercarriers, although the arrangement of the vessels is relatively similar to that of the Kitty Hawk class. Among other design improvements, the two reactors on Nimitz-class carriers take up less space than the eight reactors used on Enterprise. Along with a more generally improved design, Nimitz-class carriers can carry 90% more aviation fuel and 50% more ordnance when compared to the Forrestal class. The U.S. 
Navy has stated that the carriers could withstand three times the damage sustained by the inflicted by Japanese air attacks during World War II. The hangars on the ships are divided into three fire bays by thick steel doors that are designed to restrict the spread of fire. This addition has been present on U.S. aircraft carriers since World War II, after the fires caused by kamikaze attacks. The first ships were designed around the time of the Vietnam War, and certain aspects of the design were influenced by operations there. To a certain extent, the carrier operations in Vietnam demonstrated the need for increased capabilities of aircraft carriers over their survivability; they were used to send sorties into the war and were, therefore, less subject to attack. As a result of this experience, Nimitz-class carriers were designed with larger stores of aviation fuel and larger magazines compared to previous carriers, although this was partly a result of increased space available by the new design of the ships' propulsion systems. A major purpose of the carriers was initially to support the U.S. military during the Cold War. They were designed with capabilities for that role, including using nuclear power instead of oil for greater endurance and the ability to adjust their weapons systems on the basis of new intelligence and technological developments. They were initially categorized only as attack carriers, but ships have been constructed with anti-submarine capabilities since . As a result, the ships and their aircraft can participate in a wide range of operations, including sea and air blockades; mine laying; and missile strikes on land, air, and sea. Because of a design flaw, ships of this class have inherent lists to starboard when under combat loads that exceed the capability of their list control systems. The problem appears to be especially prevalent on some of the more modern vessels. This problem has been previously rectified by using damage control voids for ballast, but a solution using solid ballast that does not affect the ship's survivability has been proposed. Construction All ten Nimitz-class carriers were constructed between 1968 and 2006 at Newport News Shipbuilding in Newport News, Virginia. The first three units of the class were erected in Dry Dock 11, the other seven ships were constructed in the largest dry dock in the western hemisphere, Dry Dock 12, now long after a recent expansion. Beginning with , the aircraft carriers were manufactured with modular construction. This means that whole sections could be welded together with plumbing and electrical equipment already fitted, improving efficiency. The modules were lifted into the dry dock using gantry cranes and welded. In the case of the bow sections, these can weigh over . This method was originally developed by Ingalls Shipbuilding and increases the rate of work because much of the fitting out does not have to be carried out within the confines of the already-finished hull. The total cost of construction for each ship was around $4.5 billion. Propulsion All ships of the class are powered by two A4W nuclear reactors, housed in separate compartments. The reactors produce heat through nuclear fission, which heats water to produce steam. This is then passed through four turbines, which are shared by the two reactors. A gearbox transmits power to four propeller shafts, producing a maximum speed of over and maximum power of . The turbines power the four bronze propellers, each with a diameter of and a weight of . 
Behind these are the two rudders, which are high and long, and each weighs . The Nimitz-class ships constructed since also have bulbous bows to improve speed and fuel efficiency by reducing wave-making resistance. As a result of nuclear power, the ships are capable of operating continuously for over 20 years without refueling and are predicted to have a service life of over 50 years. Armament and protection In addition to the aircraft carried on board, the ships carry defensive equipment for use against missiles and hostile aircraft. These consist of either two or three RIM-7 Sea Sparrow or RIM-162 Evolved SeaSparrow Missile Mk 29 missile launchers designed for defense against aircraft and anti-ship missiles, as well as either three or four 20 mm Phalanx CIWS. USS Ronald Reagan has none of these, having been built with the Mk 49 Guided Missile Launching Systems for RIM-116 Rolling Airframe Missiles, two of which have also been installed on and . These will be installed on the other ships as they return for Refueling Complex Overhaul (RCOH). Since USS Theodore Roosevelt, the carriers have been constructed with Kevlar armor over vital spaces, and earlier ships have been retrofitted with it: Nimitz in 1983–1984, Dwight D. Eisenhower from 1985 to 1987 and Carl Vinson in 1989. The ships' other countermeasures are four Sippican SRBOC (super rapid bloom off-board chaff) six-barrel Mk 36 decoy launchers, which deploy infrared flares and chaff to disrupt the sensors of incoming missiles; an SSTDS torpedo defense system; and an AN/SLQ-25 Nixie torpedo countermeasures system. The carriers also use AN/SLQ-32(V) jamming systems to detect and disrupt hostile radar signals in addition to the electronic warfare capabilities of some of the aircraft on board. The presence of nuclear weapons on board U.S. aircraft carriers since the end of the Cold War has neither been confirmed nor denied by the U.S. government. As a result, the presence of a U.S. aircraft carrier in a foreign port has occasionally provoked protest from local people, for example, when Nimitz visited Chennai, India, in 2007. At that time, the Strike Group commander Rear Admiral John Terence Blake stated, "The U.S. policy [...] is that we do not routinely deploy nuclear weapons on board Nimitz." In May 2013, George H.W. Bush conducted the first carrier-borne end-to-end at-sea test of the Surface Ship Torpedo Defense System (SSTDS). The SSTDS combined the passive detection of the Torpedo Warning System (TWS) that finds, classifies, and tracks torpedoes with the hard-kill capability of a Countermeasure Anti-Torpedo (CAT), an encapsulated miniature torpedo designed to locate, home in on, and destroy hostile torpedoes. This was to increase protection against wake-homing torpedoes like the Type 53 that do not respond to acoustic decoys. The pieces of the SSTDS were engineered to locate and destroy incoming torpedoes in a matter of seconds; each system included one TWS and 8 CATs. Initial operational capability (IOC) was planned for 2019, and all aircraft carriers were to be outfitted by 2035. The Navy suspended work on the project in September 2018 due to poor reliability of the components; hardware, already installed on five carriers, is to be removed by 2023. Carrier air wing In order for a carrier to deploy, it must embark one of ten Carrier Air Wings (CVW). The carriers can accommodate a maximum of 130 F/A-18 Hornets or 85–90 aircraft of different types, but current numbers are typically 64 aircraft. 
Although the air wings are integrated with the operation of the carriers they are deployed to, they are regarded as separate entities. As well as the aircrew, the air wings are also made up of support personnel involved in roles including maintenance, aircraft and ordnance handling, and emergency procedures. Each person on the flight deck wears color-coded clothing to make their role easily identifiable. A typical carrier air wing can include 24–36 F/A-18E or F Super Hornets as strike fighters; two squadrons of 10–12 F/A-18C Hornets, with one of these often provided by the U.S. Marine Corps (VMFA), also as strike fighters; 4–6 EA-18G Growlers for electronic warfare; 4–6 E-2C or D Hawkeyes for airborne early warning (AEW), C-2 Greyhounds used for logistics (to be replaced by MV-22 Ospreys); and a Helicopter Anti-Submarine Squadron of 6–8 SH-60F and HH-60H Seahawks. Aircraft previously operated from Nimitz-class carriers include F-4 Phantoms, RA-5C Vigilantes, RF-8G Crusaders, F-14 Tomcats, S-3 Vikings, EA-3B Skywarriors, EA-6B Prowlers, A-7 Corsair II, and A-6E Intruder aircraft. Flight deck and aircraft facilities The flight deck is angled at nine degrees, which allows for aircraft to be launched and recovered simultaneously. This angle of the flight deck was reduced slightly compared to previous carriers, as the current design improves the airflow around the carrier. Four steam catapults are used to launch fixed-wing aircraft, and four arrestor wires are used for recovery. The two newest carriers, Ronald Reagan and George H.W. Bush, have only three arrestor wires each, as the fourth was used infrequently on earlier ships and was therefore deemed unnecessary. This CATOBAR arrangement allows for faster launching and recovery as well as a much wider range of aircraft that can be used on board compared with smaller aircraft carriers, most of which use a simpler STOVL arrangement without catapults or arrestor wires. The ship's aircraft operations are controlled by the air boss from Primary Flight Control or Pri-Fly. Four large elevators transport aircraft between the flight deck and the hangars below. These hangars are divided into three bays by thick steel doors that are designed to restrict the spread of fire. Strike groups When an aircraft carrier deploys, it takes a Carrier Strike Group (CSG), made up of several other warships and supply vessels that allow the operation to be carried out. The armament of the Nimitz class is made up only of short-range defensive weapons, used as a last line of defense against enemy missiles and aircraft. As with all surface ships, an aircraft carrier is particularly vulnerable to attack from below, specifically from submarines. An aircraft carrier is a very expensive, hard to replace, and strategically valuable asset, and therefore it logically has immense value as a target. As a result of its target value and vulnerability, aircraft carriers are always escorted by at least one submarine for protection. The other vessels in the Strike Group provide additional capabilities, such as long-range Tomahawk missiles or the Aegis Combat System, and protect the carrier from attack. A typical Strike Group may include, in addition to an aircraft carrier: up to six surface combatants, including guided-missile cruisers and guided-missile destroyers, used primarily for anti-aircraft warfare and anti-submarine warfare, and frigates/guided-missile frigates, prior to their retirement from USN service. 
When the Navy commissions a new class of frigates (FFG(X)), they will again accompany CSGs. Also making up part of the group is one or two attack submarines for seeking out and destroying hostile surface ships and submarines and an ammunition, oiler, and supply ship from Military Sealift Command to provide logistical support. The numbers and types of vessels that make up each strike group can vary from group to group, depending on deployments, mission, and availability. Design differences within the class While the designs of the last seven ships, beginning with Theodore Roosevelt, differ slightly from those of the earlier ships, the U.S. Navy considers all ten carriers a single class. When the older carriers come in for Refueling and Complex Overhaul (RCOH), their nuclear power plants are refueled, and they are upgraded to the standards of the later carriers. Other modifications may be performed to update the ships' equipment. The ships were initially classified only as attack carriers but have been constructed with anti-submarine capabilities since Carl Vinson. These improvements include more advanced radar systems and facilities enabling the ships to operate aircraft in a more effective anti-submarine warfare role, including fitting common undersea picture (CUP) technology, which uses sonar to allow for better assessment of the threat from submarines. Theodore Roosevelt and later carriers have slight structural differences from the earlier Nimitz carriers, such as improved protection for ordnance stored in their magazines. Other improvements include upgraded flight deck ballistic protection, first installed on George Washington, and the high-strength low-alloy steel (HSLA-100) used for constructing ships starting with John C. Stennis. More recently, older ships have had their flight decks upgraded with a new non-slip material fitted on new-build ships to improve safety for crew members and aircraft. The last carrier of the class, George H.W. Bush, was designed as a "transition ship" from the Nimitz class to the replacement . George H.W. Bush incorporates new technologies, including improved propeller and bulbous bow designs, a reduced radar cross-section, and electronic and environmental upgrades. The ship's cost was $6.2 billion. The earlier Nimitz-class ships each cost around $4.5 billion. To lower costs, some new technologies and design features were also incorporated into USS Ronald Reagan, the previous carrier, including a redesigned island. Ships in class The United States Navy lists the following ten ships in the Nimitz class: Service history 1975–1989 One of the first major operations in which the ships were involved was Operation Eagle Claw launched by Nimitz in 1980 after she had deployed to the Indian Ocean in response to the taking of hostages in the U.S. embassy in Tehran. Although initially part of the U.S. Atlantic Fleet, Dwight D. Eisenhower relieved Nimitz in this operation after her service in the Mediterranean Sea. Nimitz conducted a Freedom of Navigation exercise alongside the aircraft carrier in August 1981 in the Gulf of Sidra, near Libya. During this exercise, two of the ship's F-14 Tomcats shot down two Libyan aircraft in what became known as the Gulf of Sidra incident. In 1987, Carl Vinson participated in the first U.S. carrier deployment in the Bering Sea, and Nimitz provided security during the 1988 Olympic Games in Seoul. 
1990–2000 The two most significant deployments the Nimitz class was involved in during the 1990s were the Gulf War and its aftermath and Operation Southern Watch in southern Iraq. All active vessels were engaged in both of these to some extent, with Operation Southern Watch continuing until 2003. Most carriers in operation in Operation Desert Shield and Operation Desert Storm played supporting roles, with only Theodore Roosevelt playing an active part in combat operations. Throughout the 1990s and more recently, Nimitz-class carriers have been deployed as part of humanitarian missions. While deployed in the Gulf War, Abraham Lincoln was diverted to the Pacific Ocean to participate alongside 22 other ships in Operation Fiery Vigil, evacuating civilians following the eruption of Mount Pinatubo on Luzon Island in the Philippines. In October 1993, Abraham Lincoln deployed to Somalia to assist UN humanitarian operations there, spending four weeks flying patrols around Mogadishu while supporting U.S. troops during Operation Restore Hope. The same ship also participated in Operation Vigilant Sentinel in the Persian Gulf in 1995. Theodore Roosevelt flew patrols in support of the Kurds over northern Iraq as part of Operation Provide Comfort in 1991. In 1996, George Washington played a peacekeeping role in Operation Decisive Endeavor in Bosnia and Herzegovina. In 1999, Theodore Roosevelt was called to the Ionian Sea to support Operation Allied Force alongside other NATO militaries. 2001–present Harry S. Trumans maiden deployment was in November 2000. The carrier's air wing flew 869 combat sorties in support of Operation Southern Watch, including a strike on Iraqi air defense sites on 16 February 2001, in response to Iraqi surface-to-air missile fire against United Nations coalition forces. After the September 11 attacks, Carl Vinson and Theodore Roosevelt were among the first warships to participate in Operation Enduring Freedom in Afghanistan. Carl Vinson sailed towards the Persian Gulf intending to support Operation Southern Watch in July 2001. This changed in response to the attacks, and the ship changed course to travel towards the North Arabian Sea, where she launched the first airstrikes in support of the operation on 7 October 2001. Following the attacks, John C. Stennis and George Washington participated in Operation Noble Eagle, carrying out homeland security operations off the West Coast of the United States. All active ships have been involved in Iraq and Afghanistan since that time. This included the invasion in 2003, as well as providing subsequent support for Operation Iraqi Freedom since then. The carriers have also provided aid after natural disasters. In 2005, Abraham Lincoln supported Operation Unified Assistance in Indonesia after the December 2004 tsunami, and Harry S. Truman provided aid after Hurricane Katrina later in 2005. The Ronald Reagan Carrier Strike Group performed humanitarian assistance and disaster relief operations in the Philippines in June 2008 after Typhoon Fengshen, which killed hundreds from the central island regions and the main island of Luzon. In January 2010, Carl Vinson operated off Haiti, providing aid and drinking water to earthquake survivors as part of the U.S.-led Operation Unified Response, alongside other major warships and hospital ship . Refueling Complex Overhaul In order to refuel their nuclear power plants, the carriers each undergo a Refueling and Overhaul (RCOH) once in their service lives. 
This is also the most substantial overhaul the ships undergo while in service and involves bringing the vessels' equipment up to the standards of the newest ships. The ship is placed in a dry dock, and essential maintenance is carried out, including painting the hull below the waterline and replacing electrical and mechanical components such as valves. Because of the large time periods between the ships' constructions, the armament and designs of the newer ships are more modern than those of the older ships. In RCOH, the older ships are refitted to the standards of the newer ships, which can include upgrades to the flight deck, aircraft catapults, combat systems, and radar systems; precise details can vary significantly between the ships. The improvements normally take around four years to complete. The RCOH for USS Theodore Roosevelt took four years to complete (2009–2013) and cost about $2.6 billion. Planned Incremental Availability is a similar procedure, although it is less substantial and does not involve refueling the nuclear power plants. Symbolic and diplomatic roles Because of their status as the largest warships in the U.S. Navy, the deployment of an aircraft carrier can fulfill a symbolic role, not just as a deterrent to an enemy but often as a diplomatic tool in strengthening relations with allies and potential allies. The latter of these functions can occur either as a single visit to a country, in which senior naval officers are allowed to observe the operation of the carrier and interact with its senior officers, or as part of an international task force. This can be in combat operations, such as the NATO bombing of Yugoslavia in 1999, or training deployments, such as Exercise RIMPAC. In addition, carriers have participated in international Maritime security operations, combating piracy in the Persian Gulf and off the coast of Somalia. Accidents and incidents On 26 May 1981, an EA-6B Prowler crashed on the flight deck of Nimitz, killing 14 crewmen and injuring 45 others. Forensic testing of the personnel involved showed that several tested positive for marijuana. While this was not found to have directly caused the crash, the investigation's findings prompted the introduction of mandatory drug testing of all service personnel. Pilots have been able to eject safely in several cases of ditched aircraft. However, fatal aircraft crashes have occurred; in 1994, Lieutenant Kara Hultgreen, the first female F-14 Tomcat pilot, was killed while attempting to land on board Abraham Lincoln during a training exercise. Fires have also caused damage to the ships; in May 2008, while rotating through to her new homeport at Yokosuka Naval Base in Yokosuka, Japan, George Washington suffered a fire that cost $70 million in repairs, injured 37 sailors and led to the ship undergoing three months of repairs at San Diego; this led to its having to miss the 2008 RIMPAC exercises and delayed the final withdrawal from service of . The fire was caused by unauthorized smoking near improperly stored flammable refrigerant compressor oil. Future and planned replacement Nimitz-class carriers were initially designed to have a 50-year service life. At the end of their service life, ships will be decommissioned. This process will first take place on Nimitz and is estimated to cost from $750 to $900 million. This compares with an estimated $53 million for a conventionally powered carrier. 
Most of the difference in cost is attributed to the deactivation of the nuclear power plants and the safe removal of radioactive material and other contaminated equipment. A new class of carriers, the Gerald R. Ford class, is being constructed to replace previous vessels after decommissioning. Ten of these are expected, and the first has entered service as of 22 July 2017 to replace . Most of the rest of these new carriers are to replace the oldest Nimitz ships as they reach the end of their service lives. The new carriers will have a similar design to George H.W. Bush (using an almost identical hull shape) and technological and structural improvements. The Navy reported in early 2022 that it was conducting a study to determine if the Nimitz-class carrier lives could be extended to as long as 55 years.
Technology
Naval warfare
null
181169
https://en.wikipedia.org/wiki/Sand%20dollar
Sand dollar
Sand dollars (also known as sea cookies or snapper biscuits in New Zealand and Brazil, or pansy shells in South Africa) are species of flat, burrowing sea urchins belonging to the order Clypeasteroida. Some species within the order, not quite as flat, are known as sea biscuits. Sand dollars can also be called "sand cakes" or "cake urchins". Names The term "sand dollar" derives from the appearance of the tests (skeletons) of dead individuals after being washed ashore. The test lacks its velvet-like skin of spines and has often been bleached white by sunlight. To beachcombers of the past, this suggested a large, silver coin, such as the old Spanish dollar, which had a diameter of 38–40 mm. Other names for the sand dollar include sand cakes, pansy shells, snapper biscuits, cake urchins, and sea cookies. In South Africa, they are known as pansy shells from their suggestion of a five-petaled garden flower. The Caribbean sand dollar or inflated sea biscuit, Clypeaster rosaceus, is thicker in height than most. In Spanish-speaking areas of the Americas, the sand dollar is most often known as (sea cookie); the translated term is often encountered in English. In the folklore of Georgia in the United States, sand dollars were believed to represent coins lost by mermaids. Description Sand dollars diverged from the other irregular echinoids, namely the cassiduloids, during the early Jurassic, with the first true sand dollar genus, Togocyamus, arising during the Paleocene. Soon after Togocyamus, more modern-looking groups emerged during the Eocene. Sand dollars are small in size, averaging from 80 to 100 mm (3 to 4 inches). As with all members of the order Clypeasteroida, they possess a rigid skeleton called a test. The test consists of calcium carbonate plates arranged in a fivefold symmetric pattern. The test of certain species of sand dollar have slits called lunules that can help the animal stay embedded in the sand to stop it from being swept away by an ocean wave. In living individuals, the test is covered by a skin of velvet-textured spines which are covered with very small hairs (cilia). Coordinated movements of the spines enable sand dollars to move across the seabed. The velvety spines of live sand dollars appear in a variety of colors—green, blue, violet, or purple—depending on the species. Individuals which are very recently dead or dying (moribund) are sometimes found on beaches with much of the external morphology still intact. Dead individuals are commonly found with their empty test devoid of all surface material and bleached white by sunlight. The bodies of adult sand dollars, like those of other echinoids, display radial symmetry. The petal-like pattern in sand dollars consists of five paired rows of pores. The pores are perforations in the endoskeleton through which podia for gas exchange project from the body. The mouth of the sand dollar is located on the bottom of its body at the center of the petal-like pattern. Unlike other urchins, the bodies of sand dollars also display secondary front-to-back bilateral symmetry with no morphological distinguishing features between males and females. The anus of sand dollars is located at the back rather than at the top as in most urchins, with many more bilateral features appearing in some species. These result from the adaptation of sand dollars, in the course of their evolution, from creatures that originally lived their lives on top of the seabed (epibenthos) to creatures that burrow beneath it (endobenthos). 
Suborders and families According to the World Register of Marine Species:
sub-order Clypeasterina
  family Clypeasteridae L. Agassiz, 1835
  family Fossulasteridae Philip & Foster, 1971 †
  family Scutellinoididae Irwin, 1995 †
  family Conoclypidae von Zittel, 1879 †
  family Faujasiidae Lambert, 1905 †
  family Oligopygidae Duncan, 1889 †
  family Plesiolampadidae Lambert, 1905 †
sub-order Scutellina
  infra-order Laganiformes
    family Echinocyamidae Lambert & Thiéry, 1914
    family Fibulariidae Gray, 1855
    family Laganidae Desor, 1858
  infra-order Scutelliformes
    family Echinarachniidae Lambert in Lambert & Thiéry, 1914
    family Eoscutellidae Durham, 1955 †
    family Protoscutellidae Durham, 1955 †
    family Rotulidae Gray, 1855
    super-family Scutellidea Gray, 1825
      family Abertellidae Durham, 1955 †
      family Astriclypeidae Stefanini, 1912
      family Dendrasteridae Lambert, 1900 -- Pacific eccentric sand dollar
      family Mellitidae Stefanini, 1912 -- keyhole sand dollars
      family Monophorasteridae Lahille, 1896 †
      family Scutasteridae Durham, 1955 †
      family Scutellidae Gray, 1825
      family Taiwanasteridae Wang, 1984
      family Scutellinidae Pomel, 1888a †
Behavior and habitat Sand dollars can be found in temperate and tropical zones along all continents. Sand dollars live in waters below the mean low tide line, on or just beneath the surface of sandy and muddy areas. The common sand dollar, Echinarachnius parma, can be found in the Northern Hemisphere from the intertidal zone to the depths of the ocean, while the keyhole sand dollars (three species of the genus Mellita) can be found on a wide range of coasts in and around the Caribbean Sea. The spines on the somewhat flattened topside and underside of the animal allow it to burrow or creep through the sediment when looking for shelter or food. Fine, hair-like cilia cover these tiny spines. Sand dollars usually eat algae and organic matter found along the ocean floor, though some species will tip on their side to catch organic matter floating in ocean currents. Sand dollars frequently gather on the ocean floor, in part because of their preference for soft-bottom areas, which are convenient for their reproduction. The sexes are separate and, as with most echinoids, gametes are released into the water column and go through external fertilization. The nektonic larvae metamorphose through several stages before the skeleton or test begins to form, at which point they become benthic. In 2008, biologists discovered that sand dollar larvae will clone themselves for a few different reasons. When a predator is near, the larvae of certain species will split themselves in half, asexually cloning themselves in response to the danger. The cloning process can take up to 24 hours and creates larvae that are two-thirds their original length, which can help conceal them from the predator. The trigger is dissolved mucus from a predatory fish: larvae exposed to this mucus respond to the threat by cloning themselves. This process doubles their population and halves their size, which allows them to better escape detection by the predatory fish but may make them more vulnerable to attacks from smaller predators such as crustaceans. Sand dollars will also clone themselves during normal asexual reproduction. Larvae will undergo this process when food is plentiful or temperature conditions are optimal. Cloning may also occur to make use of the tissues that are normally lost during metamorphosis.
The flattened test of the sand dollar allows it to burrow into the sand and remain hidden from potential predators. Predators of the sand dollar include fish such as cod, flounder, sheepshead, and haddock, which prey on sand dollars despite their tough exterior. Sand dollars have spines on their bodies that help them move around the ocean floor. When a sand dollar dies, it loses its spines and becomes smooth as the underlying test (exoskeleton) is exposed.
Biology and health sciences
Echinoderms
Animals
181173
https://en.wikipedia.org/wiki/Airbus%20A380
Airbus A380
The Airbus A380 is a very large wide-body airliner, developed and produced by Airbus. It is the world's largest passenger airliner and the only full-length double-deck jet airliner. Airbus studies started in 1988, and the project was announced in 1990 to challenge the dominance of the Boeing 747 in the long-haul market. The then-designated A3XX project was presented in 1994; Airbus launched the €–billion ($10.7–billion) A380 programme on 19 December 2000. The first prototype was unveiled in Toulouse on 18 January 2005, with its first flight on 27 April 2005. It then obtained its type certificate from the European Aviation Safety Agency (EASA) and the US Federal Aviation Administration (FAA) on 12 December 2006. Due to difficulties with the electrical wiring, the initial production was delayed by two years and the development costs almost doubled. It was first delivered to Singapore Airlines on 15 October 2007 and entered service on 25 October. Production peaked at 30 per year in both 2012 and 2014, with production of the aircraft ending in 2021. The A380's estimated $25 billion development cost was not recouped by the time Airbus ended production. The full-length double-deck aircraft has a typical seating for 525 passengers, with a maximum certified capacity for 853 passengers. The quadjet is powered by Engine Alliance GP7200 or Rolls-Royce Trent 900 turbofans providing a range of . , the global A380 fleet had completed more than 800,000 flights over 7.3 million block hours with no fatalities and no hull losses. , there were 237 aircraft in service with 16 operators worldwide. Development Background In mid-1988, Airbus engineers, led by Jean Roeder, began work in secret on the development of an ultra-high-capacity airliner (UHCA), both to complete its own range of products and to break the dominance that Boeing had enjoyed in this market segment since the early 1970s with its Boeing 747. McDonnell Douglas unsuccessfully offered its double-deck MD-12 concept for sale. Lockheed was exploring the possibility for a Very Large Subsonic Transport. Roeder was given approval for further evaluations of the UHCA after a formal presentation to the President and CEO in June 1990. The megaproject was announced at the 1990 Farnborough Airshow, with the stated goal of 15% lower operating costs than the Boeing 747-400. Airbus organised four teams of designers, one from each of its partners (Aérospatiale, British Aerospace, Deutsche Aerospace AG, CASA) to propose new technologies for its future aircraft designs. The designs were presented in 1992 and the most competitive designs were used. In January 1993, Boeing and several companies in the Airbus consortium started a joint feasibility study of a Very Large Commercial Transport (VLCT), aiming to form a partnership to share the limited market. In June 1994, Airbus announced its plan to develop its own very large airliner, designated as A3XX. Airbus considered several designs, including an unusual side-by-side combination of two fuselages from its A340, the largest Airbus jet at the time. The A3XX was pitted against the VLCT study and Boeing's own New Large Aircraft successor to the 747. In July 1995, the joint study with Boeing was abandoned, as Boeing's interest had declined due to analysis that such a product was unlikely to cover the projected $15 billion development cost. Despite the fact that only two airlines had expressed public interest in purchasing such a plane, Airbus was already pursuing its own large-plane project. 
Analysts suggested that Boeing would instead pursue stretching its 747 design, and that air travel was already moving away from the hub-and-spoke system that consolidated traffic into large planes, and toward more non-stop routes that could be served by smaller planes. From 1997 to 2000, as the 1997 Asian financial crisis darkened the market outlook, Airbus refined its design, targeting a 15–20% reduction in operating costs over the existing Boeing 747-400. The A3XX design converged on a double-decker layout that provided more passenger volume than a traditional single-deck design. Airbus did so in line with traditional hub-and-spoke theory, as opposed to the point-to-point theory with the Boeing 777, after conducting an extensive market analysis with over 200 focus groups. Although early marketing of the huge cross-section touted the possibility of duty-free shops, restaurant-like dining, gyms, casinos and beauty parlours on board, the realities of airline economics have kept such dreams grounded. On 19 December 2000, the supervisory board of newly restructured Airbus voted to launch a € billion ($10.7 billion) project to build the A3XX, re-designated as A380, with 50 firm orders from six launch customers. The A380 designation was a break from previous Airbus families, which had progressed sequentially from A300 to A340. It was chosen because the number 8 resembles the double-deck cross section, and is a lucky number in many East Asian countries where the aircraft was being marketed. The aircraft configuration was finalised in early 2001, and manufacturing of the first A380 wing-box component started on 23 January 2002. The development cost of the A380 had grown to €11–14 billion when the first aircraft was completed. Total development cost In 2000, the projected development cost was €9.5 billion. In 2004, Airbus estimated that €1.5 billion ($2 billion) would need to be added, totalling the developmental costs to € billion ($ billion). In 2006, Airbus stopped publishing its reported cost after reaching costs of €10.2 billion and then it provisioned another €4.9 billion, after the difficulties in electric cabling and two years delay for an estimated total of €18 billion. In 2014, the aircraft was estimated to have cost $25bn (£16bn, €bn) to develop. In 2015, Airbus said development costs were €15 billion (£11.4 billion, $ billion), though analysts believe the figure is likely to be at least €5bn ($ Bn) more for a € Bn ($ Bn) total. In 2016, The A380 development costs were estimated at $25 billion for 15 years, $25–30 billion, or €25 billion ($28 billion). To start the programme in 2000, the governments of France, Germany and the UK loaned Airbus 3.5 billion euros and refundable advances reached 5.9 billion euros ($7.3 billion). In February 2018, after an Emirates order secured production of the unprofitable programme for ten years, Airbus revised its deal with the three loan-giving governments to save $1.4 billion (17%) and restructured terms to lower the production rate from eight per year in 2019 to six per year. On 15 May 2018, in its EU appeal ruling, a WTO ruling concluded that the A380 received improper subsidies through $9 billion of launch aids, but Airbus acknowledged that the threat posed to Boeing by the A380 is so marginal with 330 orders since its 2000 launch that any U.S. sanctions should be minimal, as previous rulings showed Boeing's exposure could be as little as $377 million. In 2018, unit cost was . 
In February 2019, the German government disclosed that it was conducting talks with Airbus regarding €600 million in outstanding loans. Following the decision to wind down the A380 programme, Europe argues that the subsidies in effect no longer exist and that no sanctions are warranted. Production Major structural sections of the A380 are built in France, Germany, Spain, and the United Kingdom. Due to the sections' large size, traditional transportation methods proved unfeasible, so they are brought to the Jean-Luc Lagardère Plant assembly hall in Toulouse, France, by specialised road and water transportation, though some parts are moved by the A300-600ST Beluga transport aircraft. A380 components are provided by suppliers from around the world; the four largest contributors, by value, are Rolls-Royce, Safran, United Technologies and General Electric. For the surface movement of large A380 structural components, a complex route known as the Itinéraire à Grand Gabarit was developed. This involved the construction of a fleet of roll-on/roll-off (RORO) ships and barges, the construction of port facilities and the development of new and modified roads to accommodate oversized road convoys. The front and rear fuselage sections are shipped on one of three RORO ships from Hamburg in northern Germany to Saint-Nazaire in France. The ship travels via Mostyn, Wales, where the wings are loaded. The wings are manufactured at Broughton in North Wales, then transported by barge to Mostyn docks for ship transport. In Saint-Nazaire, the ship exchanges the fuselage sections from Hamburg for larger, assembled sections, some of which include the nose. This ship unloads in Bordeaux. It then goes to pick up the belly and tail sections from Construcciones Aeronáuticas SA in Cádiz, Spain, and delivers them to Bordeaux. From there, the A380 parts are transported by barge to Langon, and by oversize road convoys to the assembly hall in Toulouse. To avoid damage from direct handling, parts are secured in custom jigs carried on self-powered wheeled vehicles. After assembly, the aircraft are flown to the Airbus Hamburg-Finkenwerder plant to be furnished and painted. Airbus sized the production facilities and supply chain for a production rate of four A380s per month. Testing In 2005, five A380s were built for testing and demonstration purposes. The first A380, registered F-WWOW, was unveiled in Toulouse 18 January 2005. It first flew on 27 April 2005. This plane, equipped with Rolls-Royce Trent 900 engines, flew from Toulouse–Blagnac Airport with a crew of six headed by chief test pilot Jacques Rosay. Rosay said flying the A380 had been "like handling a bicycle". On 1 December 2005, the A380 achieved its maximum design speed of Mach 0.96, (its design cruise speed is Mach 0.85) in a shallow dive. In 2006, the A380 flew its first high-altitude test at Addis Ababa Bole International Airport. It conducted its second high-altitude test at the same airport in 2009. On 10 January 2006, it flew to José María Córdova International Airport in Colombia, accomplishing the transatlantic testing, and then it went to El Dorado International Airport to test the engine operation in high-altitude airports. It arrived in North America on 6 February 2006, landing in Iqaluit, Nunavut, in Canada for cold-weather testing. On 14 February 2006, during the destructive wing strength certification test on MSN5000, the test wing of the A380 failed at 145% of the limit load, short of the required 150% level. 
Airbus announced modifications adding 30 kg (66 lb) to the wing to provide the required strength. On 26 March 2006, the A380 underwent evacuation certification in Hamburg. With 8 of the 16 exits randomly blocked, 853 mixed passengers and 20 crew exited the darkened aircraft in 78 seconds, less than the 90 seconds required for certification. Three days later, the A380 received European Aviation Safety Agency (EASA) and United States Federal Aviation Administration (FAA) approval to carry up to 853 passengers. The first A380 fitted with GP7200 engines, serial number MSN009, flew on 25 August 2006. On 4 September 2006, the first full passenger-carrying flight test took place. The aircraft flew from Toulouse with 474 Airbus employees on board, in a test of passenger facilities and comfort. In November 2006, a further series of route-proving flights demonstrated the aircraft's performance for 150 flight hours under typical airline operating conditions. , the A380 test aircraft continue to perform test procedures. Airbus obtained type certificates for the A380-841 and A380-842 model from the EASA and FAA on 12 December 2006 in a joint ceremony at the company's French headquarters, receiving the ICAO code A388. The A380-861 model was added to the type certificate on 14 December 2007. Production and delivery delays Initial production of the A380 was troubled by delays attributed to the amount of wiring in each aircraft. Airbus cited as underlying causes the complexity of the cabin wiring (98,000 wires and 40,000 connectors), its concurrent design and production, the high degree of customisation for each airline, and failures of configuration management and change control. The German and Spanish Airbus facilities continued to use CATIA version 4, while British and French sites migrated to version 5. This caused overall configuration management problems, at least in part because wire harnesses manufactured using aluminium rather than copper conductors necessitated special design rules including non-standard dimensions and bend radii; these were not easily transferred between versions of the software. File conversion tools were initially developed by Airbus to help solve this problem; however, the digital mock-up was still unable to read the full technical design data. Organisational culture was also cited as a cause of the production delays. The communication and reporting culture at the time frowned upon delivery of bad news, meaning Airbus was unable to take early actions to mitigate technical and production issues. Airbus announced the first delay in June 2005 and notified airlines that deliveries would be delayed by six months. This reduced the total number of planned deliveries by the end of 2009 from about 120 to 90–100. On 13 June 2006, Airbus announced a second delay, with the delivery schedule slipping an additional six to seven months. Although the first delivery was still planned before the end of 2006, deliveries in 2007 would drop to only 9 aircraft, and deliveries by the end of 2009 would be cut to 70–80 aircraft. The announcement caused a 26% drop in the share price of Airbus' parent, EADS, and led to the departure of EADS CEO Noël Forgeard, Airbus CEO Gustav Humbert, and A380 programme manager Charles Champion.
On 3 October 2006, upon completion of a review of the A380 programme, Airbus CEO Christian Streiff announced a third delay, pushing the first delivery to October 2007, to be followed by 13 deliveries in 2008, 25 in 2009, and the full production rate of 45 aircraft per year in 2010. The delay also increased the earnings shortfall projected by Airbus through 2010 to €4.8 billion. As Airbus prioritised the work on the A380-800 over the A380F, freighter orders were cancelled by FedEx and United Parcel Service, or converted to A380-800 by Emirates and ILFC. Airbus suspended work on the freighter version, but said it remained on offer, albeit without a service entry date. For the passenger version Airbus negotiated a revised delivery schedule and compensation with the 13 customers, all of which retained their orders with some placing subsequent orders, including Emirates, Singapore Airlines, Qantas, Air France, Qatar Airways, and Korean Air. Beginning in 2007, the A380 was considered as a potential replacement for the existing Boeing VC-25 serving as Air Force One presidential transport, but in January 2009 EADS declared that they were not going to bid for the contract, as assembling only three planes in the US would not make financial sense. On 13 May 2008, Airbus announced reduced deliveries for the years 2008 (12) and 2009 (21). After further manufacturing setbacks, Airbus announced its plan to deliver 14 A380s in 2009, down from the previously revised target of 18. A total of 10 A380s were delivered in 2009. In 2010, Airbus delivered 18 of the expected 20 A380s, due to Rolls-Royce engine availability problems. Airbus planned to deliver "between 20 and 25" A380s in 2011 before ramping up to three a month in 2012. In fact, Airbus delivered 26 units, thus outdoing its predicted output for the first time. , production was 3 aircraft per month. Among the production problems are challenging interiors, interiors being installed sequentially rather than concurrently as in smaller planes, and union/government objections to streamlining. Entry into service Nicknamed Superjumbo, the first A380, MSN003, was delivered to Singapore Airlines on 15 October 2007 and entered service on 25 October 2007 with flight number SQ380 between Singapore and Sydney. Passengers bought seats in a charity online auction paying between $560 and $100,380. Two months later, Singapore Airlines CEO Chew Choong Seng stated the A380 was performing better than either the airline or Airbus had anticipated, burning 20% less fuel per seat-mile than the airline's 747-400 fleet. Emirates' Tim Clark claimed that the A380 has better fuel economy at Mach 0.86 than at 0.83, and that its technical dispatch reliability is at 97%, the same as Singapore Airlines. Airbus is committed to reach the industry standard of 98.5%. Emirates was the second airline to receive the A380 and commenced service between Dubai and New York in August 2008. Qantas followed, with flights between Melbourne and Los Angeles in October 2008. By the end of 2008, 890,000 passengers had flown on 2,200 flights. In February 2008, the A380 became the first airliner to fly using synthetic liquid fuel. The fuel is processed from gas to liquid form (GTL fuel). The flight was 3 hours long, taking off from Filton, UK, and landing in Toulouse, France, and was a significant step in evaluating the suitability of sustainable aviation fuels. 
Improvements and upgrades In 2010, Airbus announced a new A380 build standard, incorporating a strengthened airframe structure and a 1.5° increase in wing twist. Airbus also offered, as an option, an improved maximum take-off weight, thus providing a better payload/range performance. Maximum take-off weight is increased by , to and the range is extended by ; this is achieved by reducing flight loads, partly from optimising the fly-by-wire control laws. British Airways and Emirates were the first two customers to have received this new option in 2013. Emirates asked for an update with new engines for the A380 to be competitive with the Boeing 777X around 2020, and Airbus was studying 11-abreast seating. In 2012, Airbus announced another increase in the A380's maximum take-off weight to , a 6 t increase from the initial A380 variant and 2 t higher than the increased-weight proposal of 2010. This increased the range by some , taking its capability to around at current payloads. The higher-weight version was offered for introduction to service early in 2013. Post-delivery problems During repairs following the Qantas Flight 32 engine failure incident, cracks were discovered in wing fittings. As a result, the European Aviation Safety Agency issued an Airworthiness Directive in January 2012 which affected 20 A380 aircraft that had accumulated over 1,300 flights. A380s with under 1,800 flight hours were to be inspected within 6 weeks or 84 flights; aircraft with over 1,800 flight hours were to be examined within four days or 14 flights. Fittings found to be cracked were replaced. On 8 February 2012, the checks were extended to cover all 68 A380 aircraft in operation. The problem is considered to be minor and is not expected to affect operations. EADS acknowledged that the cost of repairs would be over $130 million, to be borne by Airbus. The company said the problem was traced to stress and material used for the fittings. Additionally, major airlines are seeking compensation from Airbus for revenue lost as a result of the cracks and subsequent grounding of fleets. Airbus has switched to a different type of aluminium alloy so aircraft delivered from 2014 onwards should not have this problem. Around 2014, Airbus changed about 10% of all A380 doors, as some leaked during flight. One occurrence resulted in dropped oxygen masks and an emergency landing. The switch was estimated to cost over €100 million. Airbus stated that safety was sufficient, as the air pressure pushed the door into the frame. Further continuation of programme At the July 2016 Farnborough Airshow, Airbus announced that in a "prudent, proactive step", starting in 2018, it expected to deliver 12 A380 aircraft per year, down from 27 deliveries in 2015. The firm also warned production might slip back into red ink (be unprofitable) on each aircraft produced at that time, though it anticipated production would remain in the black (profitable) for 2016 and 2017. "The company will continue to improve the efficiency of its industrial system to achieve breakeven at 20 aircraft in 2017 and targets additional cost reduction initiatives to lower breakeven further." Airbus expected that healthy demand for its other aircraft would allow it to avoid job losses from the cuts. 
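The breakeven targets quoted above can be read through a simple fixed-cost lens: the lower the fixed programme cost Airbus carries each year, the fewer deliveries it needs before the A380 line stops losing money. The sketch below is purely illustrative; the annual fixed cost and per-aircraft margin are hypothetical placeholder values chosen only to reproduce the 27-per-year and 20-per-year figures mentioned above, and are not Airbus data.

```python
# Illustrative only: hypothetical fixed-cost and margin figures, not Airbus data.
# The relationship being sketched: breakeven deliveries per year equal the
# annual fixed programme cost divided by the cash margin on each delivery.

def breakeven_deliveries(annual_fixed_cost: float, margin_per_aircraft: float) -> float:
    """Deliveries per year at which the programme covers its annual fixed cost."""
    return annual_fixed_cost / margin_per_aircraft

MARGIN = 50e6  # hypothetical cash margin per aircraft delivered (USD)

for fixed_cost in (1.35e9, 1.0e9, 0.6e9):  # hypothetical annual fixed costs (USD)
    rate = breakeven_deliveries(fixed_cost, MARGIN)
    print(f"fixed cost ${fixed_cost / 1e9:.2f}B/year -> breakeven at {rate:.0f} aircraft/year")

# With these placeholder numbers, trimming the annual fixed cost from $1.35B to
# $1.0B moves breakeven from 27 to 20 deliveries per year; cutting it further
# lowers breakeven again, which is the effect the cost-reduction initiatives
# quoted above were aiming for.
```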
As Airbus expected to build 15 airliners in 2017 and 12 in 2018, Airbus Commercial Aircraft president Fabrice Brégier said that, without orders in 2017, production would be reduced to below one per month while remaining profitable per unit and allowing the programme to continue for 20 to 30 years. In its 2017 half-year report, Airbus adjusted 2019 deliveries to eight aircraft. In November 2017, its chief executive Tom Enders was confident Airbus would still produce A380s in 2027 with more sales to come, and further develop it to keep it competitive beyond 2030. Airbus was profitable at a rate of 15 per year and was trying to drive breakeven down further, but would take losses at eight per year. An order from Emirates for 36 A380s would have ensured production beyond 2020, but the airline wanted guarantees that production would be maintained for 10 years, until 2028: reducing output to six a year would help to bridge that period and would support second-hand values while other buyers were approached, but the programme would still be unprofitable. If it had failed to win the Emirates order, Airbus claimed that it was ready to phase out its production gradually as it fulfilled remaining orders until the early 2020s. In January 2018, Emirates confirmed the order for 36 A380s, but the deal was thrown back into question in October 2018 over a disagreement regarding engine fuel burn. To extend the programme, Airbus offered China a production role in early 2018. While state-owned Chinese airlines could order A380s, it would not help their low yield, as it lowers frequency; they do not need more volume, as widebody aircraft are already used on domestic routes, and using the A380 on its intended long-haul missions would free only a few airport slots. After achieving efficiencies to sustain production at a lower level, in 2017, Airbus delivered 15 A380s and was "very close" to production breakeven, expecting to make additional savings as production was being further reduced: it planned to deliver 12 in 2018, eight in 2019 and six per year from 2020 with "digestible" losses. , Enders was confident the A380 would gain additional orders from existing or new operators, and saw opportunities in Asia and particularly in China where it is "under-represented". In 2019, Lufthansa retired six of its 14 A380s due to their unprofitability. Later that year, Qatar Airways announced a switch from the A380 to the Boeing 777X starting from 2024. End of production In February 2019, Airbus announced it would end A380 production by 2021, after its main customer, Emirates, agreed to drop an order for 39 of the aircraft, replacing it with 40 A330-900s and 30 A350-900s. At the time of the announcement, Airbus had 17 more A380s on its order book to complete before closing the production line: 14 for Emirates and three for All Nippon Airways, taking the total number of expected deliveries of the aircraft type to 251. Airbus would have needed more than $90 million profit from the sale of each aircraft to cover the estimated $25 billion development cost of the programme. However, the $445 million price tag of each aircraft was not sufficient even to cover the production cost. With orders decreasing, the decision was made to cease production. Enders stated on 14 February 2019, "If you have a product that nobody wants anymore, or you can sell only below production cost, you have to stop it."
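The "more than $90 million profit from the sale of each aircraft" figure can be sanity-checked against the quoted $25 billion development cost and the 251 aircraft ultimately expected to be delivered. The snippet below is only a back-of-the-envelope check: it ignores financing costs and the time value of money, which Airbus's own accounting would not.

```python
# Back-of-the-envelope check of the per-aircraft profit needed to recoup the
# A380's development cost, using only the figures quoted above. This ignores
# financing costs and discounting, so it is an order-of-magnitude sketch.

development_cost_usd = 25e9   # estimated programme development cost
total_deliveries = 251        # expected total A380 deliveries

profit_needed_per_aircraft = development_cost_usd / total_deliveries
print(f"required profit per aircraft ≈ ${profit_needed_per_aircraft / 1e6:.0f} million")
# ≈ $100 million, consistent with the "more than $90 million" cited above.
```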
The A380's failure to achieve commercial viability for Airbus has been attributed in part to its extremely large capacity being optimised for a hub-and-spoke system, which Airbus projected would be thriving when the programme was conceived. However, airlines underwent a fundamental transition to a point-to-point system, which gets customers to their destination in one flight instead of two or three. The massive scale of the A380 achieved a very low cost per seat-kilometre, but this advantage within the hub-and-spoke paradigm could not overcome the efficiency of the fewer flights required in a point-to-point system (a rough numerical sketch of this trade-off follows at the end of this section). US-based carriers in particular had been using a multi-hub strategy, which justified only a handful of VLAs (very large aircraft with more than 400 seats) such as the A380, and having too few VLAs meant that they could not achieve the economies of scale needed to spread the enormous fixed cost of VLA support infrastructure. Consequently, orders for VLAs slowed in the mid-2010s, as widebody twinjets came to offer similar range and greater fuel efficiency, giving airlines more flexibility at a lower upfront cost. On 25 September 2020, Airbus completed assembly of the final A380 fuselage. Nine aircraft remained to be delivered (eight for Emirates, one for All Nippon Airways) and production operations continued to finish those aircraft. On 17 March 2021, the final Airbus A380 (manufacturing serial number 272) made its maiden flight from Toulouse to Hamburg for cabin outfitting, before being delivered to Emirates on 16 December 2021. Design Overview The A380 was initially offered in two models: the A380-800 and the A380F. The A380-800's original configuration carried 555 passengers in a three-class configuration or 853 passengers (538 on the main deck and 315 on the upper deck) in a single-class economy configuration. Then in May 2007, Airbus began marketing a configuration with 30 fewer passengers (525 total in three classes), traded for more range, to better reflect trends in premium-class accommodation. The design range for the A380-800 model is ; it is capable of flying from Hong Kong to New York or from Sydney to Istanbul non-stop. The A380 is designed for 19,000 cycles. The second model, the A380F freighter, would have carried of cargo over a range of . Freighter development was put on hold as Airbus prioritised the passenger version, and all orders for freighters were cancelled. Other proposed variants included an A380-900 stretch, seating about 656 passengers (or up to 960 passengers in an all-economy configuration), and an extended-range version with the same passenger capacity as the A380-800. Engines The A380 is offered with the Rolls-Royce Trent 900 (A380-841/-842) or the Engine Alliance GP7000 (A380-861) turbofan engines. The Trent 900 combines the fan and scaled compressor of the 777-200X/300X Trent 8104 technology demonstrator, itself derived from the Boeing 777's Trent 800, with the core of the Airbus A340-500/600's Trent 500. The GP7200's core technology is derived from GE's GE90, and its other sections are based on PW4000 expertise. At the A380's launch in 2000, engine makers assured Airbus that it was getting the best available technology and that the engines would remain state-of-the-art for the next decade; three years later, however, Boeing launched the 787 Dreamliner with game-changing engine technology and 10% lower fuel burn than the previous generation, to the dismay of John Leahy. 
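The hub-and-spoke trade-off described above can be illustrated with a toy per-passenger cost comparison. Every number below (unit costs, load factor, distances) is an assumed, illustrative value rather than airline or Airbus data, and real network economics also involve connection costs, frequencies and yields that this sketch ignores.

```python
# Toy comparison of the hub-and-spoke vs point-to-point trade-off described above.
# All figures are illustrative assumptions, not real airline or Airbus data.

def cost_per_passenger(cask_usd_per_seat_km, distance_km, load_factor):
    """Operating cost attributable to one carried passenger over one routing."""
    return cask_usd_per_seat_km * distance_km / load_factor

vla_cask  = 0.040   # very large aircraft: lower cost per available seat-km (assumed)
twin_cask = 0.044   # mid-size twinjet: higher cost per available seat-km (assumed)
load_factor = 0.82  # assumed identical for both

hub_routing_km = 4_800 + 5_800   # spoke-to-hub plus hub-to-destination legs (assumed)
nonstop_km     = 9_300           # direct point-to-point distance (assumed)

via_hub = cost_per_passenger(vla_cask, hub_routing_km, load_factor)
nonstop = cost_per_passenger(twin_cask, nonstop_km, load_factor)
print(f"via hub on a VLA : ${via_hub:,.0f} per passenger")
print(f"nonstop on a twin: ${nonstop:,.0f} per passenger")
# With these assumptions the twin's shorter single-leg routing more than offsets
# its higher seat-km cost, which is the effect the paragraph above describes.
```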
Due to its modern engines and aerodynamic improvements, Lufthansa's A380s produce half the noise of the Boeing 747-200 while carrying 160 more passengers. In 2012, the A380 received an award from the Noise Abatement Society. London Heathrow is a key destination for the A380. The aircraft is below the QC/2 departure and QC/0.5 arrival noise limits under the Quota Count system set by the airport. Field measurements suggest the approach quota allocation for the A380 may be excessively generous compared to the older Boeing 747, but still quieter. Rolls-Royce is supporting the CAA in understanding the relatively high A380/Trent 900 monitored noise levels. Heathrow's landing charges having a noise component, the A380 is cheaper to land there than a Boeing 777-200 and -300 and it saves $4,300 to $5,200 per landing, or $15.3M to $18.8M of present value over 15 years. Tokyo Narita has a similar noise charge. The A380 has thrust reversers on the inboard engines only. The outboard engines lack them, reducing the amount of debris stirred up during landing. The combination of wheel braking and large spoilers and flaps reduces the aircraft's reliance on thrust reversal. The reversers are electrically actuated to save weight, and for greater reliability than pneumatic or hydraulic equivalents. Having reversers on only two engines also saves a great deal of maintenance expense for operators as well as avoiding unnecessary weight to the outboard engines. Wings The A380's wings are built for a maximum takeoff weight (MTOW) over 600 tonnes to accommodate larger variants—the A380F freighter would require added internal strengthening. The optimal wingspan for such an MTOW is about but airport restrictions of force the A380 to compensate with a longer chord for an aspect ratio of 7.8. This suboptimal aspect ratio reduces fuel efficiency by about 10% and increases operating costs several percent, considering fuel costs constitute about 50% of the cost of long-haul aeroplane operation. The common wing design approach sacrifices fuel efficiency on the A380-800 passenger model in particular because its lower MTOW allows for a higher aspect ratio with a shorter chord or thinner wing. Still, Airbus estimated that the A380's size and advanced technology would provide lower operating costs per passenger than the 747-400. The wings incorporate wingtip fences that extend above and below the wing surface, similar to those on the A310 and A320. These increase fuel efficiency and range by reducing induced drag. The wingtip fences also reduce wake turbulence, which endangers following aircraft. The wings of the A380 were designed in Filton and manufactured in Broughton in the United Kingdom. The wings were then transported to the harbour of Mostyn, where they were transported by barge to Toulouse, France, for integration and final assembly with the rest of the aircraft and its components. Singapore Airlines describe the A380's landing speed of as "impressively slow". Materials While most of the fuselage is made of aluminium alloys, composite materials comprise more than 20% of the A380's airframe. Carbon-fibre reinforced plastic, glass-fibre reinforced plastic and quartz-fibre reinforced plastic are used extensively in wings, fuselage sections (such as the undercarriage and rear end of fuselage), tail surfaces, and doors. The A380 is the first commercial airliner to have a central wing box made of carbon–fibre reinforced plastic. It is also the first to have a smoothly contoured wing cross–section. 
The wings of other commercial airliners are partitioned span-wise into sections; the A380's flowing, continuous cross-section reduces aerodynamic drag. Thermoplastics are used in the leading edges of the slats. The hybrid fibre metal laminate material GLARE (glass laminate aluminium reinforced epoxy) is used in the upper fuselage and on the stabilisers' leading edges. This aluminium-glass-fibre laminate is lighter and has better corrosion and impact resistance than conventional aluminium alloys used in aviation. Unlike earlier composite materials, GLARE can be repaired using conventional aluminium repair techniques. Newer weldable aluminium alloys are used in the A380's airframe. This enabled the widespread use of laser beam welding manufacturing techniques, eliminating rows of rivets and resulting in a lighter, stronger structure. High-strength aluminium (type 7449) reinforced with carbon fibre was used in the wing brackets of the first 120 A380s to reduce weight, but cracks were discovered, and newer sets of the more critical brackets are made of standard aluminium 7010, increasing weight by 90 kg (198 lb). Repair costs for earlier aircraft were expected to be around €500 million (US$629 million). It takes of paint to cover the exterior of an A380. The paint is five layers thick and weighs about 650 kg (1,433 lb) when dry. Avionics The A380 employs an integrated modular avionics (IMA) architecture, first used in advanced military aircraft such as the Lockheed Martin F-22 Raptor, Lockheed Martin F-35 Lightning II, and Dassault Rafale. The main IMA systems on the A380 were developed by the Thales Group. Designed and developed by Airbus, Thales and Diehl Aerospace, the IMA suite was first used on the A380. The suite is a technological innovation, with networked computing modules supporting different applications. The data networks use Avionics Full-Duplex Switched Ethernet (AFDX), an implementation of ARINC 664. These networks are switched, full-duplex and star-topology, based on 100BASE-TX Fast Ethernet. This reduces the amount of wiring required and minimises latency. Airbus used a cockpit layout, procedures and handling characteristics similar to those of other Airbus aircraft, reducing crew training costs. The A380 has an improved glass cockpit, using fly-by-wire flight controls linked to side-sticks. The cockpit has eight liquid crystal displays, all physically identical and interchangeable, comprising two primary flight displays, two navigation displays, one engine parameter display, one system display and two multi-function displays. The MFDs were introduced on the A380 to provide an easy-to-use interface to the flight management system, replacing three multifunction control and display units. They include QWERTY keyboards and trackballs, interfacing with a graphical "point-and-click" display system. The Network Systems Server (NSS) is the heart of the A380's paperless cockpit; it eliminates bulky manuals and traditional charts. The NSS has enough inbuilt robustness to eliminate onboard backup paper documents. The A380's network and server system stores data and offers electronic documentation, providing a required equipment list, navigation charts, performance calculations, and an aircraft logbook. This is accessed through the MFDs and controlled via the keyboard interface. 
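In AFDX, traffic is organised into "virtual links", each bounded by a bandwidth allocation gap (BAG, the minimum interval between frames) and a maximum frame size, which is how the switched network described above gives each function a guaranteed, bounded share of the 100 Mbit/s links. The sketch below is a toy illustration of that budgeting idea; the virtual-link names, BAG values and frame sizes are invented for illustration and are not the A380's actual network configuration.

```python
# Toy illustration of AFDX (ARINC 664) virtual-link bandwidth budgeting.
# The links below are invented examples, not the A380's real configuration.
from dataclasses import dataclass

@dataclass
class VirtualLink:
    name: str
    bag_ms: float         # bandwidth allocation gap: minimum interval between frames
    max_frame_bytes: int  # largest frame this link is allowed to send

    def max_bandwidth_bps(self) -> float:
        # Worst case: one maximum-size frame every BAG interval.
        return self.max_frame_bytes * 8 / (self.bag_ms / 1000.0)

links = [
    VirtualLink("flight-control-status", bag_ms=2,  max_frame_bytes=200),
    VirtualLink("engine-parameters",     bag_ms=8,  max_frame_bytes=500),
    VirtualLink("cabin-systems",         bag_ms=64, max_frame_bytes=1500),
]

for vl in links:
    print(f"{vl.name:22s} worst case {vl.max_bandwidth_bps() / 1e6:6.3f} Mbit/s")
total = sum(vl.max_bandwidth_bps() for vl in links)
print(f"total guaranteed allocation: {total / 1e6:.3f} Mbit/s of a 100 Mbit/s link")
```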
Systems Power-by-wire flight control actuators have been used for the first time in civil aviation to back up the primary hydraulic actuators; during certain manoeuvres they also augment the primary actuators. They have self-contained hydraulic and electrical power supplies. Electro-hydrostatic actuators (EHAs) are used for the ailerons and elevators, electric and hydraulic motors drive the slats, and electrical backup hydrostatic actuators (EBHAs) power the rudder and some spoilers. The A380's 350 bar (35 MPa or 5,000 psi) hydraulic system is a significant departure from the typical 210 bar (21 MPa or 3,000 psi) hydraulics used on most commercial aircraft since the 1940s. First used in military aircraft, high-pressure hydraulics reduce the weight and size of pipelines, actuators and related components; a rough sizing illustration follows at the end of this section. The 350 bar pressure is generated by eight de-clutchable hydraulic pumps. The hydraulic lines are typically made from titanium; the system features both fuel- and air-cooled heat exchangers. Self-contained electrically powered hydraulic power packs serve as backups for the primary systems, instead of a secondary hydraulic system, saving weight and reducing maintenance. The A380 uses four 150 kVA variable-frequency electrical generators, eliminating constant-speed drives and improving reliability. The A380 uses aluminium power cables instead of copper for weight reduction. The electrical power system is fully computerised, and many contactors and breakers have been replaced by solid-state devices for better performance and increased reliability. The auxiliary power system comprises the auxiliary power unit (APU), the electronic control box (ECB), and mounting hardware. The APU used on the A380 is the PW 980A. The APU primarily provides air to power the Analysis Ground Station (AGS) on the ground and to start the engines. The AGS is a semi-automatic flight-data analysis system that helps to optimise maintenance management and reduce costs. The APU also powers two 120 kVA electric generators that provide auxiliary electric power to the aircraft. There is also a ram air turbine (RAT) with a 70 kVA generator. Passenger provisions The A380-800's cabin has of usable floor space, 40% more than the next largest airliner, the Boeing 747-8. The cabin has features to reduce traveller fatigue, such as a quieter interior and higher pressurisation than previous generations of aircraft; the A380 is pressurised to the equivalent altitude of up to . It has 50% less cabin noise, 50% more cabin area and volume, larger windows, bigger overhead bins, and more headroom than the 747-400. Seating options range from a 3-room "residence" in first class to 11-across in economy. A380 economy seats are up to wide in a 10-abreast configuration, compared with the 10-abreast configuration on the 747-400 that typically has seats wide. On other aircraft, economy seats range from in width. The A380's upper and lower decks are connected by two stairways, one fore and one aft, both wide enough to accommodate two passengers side by side; this cabin arrangement allows multiple seat configurations. The maximum certified carrying capacity is 853 passengers in an all-economy-class layout; Airbus lists the "typical" three-class layout as accommodating 525 passengers, with 10 first, 76 business, and 439 economy class seats. Airline configurations range from Korean Air's 407 passengers to Emirates' two-class 615 seats and average around 480–490 seats. Air Austral's proposed 840-passenger layout never came to fruition. The A380's interior illumination system uses bulbless LEDs in the cabin, cockpit, and cargo decks. The LEDs in the cabin can be altered to create an ambience simulating daylight, night, or intermediate levels. 
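The weight benefit of the 350 bar hydraulic system mentioned above can be seen with a simple force balance: an actuator's force equals pressure times piston area, so for the same force a higher-pressure system needs a proportionally smaller piston. The force value below is an arbitrary illustrative figure, not an A380 actuator specification.

```python
# Why 350 bar hydraulics allow smaller, lighter actuators than 210 bar systems:
# for a given actuator force, the required piston area scales inversely with pressure.
import math

force_n = 100_000  # ~10 tonnes of actuator force; an arbitrary illustrative value
for pressure_bar in (210, 350):
    area_m2 = force_n / (pressure_bar * 1e5)          # 1 bar = 1e5 Pa
    bore_mm = 2 * math.sqrt(area_m2 / math.pi) * 1000
    print(f"{pressure_bar} bar: piston area {area_m2 * 1e4:.1f} cm², bore ≈ {bore_mm:.0f} mm")
# At 350 bar the piston needs only 210/350 = 60% of the area required at 210 bar,
# hence smaller actuators and thinner pipelines throughout the system.
```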
On the outside of the aircraft, HID lighting is used for brighter illumination. Airbus's publicity has stressed the comfort and space of the A380 cabin, and advertised onboard relaxation areas such as bars, beauty salons, duty-free shops, and restaurants. Proposed amenities resembled those installed on earlier airliners, particularly 1970s wide-body jets, which largely gave way to regular seats for greater passenger capacity. Airbus has acknowledged that some cabin proposals were unlikely to be installed, and that it was ultimately the airlines' decision how to configure the interior. Industry analysts suggested that implementing customisation has slowed the production speeds, and raised costs. Due to delivery delays, Singapore Airlines and Air France debuted their seat designs on different aircraft prior to the A380. Initial operators typically configured their A380s for three-class service, while adding extra features for passengers in premium cabins. Launch customer Singapore Airlines introduced partly enclosed first-class suites on its A380s in 2007, each featuring a leather seat with a separate bed; center suites could be joined to create a double bed. A year later, Qantas debuted a new first-class seat-bed and a sofa lounge at the front of the upper deck on its A380s, and in 2009, Air France unveiled an upper deck electronic art gallery. In late 2008, Emirates introduced "shower spas" in first class on its A380s allowing each first class passenger five minutes of hot water, drawing on 2.5 tonnes of water, although only 60% of it was used. Etihad Airways and Qatar Airways also have a bar lounge and seating area on the upper deck, while Etihad has enclosed areas for two people each. In addition to lounge areas, some A380 operators have installed amenities consistent with other aircraft in their respective fleets, including self-serve snack bars, premium economy sections, and redesigned business-class seating. The Hamburg Aircraft Interiors Expo in April 2015 saw the presentation of an 11-seat row economy cabin for the A380. Airbus is reacting to a changing economy; the recession which began in 2008 saw a drop in market percentage of first class and business seats to six percent and an increase in budget economy travellers. Among other causes is the reluctance of employers to pay for executives to travel in First or Business Class. Airbus' chief of cabin marketing, Ingo Wuggestzer, told Aviation Week and Space Technology that the standard three-class cabin no longer reflected market conditions. The 11-seat row on the A380 is accompanied by similar options on other widebodies: nine across on the Airbus A330 and ten across on the A350. Integration with infrastructure and regulations Ground operations In the 1990s, aircraft manufacturers were planning to introduce larger planes than the Boeing 747. In a common effort of the International Civil Aviation Organization (ICAO) with manufacturers, airports and its member agencies, the "80-metre box" was created, the airport gates allowing planes up to wingspan and length to be accommodated. Airbus designed the A380 according to these guidelines, and to operate safely on Group V runways and taxiways with a loadbearing width. The US FAA initially opposed this, then in July 2007, the FAA and EASA agreed to let the A380 operate on runways without restrictions. The A380-800 is approximately 30% larger in overall size than the 747-400. 
Runway lighting and signage may need changes to provide clearance to the wings and avoid blast damage from the engines. Runways, runway shoulders and taxiway shoulders may need to be stabilised to reduce the likelihood of foreign object damage caused to (or by) the outboard engines, which are more than from the centre line of the aircraft, compared to for the 747-400 and 747-8. Airbus measured pavement loads using a 540-tonne (595 short tons) ballasted test rig, designed to replicate the landing gear of the A380. The rig was towed over a section of pavement at Airbus's facilities that had been instrumented with embedded load sensors. It was determined that the pavement of most runways would not need to be reinforced despite the higher weight, as the load is spread over a total of 22 wheels, more than on other passenger aircraft, so the ground pressure is lower (a rough per-wheel comparison is sketched at the end of this section). The A380 undercarriage consists of four main landing gear legs and one nose leg (a layout similar to that of the 747), with the two inboard landing gear legs each supporting six wheels. The A380 requires service vehicles with lifts capable of reaching the upper deck, as well as tractors capable of handling the A380's maximum ramp weight. When using two jetway bridges the boarding time is 45 minutes, and when using an extra jetway to the upper deck it is reduced to 34 minutes. The A380 has an airport turnaround time of 90–110 minutes. In 2008, the A380 test aircraft were used to trial the modifications made to several airports to accommodate the type. Takeoff and landing separation As of 2023, the A380 is the only aircraft in wake turbulence category Super (J). In 2005, the ICAO recommended that provisional separation criteria for the A380 on takeoff and landing be substantially greater than for the 747 because preliminary flight test data suggested a stronger wake turbulence. These criteria were in effect while the ICAO's wake vortex steering group, with representatives from the JAA, Eurocontrol, the FAA, and Airbus, refined its 3-year study of the issue with additional flight testing. In September 2006, the working group presented its first conclusions to the ICAO. In November 2006, the ICAO issued new interim recommendations. Replacing a blanket separation for aircraft trailing an A380 during approach, the new distances were , and respectively for non-A380 "Heavy", "Medium", and "Light" ICAO aircraft categories. These compared with the , and spacing applicable to other "Heavy" aircraft. Another A380 following an A380 should maintain a separation of . On departure behind an A380, non-A380 "Heavy" aircraft are required to wait two minutes, and "Medium"/"Light" aircraft three minutes, for time-based operations. The ICAO also recommends that pilots append the term "Super" to the aircraft's callsign when initiating communication with air traffic control, to distinguish the A380 from "Heavy" aircraft. In August 2008, the ICAO issued revised approach separations of for Super (another A380), for Heavy, for medium/small, and for light. In November 2008, an incident on a parallel runway during crosswinds made the Australian authorities change procedures for those conditions. Maintenance As the A380 fleet grows older, airworthiness authority rules require certain scheduled inspections by approved aircraft tool shops. The increasing fleet size (at the time projected to reach 286 aircraft in 2020) was expected to drive maintenance and modification costs of $6.8 billion for 2015–2020, of which $2.1 billion was for engines. 
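The pavement-loading point above can be made concrete by comparing approximate load per wheel across types. The weights and wheel counts below are approximate public figures (assumptions, not taken from this article), and real pavement assessments also account for tyre pressure, gear geometry and pavement type.

```python
# Rough load-per-wheel comparison behind the pavement-loading discussion above.
# Weights and wheel counts are approximate public figures; real ACN/PCN analysis
# also considers tyre pressure, gear geometry and pavement structure.
aircraft = {
    "A380-800":  {"mtow_t": 575, "wheels": 22},
    "747-400":   {"mtow_t": 397, "wheels": 18},
    "777-300ER": {"mtow_t": 352, "wheels": 14},
}
for name, a in aircraft.items():
    print(f"{name:10s} ≈ {a['mtow_t'] / a['wheels']:.0f} t per wheel")
# Despite a far higher total weight, the A380's load per wheel is broadly in line
# with other widebodies, which is why most runways did not need reinforcement.
```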
Emirates performed its first 3C check, lasting 55 days, in 2014. During lengthy shop stays, some airlines use the opportunity to install new interiors. Operational history In February 2009, the one millionth A380 passenger flew with Singapore Airlines, and by May of that year 1,500,000 passengers had flown on 4,200 flights. Air France received its first A380 in October 2009. Lufthansa received its first A380 in May 2010. By July 2010, the 31 A380s then in service had transported 6 million passengers on 17,000 flights between 20 international destinations. Airbus delivered the 100th A380 on 14 March 2013 to Malaysia Airlines. By June 2014, over 65 million passengers had flown on the A380, and more than 100 million passengers (averaging 375 per flight) by September 2015, with an availability of 98.5%. In 2014, Emirates stated that its A380 fleet had load factors of 90–100%, and that the popularity of the aircraft with its passengers had not decreased in the past year. On 16 December 2021, the type's largest customer, Emirates, received its 123rd A380 in Hamburg, which was the 251st and last Superjumbo delivered by Airbus. The airline's strategy enabled A380 teams to develop new innovations on an ongoing basis and raise the aircraft's operational reliability to up to 99.3%, a level never seen before on a quad-jet airliner. Many of the innovations developed for the Emirates A380 cabin were firsts for Airbus, such as the first-class showers, lighting scenarios, and the recent premium economy cabin. The close collaboration shaped the identity of the A380 over the years and continues to influence the passenger experience today. In all, the global A380 fleet had carried over 300 million passengers to more than 70 destinations and completed more than 800,000 flights over 7.3 million block hours, with 99 percent operational reliability and no hull-loss accidents. Over 50% of A380 capacity is from, to or within the Asia-Pacific region, of which around 15% is on regional flights within Asia (OAG 2017). Proposed variants While the A380-800 was the only model put into production, other variants were proposed that might have made the design more appealing in shifting market conditions. A380F Airbus had offered a cargo variant, the A380F, since at least June 2005, capable of transporting a maximum payload over a range. It would have had 7% better payload and better range than the Boeing 747-8F, but also higher trip costs. It would have had the largest payload capacity of any freighter aircraft except the Antonov An-225 Mriya. Production was suspended until the A380 production lines had settled, with no firm availability date. The A380F was displayed on the Airbus website until at least January 2013, but had been removed by April of that year. A patent for a "combi" version was applied for. This version would offer the flexibility of carrying both passengers and cargo, and would be rapidly reconfigurable to expand or contract the cargo and passenger areas as needed for a given flight. A380 Stretch, A380-900 At launch in December 2000, a 656-seat A380-200 was proposed as a derivative of the 555-seat baseline, called the A380 Stretch. In November 2007, Airbus top sales executive and chief operating officer John Leahy confirmed plans for another enlarged variant, the A380-900, with more seating space than the A380-800. The A380-900 would have had a seating capacity of 650 passengers in standard configuration and approximately 900 passengers in an economy-only configuration. 
Airlines that expressed an interest in the A380-900 included Emirates, Virgin Atlantic, Cathay Pacific, Air France, KLM, Lufthansa, Kingfisher Airlines, and leasing company ILFC. In May 2010, Airbus announced that A380-900 development would be postponed until production of the A380-800 stabilised. On 11 December 2014, at the annual Airbus Investor Day forum, Airbus CEO Fabrice Bregier controversially announced, "We will one day launch an A380neo and one day launch a stretched A380". This statement followed speculation sparked by Airbus CFO Harald Wilhelm that Airbus could possibly axe the A380 ahead of its time due to softening demand. On 15 June 2015, John Leahy, Airbus's chief operating officer for customers, stated that Airbus was again looking at the A380-900 programme. Airbus's newest concept would be a stretch of the A380-800 offering 50 seats more—not 100 seats as originally envisaged. This stretch would be tied to a potential re-engining of the A380-800. According to Flight Global, an A380-900 would make better use of the A380's existing wing. A380neo On 15 June 2015, Reuters reported that Airbus was discussing an improved and stretched version of the A380 with at least six customers. The aircraft, called the A380neo, featured new engines and would accommodate an additional fifty passengers. Deliveries to customers were planned for sometime in 2020 or 2021. On 19 July 2015, Airbus CEO Fabrice Brégier stated that the company will build a new version of the A380 featuring new improved wings and new engines. Speculation about the development of a so-called A380neo ("neo" for "new engine option") had been going on for a few months after earlier press releases in 2014, and in 2015, the company was considering whether to end production of the type prior to 2018 or develop a new A380 variant. Later it was revealed that Airbus was looking at both the possibility of a longer A380 in line of the previously planned A380-900 and a new engine version, i.e. A380neo. Brégier also revealed that the new variant would be ready to enter service by 2020. The engine would most likely be one of a variety of all-new options from Rolls-Royce, ranging from derivatives of the A350's XWB-84/97 to the future Advance project due at around 2020. On 3 June 2016, Emirates President Tim Clark stated that talks between Emirates and Airbus on the A380neo have "lapsed". On 12 June 2017, Fabrice Brégier confirmed that Airbus would not launch an A380neo, stating "...there is no business case to do that, this is absolutely clear." However, Brégier stated it would not stop Airbus from looking at what could be done to improve the performance of the aircraft. One such proposal is a wingspan extension to reduce drag and increase fuel efficiency by 4%, though further increase is likely to be seen on the aircraft with new Sharklets like on the A380plus. Tim Clark stated the proposed re-engining would have offered a 12–14% fuel-burn reduction with an enhanced Trent XWB. In June 2023, despite A380 production having ceased, Clark renewed his plea for a re-engined A380neo, suggesting that a next-generation Rolls-Royce UltraFan could give a 25% reduction in fuel burn and emissions. A380plus At the June 2017 Paris Air Show, Airbus proposed an enhanced variant, called the A380plus, with 13% lower costs per seat, featuring up to 80 more seats through better use of cabin space, split scimitar winglets and wing refinements allowing a 4% fuel economy improvement, and longer aircraft maintenance intervals with less downtime. 
The A380plus' maximum takeoff weight would have been increased by to , allowing it to carry more passengers over the same range or increase the range by . Winglet mockups, high, were displayed on the MSN04 test aircraft at Le Bourget. Wing twist would have been modified and camber changed by increasing its height by between Rib 10 and Rib 30, along with upper-belly fairing improvements. The in-flight entertainment, the flight management system and the fuel pumps would be from the A350 to reduce weight and improve reliability and fuel economy. Light checks for the A380plus would be required after 1,000 h instead of 750 h and heavy check downtime would be reduced to keep the aircraft flying for six days more per year. Market Size In its 2000 Global Market Forecast, Airbus estimated a demand for 1,235 passenger Very Large Aircraft (VLA) with more than 400 seats: 360 up to 2009 and 875 by 2019. In late 2003, Boeing forecast 320 "Boeing 747 and larger" passenger aircraft over 20 years, close to the 298 orders actually placed for the A380 and 747-8 passenger airliners as of March 2020. In 2007, Airbus estimated a demand for 1,283 VLAs in the following 20 years if airport congestion remains constant, up to 1,771 VLAs if congestion increases, with most deliveries (56%) in Asia-Pacific, and 415 very large, 120-tonne plus freighters. For the same period, Boeing was estimating the demand for 590 large (747 or A380) passenger airliners and 630 freighters. Estimates for the total over a twenty-year period have varied from 400 to over 1,700. Frequency and capacity In 2013, Cathay Pacific and Singapore Airlines needed to balance frequency and capacity. China Southern struggled for two years to use its A380s from Beijing, and finally received Boeing 787s in its base in Guangzhou, but where it cannot command a premium, unlike Beijing or Shanghai. In 2013, Air France withdrew A380 services to Singapore and Montreal and switched to smaller aircraft. In 2014, British Airways replaced three 777 flights between London and Los Angeles with two A380 per day. Emirates' Tim Clark saw a large potential for East Asian A380-users, and criticised Airbus' marketing efforts. As many business travellers prefer more choices offered by greater flight frequency achieved by flying any given route multiple times on smaller aircraft, rather than fewer flights on larger planes, United Airlines observed the A380 "just doesn't really work for us" with a much higher trip cost than the Boeing 787. At the A380 launch, most Europe-Asia and transpacific routes used Boeing 747-400s at fairly low frequencies but, since then, routes proliferated with open skies, and most airlines downsized, offering higher frequencies and more routes. The huge capacity offered by each flight eroded the yield: North America was viewed as 17% of the market but the A380 never materialised as a 747 replacement, with only 15 747s remaining in passenger service in November 2017 for transpacific routes, where time zones restrict potential frequency. Consolidation changed the networks, and US majors constrained capacity and emphasised daily frequencies for business traffic with midsize widebodies like the 787, to extract higher yields; the focus being on profits, with market share ceded to Asian carriers. The 747 was largely replaced on transatlantic flights by the 767, and on the transpacific flights by the 777; newer, smaller aircraft with similar seat-mile costs have lower trip costs and allow more direct routes. 
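The A380plus claim above (13% lower cost per seat from roughly 80 extra seats plus a 4% fuel saving) can be sanity-checked with back-of-the-envelope arithmetic. The 525-seat baseline comes from the typical three-class layout quoted earlier, and the assumption that fuel is about half of trip cost follows the Wings section; treating non-fuel trip costs as unchanged is a simplification, so the result is only indicative.

```python
# Back-of-the-envelope check of the A380plus "13% lower cost per seat" claim.
# Baseline seat count, fuel share and the constant non-fuel cost are assumptions.
baseline_seats, extra_seats = 525, 80
fuel_share, fuel_saving = 0.50, 0.04       # fuel ~50% of trip cost; 4% fuel economy gain

relative_trip_cost = (1 - fuel_share) + fuel_share * (1 - fuel_saving)   # old trip cost = 1.0
relative_cost_per_seat = relative_trip_cost * baseline_seats / (baseline_seats + extra_seats)
print(f"cost per seat ≈ {(1 - relative_cost_per_seat) * 100:.0f}% lower")
# ≈ 15% lower under these assumptions, the same order as the quoted 13%, and most
# of the gain comes from the extra seats rather than the aerodynamic refinements.
```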
Cabin 'densification', to lower unit costs, could aggravate the overcapacity described above. Production In 2005, 270 sales were thought necessary to reach break-even, and with 751 expected deliveries the programme's internal rate of return was projected at 19%; due to ramp-up disruptions leading to cost overruns and delayed deliveries, the break-even figure rose to 420 in 2006. In 2010, EADS CFO Hans Peter Ring said that break-even could be achieved by 2015, when 200 deliveries were projected. In 2012, Airbus clarified that the aircraft's production cost would be less than its sales price. On 11 December 2014, Airbus chief financial officer Harald Wilhelm hinted at the possibility of ending the programme in 2018, disappointing Emirates president Tim Clark. Airbus shares fell as a result. Airbus responded to the protests by playing down the possibility that the A380 would be abandoned, instead emphasising that enhancing the aeroplane was a likelier scenario. On 22 December 2014, as the jet was about to break even, Airbus CEO Fabrice Brégier ruled out cancelling it. Ten years after its first flight, Brégier said it was "almost certainly introduced ten years too early". While no longer losing money on each plane sold, Airbus admitted that the company would never recoup the $25 billion investment it made in the project. Airbus consistently forecast demand for 1,400 VLAs over 20 years, still doing so in 2017, and aimed to secure a 50% share, or up to 700 units; in the event it delivered 215 aircraft in 10 years, reaching a rate of three per month but never the four-per-month target set after the ramp-up to achieve more than 350 deliveries, and output later declined towards 0.5 a month. Boeing saw the VLA market as too small to retain as a category in its 2017 forecast, and its vice-president of marketing, Randy Tinseth, did not believe Airbus would deliver the rest of the backlog. Richard Aboulafia predicted a 2020 final delivery, with unpleasant losses due to "hubris, shoddy market analysis, nationalism and simple wishful thinking". In 2017, the A380 fleet exceeded the number of remaining passenger B747s, which had declined from 740 aircraft when the A380 was launched in 2000 to 550 units when the A380 was introduced in 2007, and to around 200 ten years later. However, the market-share battle has shifted to large single-aisles and 300-seat twin-aisles. Cost The list price of an A380 was US$432.6 million. Negotiated discounts made the actual prices much lower, and industry experts questioned whether the A380 project would ever pay for itself. The first aircraft was sold and leased back by Singapore Airlines in 2007 to Dr. Peters for $197 million. In 2016, IAG's Willie Walsh said he could add a few A380s, but also that he found the price of new aircraft "outrageous" and would source them from the second-hand market. AirInsight estimated its hourly operating cost at $26,000, or around $50 per seat hour (when configured for only seats), which compares to $44 per seat hour for a Boeing 777-300ER and $90 per seat hour for a Boeing 747-400. The A380 was designed with large wing and tail surfaces to accommodate a planned stretch; this resulted in a high empty weight per seat. The stretch that would have taken advantage of this never occurred, and the A380's cost per seat was expected to be matched by the A350-1000 and 777-9. Economic aspects With a theoretical maximum seating capacity of 853 seats, which is not used by any airline, the Airbus A380 consumes 2.4 litres of kerosene per 100 passenger-kilometres. With a reduced seating capacity of 555, this rises to 3.5 litres per 100 passenger-kilometres, and to 5.2 litres per 100 passenger-kilometres in the smallest possible variant with only 362 seats. 
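The per-passenger figures above are simply a total fuel burn per kilometre divided across the seats on board; the sketch below derives the implied total burn for each quoted configuration. Only the seat counts and per-passenger figures come from the text; the implied totals are derived values, not Airbus specifications.

```python
# Relating the per-passenger fuel figures above to total burn:
# litres per 100 passenger-km = (litres per 100 km) / passengers carried.
# Seat counts and per-passenger figures are those quoted above; the totals are derived.
configs = [(853, 2.4), (555, 3.5), (362, 5.2)]   # (seats, litres per 100 passenger-km)
for seats, l_per_100pkm in configs:
    total_l_per_100km = l_per_100pkm * seats
    print(f"{seats} seats: implied total burn ≈ {total_l_per_100km:,.0f} litres per 100 km")
# The implied totals cluster between roughly 1,880 and 2,050 litres per 100 km; they
# are not identical because a lighter, lower-density cabin also burns somewhat less fuel.
```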
Secondary market As of 2015, several airlines expressed their interest in selling their aircraft, partially coinciding with expiring lease contracts for the aircraft. Several in-service A380s were offered for lease to other airlines. The suggestion prompted concerns about the potential impact on new sales for Airbus, although these were dismissed by Airbus COO John Leahy, who stated that "Used A380s do not compete with new A380s", noting that the second-hand market is more interesting for parties otherwise looking to buy smaller aircraft such as the Boeing 777. After Malaysia Airlines was unable to sell or lease its six A380s, it decided to refurbish the aircraft with seating for 700 and transfer them to a subsidiary carrier for religious pilgrimage flights. As Malaysia Airlines started receiving six A350s to replace its A380s in December 2017, the new subsidiary was to serve the Hajj and Umrah market with the A380s from the third quarter of 2018, and the fleet could be expanded beyond six aircraft in 2020 to 2022. The cabin was to have 36 business seats and 600 economy seats, with a 712-seat reconfiguration possible within five days. The fleet could be chartered for half the year to the tourism industry, such as cruise shipping, and would be able to operate for the next 40 years if oil prices stayed low. With the aircraft due to be parked by June 2018 ahead of reconfiguration, MAS confirmed the plans and said it would also use them during peak periods on high-traffic routes such as London. In August 2017, it was announced that Hi Fly would lease two used aircraft. The Portuguese ACMI/charter airline would use the aircraft for markets where high capacity is needed and airports where slots are scarce. The first aircraft was scheduled to begin commercial operations during the first quarter of 2018. Hi Fly was to receive its A380s from mid-2018 in a 471-seat configuration: 399 seats on the main deck, with 60 business-class and 12 first-class seats on the upper deck, the Singapore Airlines layout. Hi Fly first used one of its A380s on 1 August 2018 for a one-off flight to enable Thomas Cook Airlines to repatriate passengers from Rhodes to Copenhagen following IT problems at the Greek airport. The same aircraft was then wet-leased to Norwegian to operate its evening London-New York service for several weeks in August 2018, to alleviate availability issues with its Boeing 787s affected by Trent 1000 engine problems; Air Austral also signed a deal to wet-lease an A380 from Hi Fly while one of its 787s was grounded for three months of Trent 1000 inspections. As of December 2019, Hi Fly had leased one used A380. Amedeo, primarily an A380 lessor and the largest one with 22 aircraft, mostly leased to Emirates, wanted to find a use for them as their leases begin to expire from 2022, and was studying whether there is demand to wet-lease them. Swiss aircraft broker Sparfell & Partners planned to convert some of Dr. Peters' four ex-SIA A380s for head-of-state or VVIP transport for under $300 million apiece, less than a new Boeing 777 or Airbus A330. As of November 2018, Air France was planning to return five of its A380s to lessors by the end of 2019 and refurbish its other five with new interiors by 2020 for $51 million per aircraft. By July 2019, Air France revised this plan and intended to phase out all ten of its A380s by 2022 as part of an "accelerated" retirement plan, replacing them with no more than nine twin-engined wide-body aircraft. 
The A330-900, A350-900 and 787-9 were being evaluated as potential replacements. Following the cancellation of the programme in February 2019, the residual value of existing aircraft is in doubt. While Amedeo argued that cancellation should benefit the value, this will depend on whether any new airlines are prepared to adopt second-hand A380s, and how many existing users continue to operate the aircraft. Even the teardown value is questionable, in that the engines, usually the most valuable part of a scrap aircraft, are not used by any other models. Teardown and second-hand market With four A380s leased to Singapore Airlines having been returned between October 2017 and March 2018, Dr. Peters feared a weak aftermarket and is considering scrapping them, although they are on sale for a business jet conversion, but on the other hand Airbus sees a potential for African airlines and Chinese airlines, Hajj charters and its large Gulf operators. An A380 parted out may be worth $30 million to $50 million if it is at half-life. Teardown specialists have declined offers for several aircraft at part-out prices due to high risk as a secondary market is uncertain with $30 to $40 million for the refurbishment, but should be between $20 and $30 million to be viable. When the aircraft were proposed to British Airways, Hi Fly and Iran Air, BA did not want to replace its Boeing 747s until 2021, while Iran Air faced political uncertainty and Hi Fly did not have a convincing business case. Consequently, Dr. Peters recommended to its investors on 28 June 2018 to sell the aircraft parts with VAS Aero Services within two years for US$45 million, quickly for components like the landing gear or the APU. Rolls-Royce Trent 900 leasing beyond March 2019 should generate US$480,000 monthly for each aircraft before selling the turbofans by 2020. With a total revenue of US$80 million per aircraft, the overall return expected is 145–155% while 72% and 81% of their debt had already been repaid. The fifth plane coming back from SIA, owned by Doric, has been leased by Hi Fly Malta with a lease period of "nearly 6 years". Hi Fly Malta became the first operator of second-hand A380 (MSN006). Norwegian Long Haul briefly leased Hi Fly Malta A380 in August 2018, which operated the aircraft following engine problems with their Dreamliner fleet. Norwegian leased the A380 again in late 2018 to help deal with the passenger backlog as a result of the Gatwick Airport drone incident. Two others returned from Singapore Airlines in the coming weeks (June 2018) but they could stay with an existing Asian A380 flag carrier. The teardown value includes $32–$33 million from the engines in 2020 and $4 million from leasing them until then, while the value of a 2008 A380 would be $78.4 million in 2020 and its monthly lease in 2018 would be $929,000. The two aircraft have returned 3.8–4.2% per year since 2008 but the 145–155% return is lower than the 220% originally forecast. Of the nearly 500 made, 50 747-400s were sold in the secondary market, including only 25 to new customers. These are among the first A380s delivered, lacking the improvements and weight savings of later ones. The first two A380s delivered to Singapore Airlines (MSN003 and MSN005) flew to Tarbes, France, to be scrapped. Their engines and some components had been dismantled and removed while the livery was painted over in white. 
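The roughly US$80 million per-aircraft return quoted above can be reconciled, approximately, from the component figures stated in the same passage; the sketch below is only an arithmetic restatement of those figures (all in millions of US dollars).

```python
# Approximate reconciliation of the ~US$80M per-aircraft return quoted above,
# using only the component figures stated in the text (millions of USD).
parts_sale        = 45.0   # airframe parts sold via VAS Aero Services
engine_sale_2020  = 32.5   # midpoint of the stated $32–33M engine value
engine_lease      = 4.0    # stated Trent 900 lease revenue until the engines are sold

total = parts_sale + engine_sale_2020 + engine_lease
print(f"≈ US${total:.1f}M per aircraft")   # ≈ US$81.5M, close to the quoted ~US$80M
```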
As of September 2019, Emirates initiated its A380 retirement planwhich will see the type remain in service until at least 2035by retiring two aircraft that were due for a major overhaul, and using them as parts donors for the rest of the fleet. Emirates does not see any demand in the second-hand market, but is indifferent in that the retired aircraft have already been fully written down and thus have no residual value. As further aircraft are retired, Emirates-owned airframes will continue to be used for parts, while leased airframes will be returned to the lessors. One such return to lessor Doric was purchased by Emirates for £25.3 million in late 2022, as spare parts. Orders and deliveries Fourteen customers have ordered and taken delivery of the A380 as of April 2019. Total orders for the A380 stand at 251 . The biggest customer is Emirates, which has committed to order a total of 123 A380s as of 14 February 2019. One VIP order was made in 2007 but later cancelled by Airbus. The A380F version attracted 27 orders, before they were either cancelled (20) or converted to A380-800 (7) following the production delay and the subsequent suspension of the freighter programme. Delivery takes place in Hamburg for customers from Europe and the Middle East and in Toulouse for customers from the rest of the world. EADS explained that deliveries in 2013 were to be slowed temporarily to accommodate replacement of the wing rib brackets where cracks were detected earlier in the existing fleet. In 2013, in expectation of raising the number of orders placed, Airbus announced "attractable discounts" to airlines who placed large orders for the A380. Soon after, at the November 2013 Dubai Air Show, Emirates ordered 150 777X and Etihad Airways ordered 50 aircraft, totalling $20 billion. In late July 2014, Airbus announced that it had terminated five A380 firm orders from the Japanese low-cost carrier, Skymark Airlines, citing concerns over the airline's financial performance. In 2016, the largest Japanese carrier, All Nippon Airways (ANA), took over three of the orders and the remaining two that were already produced and put into long-term storage were taken up later by the main customer, Emirates. Qantas planned to order eight more aircraft but froze its order while the airline restructured its operations. Qantas eventually cancelled its order in February 2019 amid doubts over the A380's future. Amedeo, an aircraft lessor that ordered 20 A380s, had not found a client for the airliner and eventually cancelled their order in 2019. Virgin Atlantic ordered six A380s in 2001 but never took delivery and later cancelled them in 2018. In June 2017, Emirates had 48 orders outstanding, but due to lack of space in Dubai Airport, it deferred 12 deliveries by one year and would not take any in 2019–20 before replacing its early airliners from 2021. There were open production slots in 2019, and Airbus reduced its production rate in 2017–2018 at 12 per year. The real backlog is much smaller than the official 107 with 47 uncertain orders: 20 commitments for the A380-specialized lessor Amedeo which commits to production only once aircraft are placed, eight for Qantas which wants to keep its fleet at 12, six for Virgin Atlantic which does not want them any more and three ex Transaero for finance vehicle Air Accord. At its 100th delivery ceremony, Emirates CEO Ahmed bin Saeed Al Maktoum was hoping to order new A380s at the November 2017 Dubai Air Show. 
Emirates did not need the small front staircase and eleven-abreast economy of the A380plus concept, but wanted Airbus to commit to continuing production for at least 10 years. On 18 January 2018, Airbus secured a preliminary agreement from Emirates for up to 36 A380s, to be delivered from 2020, valued at $16 billion at list prices. The contract was signed in February 2018, comprising a firm order for 20 A380s and options on 16 more. In early 2019, Airbus confirmed it was in discussions with Emirates over its A380 contract. If the A380's only stable client were to drop the type, Airbus could cease production of the superjumbo. Emirates was at odds with Rolls-Royce over shortfalls in fuel savings from the Trent 900s, and could switch its order for 36 A380s to the smaller A350. The A350 could also replace its provisional order for 40 Boeing 787-10s, placed in 2017, as engine margins on the 787 were insufficient for the hot Dubai weather. On 14 February 2019, Emirates decided to cancel its order for 39 planes, opting to replace them with A350s and A330neos. Airbus stated that this cancellation would bring the A380's production to an end once the last unfilled orders were delivered in 2021. On 21 March 2019, All Nippon Airways received the first of its three A380s, painted in the Sea Turtle livery. Called the ANA Blue, this A380 was to be used for three flights a week between Tokyo and Honolulu. In October 2021, Emirates announced it would receive its final three A380s, with the last aircraft delivered in December 2021, ending production of the A380. Timeline Cumulative orders and deliveries Data as of December 2021. Operators There were 234 aircraft (of 251 delivered) in service with 12 operators, with Emirates being the largest operator with 121 A380s in its fleet.
Current operators
All Nippon Airways – 3 currently operated, in service since 24 May 2019
Asiana Airlines – 6 currently operated, in service since 13 June 2014. To be retired in 2026.
British Airways – 12 currently operated, in service since 2 August 2013
Emirates – 123 currently operated, in service since 1 August 2008. Planned to be retired by 2038.
Etihad Airways – 4 currently operated, in service since 27 December 2014
Korean Air – 9 currently operated, in service since 17 June 2011. To be retired in 2026.
Lufthansa – 4 currently operated, in service since 10 June 2011. To be retired after 2030.
Qantas – 10 currently operated, in service since 20 October 2008. To be retired from 2032.
Qatar Airways – 10 currently operated, in service since 10 October 2014. To be retired.
Singapore Airlines – 12 currently operated, in service since 25 October 2007
Former operators The following airlines operated A380 aircraft and have since phased them out:
Air France – 10 operated from 2009 to 2020, retired early due to COVID-19
China Southern Airlines – 5 operated from 2011 to 2022, retired due to high operation costs
Hi Fly Malta – 1 operated from 2018 to 2020, retired early due to COVID-19
Malaysia Airlines – 6 operated from 2012 to 2020, retired due to high operation costs
Thai Airways – 6 operated from 2012 to 2020, retired due to restructuring efforts
Future operators Global Airlines plans to operate a fleet of four second-hand A380s. Aircraft on display The fourth test A380 (MSN4) was donated to the Musée de l'air et de l'espace at Le Bourget in 2017. 
After several months of restoration, it was put on display on the apron in 2018, near the museum's Boeing 747-100, making the museum the first in the world where both large airliners can be seen together. At the same time, Airbus donated the second test A380 (MSN2) to the Aeroscopia museum at Toulouse-Blagnac Airport, Toulouse, along with the first Airbus A320 and an Airbus A340 that had also previously been used by the company for test flights. Incidents The A380 has never been involved in a hull-loss accident, but it has been involved in two notable aviation accidents without any injuries, both of which were caused by uncontained engine failures: On 4 November 2010, Qantas Flight 32, en route from Singapore Changi Airport to Sydney Airport, suffered an uncontained engine failure, resulting in a series of related problems and forcing the flight to make an emergency landing. The plane safely returned to Singapore. There were no injuries to the passengers, the crew, or people on the ground, despite debris falling onto the Indonesian island of Batam. The damage to the aircraft was sufficient for the event to be classified as an accident. Qantas subsequently grounded all of its A380s that day pending an internal investigation undertaken in conjunction with the engine manufacturer, Rolls-Royce plc. A380s powered by the Rolls-Royce Trent 900 engines were affected, while those powered by the Engine Alliance GP7000 were not. Investigators determined that an oil leak, caused by a defective oil supply pipe, led to an engine fire and subsequent uncontained engine failure. Repairs cost an estimated 139 million (~US$145M). As other Rolls-Royce Trent 900 engines also showed problems with the same oil leak, Rolls-Royce ordered many engines to be changed, including about half of the engines in the Qantas A380 fleet. During the aeroplane's repair, cracks were discovered in wing structural fittings, which also resulted in mandatory inspections of all A380s and subsequent design changes. On 30 September 2017, Air France Flight 66, an Engine Alliance GP7270-powered Airbus A380, suffered an apparent uncontained engine failure while operating from Paris Charles de Gaulle Airport to Los Angeles International Airport. The aircraft safely diverted to CFB Goose Bay, Canada. Specifications (A380-800, Trent engines) Aircraft Type Designations
Acer rubrum
Acer rubrum, the red maple, also known as swamp maple, water maple, or soft maple, is one of the most common and widespread deciduous trees of eastern and central North America. The U.S. Forest Service recognizes it as the most abundant native tree in eastern North America. The red maple ranges from southeastern Manitoba around the Lake of the Woods on the border with Ontario and Minnesota, east to Newfoundland, south to Florida, and southwest to East Texas. Many of its features, especially its leaves, are quite variable in form. At maturity, it often attains a height around . Its flowers, petioles, twigs, and seeds are all red to varying degrees. Among these features, however, it is best known for its brilliant deep scarlet foliage in autumn. Over most of its range, red maple is adaptable to a very wide range of site conditions, perhaps more so than any other tree in eastern North America. It can be found growing in swamps, on poor, dry soils, and almost anywhere in between. It grows well from sea level to about . Due to its attractive fall foliage and pleasing form, it is often used as a shade tree for landscapes. It is used commercially on a small scale for maple syrup production and for its medium to high quality lumber. It is also the state tree of Rhode Island. The red maple can be considered weedy or even invasive in young, highly disturbed forests, especially frequently logged forests. In a mature or old-growth northern hardwood forest, red maple only has a sparse presence, while shade-tolerant trees such as sugar maples, beeches, and hemlocks thrive. By removing red maple from a young forest recovering from disturbance, the natural cycle of forest regeneration is altered, changing the diversity of the forest for centuries to come. Description Though A. rubrum is sometimes easy to identify, it is highly changeable in morphological characteristics. It is a medium to large sized tree, reaching heights of and exceptionally over in the southern Appalachians where conditions favor its growth. The leaves are usually long on a full-grown tree. The trunk diameter often ranges from ; depending on the growing conditions, however, open-grown trees can attain diameters of up to . The trunk remains free of branches until some distance up the tree on forest grown trees, while individuals grown in the open are shorter and thicker with a more rounded crown. Trees on poorer sites often become malformed and scraggly. Generally the crown is irregularly ovoid with ascending whip-like curved shoots. The bark is a pale grey and smooth when the individual is young. As the tree grows the bark becomes darker and cracks into slightly raised long plates. The largest known living red maple is located near Armada, Michigan, at a height of and a bole circumference, at breast height, of . The leaves of the red maple offer the easiest way to distinguish it from its relatives. As with all North American maple trees, they are deciduous and arranged oppositely on the twig. They are typically long and wide with three to five palmate lobes with a serrated margin. The sinuses are typically narrow, but the leaves can exhibit considerable variation. When five lobes are present, the three at the terminal end are larger than the other two near the base. In contrast, the leaves of the related silver maple, A. saccharinum, are much more deeply lobed, more sharply toothed, and characteristically have five lobes. The upper side of A. rubrums leaf is light green and the underside is whitish and can be either glaucous or hairy. 
The leaf stalks are usually red and are up to long. The leaves can turn a characteristic brilliant red in autumn, but can also become yellow or orange on some individuals. Soil acidity can influence the color of the foliage, and trees with female flowers are more likely to produce orange coloration, while male trees produce red. The fall colors of red maple are most spectacular in the northern part of its range, where climates are cooler. The twigs of the red maple are reddish in color and somewhat shiny, with small lenticels. Dwarf shoots are present on many branches. The buds are usually blunt and greenish to reddish in color, generally with several loose scales. The lateral buds are slightly stalked, and collateral buds may be present as well. The buds form in fall and winter and are often visible from a distance due to their large size and reddish tint. The leaf scars on the twig are V-shaped and contain three bundle scars. The flowers are generally unisexual, with male and female flowers appearing in separate sessile clusters, though they are sometimes also bisexual. They appear in late winter to early spring, from December to May depending on elevation and latitude, usually before the leaves. The tree itself is considered polygamodioecious, meaning some individuals are male, some female, and some monoecious. Under the proper conditions, the tree can sometimes switch from male to female, male to hermaphroditic, and hermaphroditic to female. The red maple generally begins blooming when it is about 8 years old, but this varies significantly from tree to tree: some trees may begin flowering when they are 4 years old. The flowers are red with 5 small petals and a 5-lobed calyx, usually at the twig tips. The staminate flowers are sessile. The pistillate flowers are borne on pedicels that grow out while the flowers are blooming, so that eventually the flowers are in a hanging cluster with stems long. The petals are linear to oblong in shape and are pubescent. The pistillate flowers have one pistil formed from two fused carpels with a glabrous superior ovary and two long styles that protrude beyond the perianth. The staminate flowers contain between 4 and 12 stamens, often 8. The fruit is a schizocarp of 2 samaras, each one long. Prior to dehiscence, the wings of the fruit are somewhat divergent at an angle of 50 to 60°. They are borne on long slender pedicels and are variable in color from light brown to reddish. They ripen from April through early June, even before leaf development is altogether complete. After they reach maturity, the seeds are dispersed over a 1- to 2-week period from April through July. Distribution and habitat Acer rubrum is one of the most abundant and widespread trees in eastern North America. It can be found from the south of Newfoundland, through Nova Scotia, New Brunswick, and southern Quebec to the southwest of Ontario, extreme southeastern Manitoba and northern Minnesota; southward through Wisconsin, Illinois, Missouri, eastern Oklahoma, and eastern Texas in its western range; and east to Florida. It has the largest continuous range along the North American Atlantic Coast of any tree that occurs in Florida. In total it ranges from north to south. The species is native to all regions of the United States east of the 95th meridian. The tree's range ends where the mean minimum isotherm begins, namely in southeastern Canada. A. 
rubrum is not present in most of the Prairie Peninsula of the northern Midwest (although it is found in Ohio), the coastal prairie in southern Louisiana and southeastern Texas and the swamp prairie of the Florida Everglades. Red maple's western range stops with the Great Plains where conditions become too dry for it. The absence of red maple from the Prairie Peninsula is most likely due to the tree's poor tolerance of wildfires. Red maple is most abundant in the Northeastern US, the Upper Peninsula of Michigan, and northeastern Wisconsin, and is rare in the extreme west of its range and in the Southeastern US. In several other locations, the tree is absent from large areas but still present in a few specific habitats. An example is the Bluegrass region of Kentucky, where red maple is not found in the dominant open plains, but is present along streams. Here the red maple is not present in the bottom land forests of the Grain Belt, despite the fact it is common in similar habitats and species associations both to the north and south of this area. In the Northeastern US, red maple can be a climax forest species in certain locations, but will eventually give way to sugar maple. A. rubrum does very well in a wide range of soil types, with varying textures, moisture, pH, and elevation, probably more so than any other forest tree in North America. A. rubrum's high pH tolerance means that it can grow in a variety of places, and it is widespread along the Eastern United States. It grows on glaciated as well as unglaciated soils derived from granite, gneiss, schist, sandstone, shale, slate, conglomerate, quartzite, and limestone. Chlorosis can occur on very alkaline soils, though otherwise its pH tolerance is quite high. Moist mineral soil is best for germination of seeds. Red maple can grow in a variety of moist and dry biomes, from dry ridges and sunny, southwest-facing slopes to peat bogs and swamps. While many types of tree prefer a south- or north-facing aspect, the red maple does not appear to have a preference. Its ideal conditions are in moderately well-drained, moist sites at low or intermediate elevations. However, it is nonetheless common in mountainous areas on relatively dry ridges, as well as on both the south and west sides of upper slopes. Furthermore, it is common in swampy areas, along the banks of slow moving streams, as well as on poorly drained flats and depressions. In northern Michigan and New England, the tree is found on the tops of ridges, sandy or rocky upland and otherwise dry soils, as well as in nearly pure stands on moist soils and the edges of swamps. In the far south of its range, it is almost exclusively associated with swamps. Additionally, red maple is one of the most drought-tolerant species of maple in the Carolinas. Red maple is far more abundant today than when Europeans first arrived in North America. It only contributed minimally to old-growth upland forests, and would only form same-species stands in riparian zones. The density of the tree in many of these areas has increased six- to seven-fold, and this trend seems to be continuing, all of which is due to human factors, mainly loss of forest management by Native Americans who managed the forests to enhance acorn production and oak tree growth. This loss of management has been further enhanced by continued heavy logging and a recent trend of young, shrubby forests recovering from past human disturbances. 
Also, the decline of American elm and American chestnut due to introduced diseases has contributed to its spread. Red maple dominates such sites, but largely disappears until it only has a sparse presence by the time a forest is mature. This species is in fact a vital part of forest regeneration in the same way that paper birch is. Because it can grow on a variety of substrates, has a high pH tolerance, and grows in both shade and sun, A. rubrum is a prolific seed producer and highly adaptable, often dominating disturbed sites. While many believe that it is replacing historically dominant tree species in the Eastern United States, such as sugar maples, beeches, oaks, hemlocks and pines, red maple will only dominate young forests prone to natural or human disturbance. In areas disturbed by humans where the species thrives, it can reduce diversity, but in a mature forest, it is not a dominant species; it only has a sparse presence and adds to the diversity and ecological structure of a forest. Extensive use of red maple in landscaping has also contributed to the surge in the species' numbers as volunteer seedlings proliferate. Finally, disease epidemics have greatly reduced the population of elms and chestnuts in the forests of the US. While mainline forest trees continue to dominate mesic sites with rich soil, more marginal areas are increasingly being dominated by red maple. Ecology Red maple's maximum lifespan is 150 years, but most live less than 100 years. The tree's thin bark is easily damaged from ice and storms, animals, and when used in landscaping, being struck by flying debris from lawn mowers, allowing fungi to penetrate and cause heart rot. Its ability to thrive in a large number of habitats is largely due to its ability to produce roots to suit its site from a young age. In wet locations, red maple seedlings produce short taproots with long, well-developed lateral roots; while on dry sites, they develop long taproots with significantly shorter laterals. The roots are primarily horizontal, however, forming in the upper of the ground. Mature trees have woody roots up to long. They are very tolerant of flooding, with one study showing that 60 days of flooding caused no leaf damage. At the same time, they are tolerant of drought due to their ability to stop growing under dry conditions by then producing a second-growth flush when conditions later improve, even if growth has stopped for 2 weeks. A. rubrum is one of the first plants to flower in spring. A crop of seeds is generally produced every year with a bumper crop often occurring every second year. A single tree between in diameter can produce between 12,000 and 91,000 seeds in a season. A tree in diameter was shown to produce nearly a million seeds. Red maple produces one of the smallest seeds of any of the maples. Fertilization has also been shown to significantly increase the seed yield for up to two years after application. The seeds are epigeal and tend to germinate in early summer soon after they are released, assuming a small amount of light, moisture, and sufficient temperatures are present. If the seeds are densely shaded, then germination commonly does not occur until the next spring. Most seedlings do not survive in closed forest canopy situations. However, one- to four-year-old seedlings are common under dense canopy. Though they eventually die if no light reaches them, they serve as a reservoir, waiting to fill any open area of the canopy above. 
Trees growing in a Zone 9 or 10 area such as Florida will usually die from cold damage if transferred up north, for instance to Canada, Maine, Vermont, New Hampshire and New York, even if the southern trees were planted with northern red maples. Due to their wide range, genetically the trees have adapted to the climatic differences. Red maple is able to increase its numbers significantly when associate trees are damaged by disease, cutting, or fire. One study found that 6 years after clearcutting a Oak-Hickory forest containing no red maples, the plot contained more than 2,200 red maple seedlings per hectare (900 per acre) taller than . One of its associates, the black cherry (Prunus serotina), contains benzoic acid, which has been shown to be a potential allelopathic inhibitor of red maple growth. Red maple is one of the first species to start stem elongation. In one study, stem elongation was one-half completed in 1 week, after which growth slowed and was 90% completed within only 54 days. In good light and moisture conditions, the seedlings can grow in their first year and up to each year for the next few years, making it a fast grower. The red maple is used as a food source by several forms of wildlife. Elk and white-tailed deer in particular use the current season's growth of red maple as an important source of winter food. Several Lepidoptera (butterflies and moths) utilize the leaves as food, including larvae of the rosy maple moth (Dryocampa rubicunda); see List of Lepidoptera that feed on maples. Due to A. rubrums very wide range, there is significant variation in hardiness, size, form, time of flushing, onset of dormancy, and other traits. Generally speaking, individuals from the north flush the earliest, have the most reddish fall color, set their buds the earliest and take the least winter injury. Seedlings are tallest in the north-central and east-central part of the range. In Florida, at the extreme south of the red maple's range, it is limited exclusively to swamplands. The fruits also vary geographically with northern individuals in areas with brief, frost-free periods producing fruits that are shorter and heavier than their southern counterparts. As a result of such variation, there is much genetic potential for breeding programs with a goal of producing red maples for cultivation. This is especially useful for making urban cultivars that require resistance from verticillium wilt, air pollution, and drought. Red maple frequently hybridizes with silver maple; the hybrid, known as Freeman's maple, Acer × freemanii, is intermediate between the parents. Allergenic potential The allergenic potential of red maples varies widely based on the cultivar. The following cultivars are completely male and are highly allergenic, with an OPALS allergy scale rating of 8 or higher: 'Autumn Flame' ('Flame') 'Autumn Spire' 'Columnare' ('Pyramidale') 'Firedance' ('Landsburg') 'Karpick' 'Northwood' 'October Brilliance' 'Sun Valley' 'Tiliford' The following cultivars have an OPALS allergy scale rating of 3 or lower; they are completely female trees, and have low potential for causing allergies: 'Autumn Glory' 'Bowhall' 'Davey Red' 'Doric' 'Embers' 'Festival' 'October Glory' 'Red Skin' 'Red Sunset' ('Franksred') Toxicity The leaves of red maple, especially when dead or wilted, are extremely toxic to horses. The toxin is unknown, but believed to be an oxidant because it damages red blood cells, causing acute oxidative hemolysis that inhibits the transport of oxygen. 
This not only decreases oxygen delivery to all tissues, but also leads to the production of methemoglobin, which can further damage the kidneys. The ingestion of 700 grams (1.5 pounds) of leaves is considered toxic and 1.4 kilograms (3 pounds) is lethal. Symptoms occur within one or two days after ingestion and can include depression, lethargy, increased rate and depth of breathing, increased heart rate, jaundice, dark brown urine, colic, laminitis, coma, and death. Treatment is limited and can include the use of methylene blue or mineral oil and activated carbon in order to stop further absorption of the toxin from the stomach, as well as blood transfusions, fluid support, diuretics, and antioxidants such as vitamin C. About 50% to 75% of affected horses die or are euthanized as a result. Cultivation Red maple's rapid growth, ease of transplanting, attractive form, and value for wildlife (in the eastern US) have made it one of the most extensively planted trees. In parts of the Pacific Northwest, it is one of the most common introduced trees. Its popularity in cultivation stems from its vigorous habit, its attractive and early red flowers, and most importantly, its flaming red fall foliage. The tree was introduced into the United Kingdom in 1656 and shortly thereafter entered cultivation. There it is frequently found in many parks and yards. Red maple is a good choice of tree for urban areas when there is ample room for its root system. Forming an association with arbuscular mycorrhizal fungi can help A. rubrum grow along city streets. It is more tolerant of pollution and road salt than sugar maples, although the tree's fall foliage is not as vibrant in this environment. Like several other maples, its shallow root system can be invasive and it makes a poor choice for plantings near paving. It attracts squirrels, which eat its buds in the early spring, although squirrels prefer the larger buds of the silver maple. Red maples make vibrant and colorful bonsai, and have year-round attractive features for display. Cultivars Numerous cultivars have been selected, often for intensity of fall color, with 'October Glory' and 'Red Sunset' among the most popular. Toward its southern limit, 'Fireburst', 'Florida Flame', and 'Gulf Ember' are preferred. Many cultivars of the Freeman maple are also grown widely. Below is a partial list of cultivars: 'Armstrong' – Columnar to fastigiate in shape with silvery bark and modest orange to red fall foliage. 'Autumn Blaze' – Rounded oval form with leaves that resemble the silver maple. The fall color is orange-red and persists longer than usual. 'Autumn Flame' – A fast grower with exceptional bright red fall color developing early. The leaves are also smaller than the species. 'Autumn Radiance' – Dense oval crown with an orange-red fall color. 'Autumn Spire' – Broad columnar crown; red fall color; very hardy. 'Bowhall' – Conical to upright in form with a yellow-red fall color. 'Burgundy Bell' – Compact rounded uniform shape with long-lasting, burgundy fall leaves. 'Columnare' – An old cultivar growing to with a narrow columnar to pyramidal form with dark green leaves turning orange and deep red in fall. 'Gerling' – A compact, slow-growing selection, this individual only reaches and has orange-red fall foliage. 'Northwood' – Branches are at a 45-degree angle to the trunk, forming a rounded oval crown. Though the foliage is deep green in summer, its orange-red fall color is not as impressive as other cultivars. 
'October Brilliance' – This selection is slow to leaf in spring, but has a tight crown and deep red fall color. 'October Glory' – Has a rounded oval crown with late developing intense red fall foliage. Along with 'Red Sunset', it is the most popular selection due to the dependable fall color and vigorous growth. This cultivar has gained the Royal Horticultural Society's Award of Garden Merit. 'Redpointe' – Superior in alkaline soil, strong central leader, red fall color. 'Red Sunset' – is also a recipient of the Award of Garden Merit. The other very popular choice, this selection does well in heat due to its drought tolerance and has an upright habit. It has very attractive orange-red fall color and is also a rapid and vigorous grower. 'Scarlet Sentinel' – A columnar to oval selection with 5-lobed leaves resembling the silver maple. The fall color is yellow-orange to orange-red and the tree is a fast grower. 'Schlesingeri' – A tree with a broad crown and early, long lasting fall color that is a deep red to reddish purple. Growth is also quite rapid. The original tree grew at the home of Barthold Schlesinger in Brookline, Massachusetts. 'Shade King' – This fast growing cultivar has an upright-oval form with deep green summer leaves that turn red to orange in fall. 'V.J. Drake' – This selection is notable because the edges of the leaves first turn a deep red before the color progresses into the center. Other uses In the lumber industry Acer rubrum is considered a "soft maple", a designation it shares, commercially, with silver maple (A. saccharinum). In this context, the term "soft" is more comparative, than descriptive; i.e., "soft maple", while softer than its harder cousin, sugar maple (A. saccharum), is still a fairly hard wood, being comparable to black cherry (Prunus serotina) in this regard. Like A. saccharum, the wood of red maple is close-grained, but its texture is softer, less dense, and has not as desirable an appearance, particularly under a clear finish. However, the wood from Acer rubrum while being typically less expensive than hard maple, also has greater dimensional stability than that of A. saccharum, and also machines and stains easier. Thus, high grades of wood from the red maple can be substituted for hard maple, particularly when it comes to making stain/paint-grade furniture. Red maple lumber also contains a greater percentage of "curly" (aka "flame"/"fiddleback") figure, which is prized by musical instrument/custom furniture makers, as well as the veneer industry. As a soft maple, the wood tends to shrink more during the drying process than with the hard maples. Red maple is also used for the production of maple syrup, though the hard maples Acer saccharum (sugar maple) and Acer nigrum (black maple) are more commonly utilized. One study compared the sap and syrup from the sugar maple with those of the red maple, as well as those of the Acer saccharinum (silver maple), Acer negundo (boxelder), and Acer platanoides (Norway maple), and all were found to be equal in sweetness, flavor, and quality. However, the buds of red maple and other soft maples emerge much earlier in the spring than the sugar maple, and after sprouting chemical makeup of the sap changes, imparting an undesirable flavor to the syrup. This being the case, red maple can only be tapped for syrup before the buds emerge, making the season very short. Native Americans used red maple bark as a wash for inflamed eyes and cataracts, and as a remedy for hives and muscular aches. 
They also would brew tea from the inner bark to treat coughs and diarrhea. Pioneers made cinnamon-brown and black dyes from a bark extract, and iron sulphate could be added to the tannin from red maple bark in order to make ink. Red maple is a medium quality firewood, possessing less heat energy, nominally , than other hardwoods such as ash: , oak: , or birch: .
Biology and health sciences
Sapindales
Plants
181193
https://en.wikipedia.org/wiki/Quercus%20velutina
Quercus velutina
Quercus velutina (Latin 'velutina', "velvety") , the black oak, is a species of oak in the red oak group (Quercus sect. Lobatae), native and widespread in eastern and central North America. It is sometimes called the eastern black oak. Quercus velutina was previously known as yellow oak due to the yellow pigment in its inner bark. It is a close relative of the California black oak (Quercus kelloggii) found in western North America. Description In the northern part of its range, Quercus velutina is a relatively small tree, reaching a height of and a diameter of , but it grows larger in the south and center of its range, where heights of up to are known. The leaves of the black oak are alternately arranged on the twig and are long with 5–7 bristle-tipped lobes separated by deep U-shaped notches. The upper surface of the leaf is a shiny deep green, and the lower is yellowish-brown. There are also stellate hairs on the underside of the leaf that grow in clumps. Some key characteristics for identification include that leaves grown in the sun have very deep U-shaped sinuses and that the buds are velvety and covered in white hairs. Black oak is monoecious. The staminate flowers develop from leaf axils of the previous year and the catkins emerge before or at the same time as the current leaves in April or May. The pistillate flowers are borne in the axils of the current year's leaves and may be solitary or occur in two- to many-flowered spikes. The fruit, an acorn that occurs singly or in clusters of two to five, is about one-third enclosed in a scaly cup and matures in 2 years. Black oak acorns are brown when mature and ripen from late August to late October, depending on geographic location. The fruits or acorns of the black oak are medium-sized and broadly rounded. The cap is large and covers almost half of the nut. Habitat and distribution Black oak is found in all the coastal states from Maine to Texas, inland as far as Michigan, Ontario, Minnesota, Nebraska, Kansas, Oklahoma, and eastern Texas. It grows on all aspects and slope positions. It grows best in coves and on middle and lower slopes with northerly and easterly aspects. It is found at elevations up to in the southern Appalachians. In southern New England, black oak grows on cool, moist soils. Elsewhere it occurs on warm, moist soils. The most widespread soils on which black oak grows are the udalfs and udolls. These soils are derived from glacial materials, sandstones, shales, and limestone and range from heavy clays to loamy sands with some having a high content of rock or chert fragments. Black oak grows best on well drained, silty clay to loam soils. The most important factors determining site quality for black oak are the thickness and texture of the A horizon, texture of the B horizon, aspect, and slope position. Other factors may be important in localized areas. For example, in northwestern West Virginia increasing precipitation to resulted in increased site quality; more than had no further effect. In southern Indiana, decreasing site quality was associated with increasing slope steepness. Near the limits of its range, topographic factors may restrict its distribution. At the western limits black oak is often found only on north and east aspects where moisture conditions are most favorable. In southern Minnesota and Wisconsin it is usually found only on ridge tops and the lower two-thirds of south- and west-facing slopes. 
Ecology Associated plant species Common tree associates of black oak are white oak (Quercus alba), northern red oak (Quercus rubra), pignut hickory (Carya glabra), mockernut hickory (C. tomentosa), bitternut hickory (C. cordiformis), and shagbark hickory (C. ovata); American elm (Ulmus americana) and slippery elm (U. rubra); white ash (Fraxinus americana); black walnut (Juglans nigra) and butternut (J. cinerea); scarlet oak (Quercus coccinea), southern red oak (Q. falcata), and chinkapin oak (Q. muehlenbergii); red maple (Acer rubrum) and sugar maple (A. saccharum); black cherry (Prunus serotina); and blackgum (Nyssa sylvatica). Common small tree associates of black oak include flowering dogwood (Cornus florida), sourwood (Oxydendrum arboreum), sassafras (Sassafras albidum), eastern hophornbeam (Ostrya virginiana), redbud (Cercis canadensis), pawpaw (Asimina triloba), downy serviceberry (Amelanchier arborea), and American bladdernut (Staphylea trifolia). Common shrubs include Vaccinium spp., mountain-laurel (Kalmia latifolia), witch-hazel (Hamamelis virginiana), beaked hazel (Corylus cornuta), spicebush (Lindera benzoin), sumac (Rhus spp.), and Viburnum spp. The most common vines are greenbrier (Smilax spp.), grape (Vitis spp.), poison-ivy (Toxicodendron radicans), and Virginia creeper (Parthenocissus quinquefolia). Black oak is often a predominant species in the canopy of an oak–heath forest. Seed production and dissemination In forest stands, black oak begins to produce seeds at about age 20 and reaches optimum production at 40 to 75 years. It is a consistent seed producer with good crops of acorns every 2 to 3 years. In Missouri, the average number of mature acorns per tree was generally higher than for other oaks over a 5-year period, but the number of acorns differed greatly from year to year and from tree to tree within the same stand. The number of seeds that become available for regenerating black oak may be low even in good seed years. Insects, squirrels, deer, turkey, small rodents, and birds consume many acorns. They can eat or damage a high percentage of the acorn crop in most years and essentially all of it in poor seed years. Black oak acorns from a single tree are dispersed over a limited area by squirrels, mice, and gravity. The blue jay may disperse over longer distances. Response to competition Black oak is classed as intermediate in tolerance to shade. It is less tolerant than many of its associates such as white and chestnut oaks, hickories, beech (Fagus grandifolia), maples, elm, and blackgum. However, it is more tolerant than yellow-poplar (Liriodendron tulipifera), black cherry, and shortleaf pine (Pinus echinata). It is about the same as northern red oak and scarlet oak. Seedlings usually die within a few years after being established under fully stocked over stories. Most black oak sprouts under mature stands develop crooked stems and flat-topped or misshapen crowns. After the over story is removed, only the large stems are capable of competing successfully. Seedlings are soon overtopped. The few that survive usually remain in the intermediate crown class. Even-aged silvicultural systems satisfy the reproduction and growth requirements of black oak better than the all-aged or uneven-aged selection system. Under the selection system, black oak is unable to reproduce because of inadequate light. Stands containing black oak that are managed under the selection system will gradually be dominated by more shade-tolerant species. 
Dormant buds are numerous on the boles of black oak trees. These buds may be stimulated to sprout and produce branches by mechanical pruning or by exposure to greatly increased light, as by thinning heavily or creating openings in the stand. Dominant trees are less likely to produce epicormic branches than those in the lower crown class. Damaging agents Wildfires seriously damage black oak trees by killing the cambium at the base of the trees. This creates an entry point for decay fungi. The result is loss of volume because of heart rot. Trees up to pole size are easily killed by fire, and severe fires may even kill sawtimber. Many of the killed trees sprout and form a new stand. However, the economic loss may be large unless at least some of it can be salvaged. Oak wilt (Bretziella fagacearum) is a potentially serious vascular disease of black oak that is widespread throughout the eastern United States. Trees die within a few weeks after the symptoms first appear. Usually scattered individuals or small groups of trees are killed, but areas several hectares in size may be affected. The disease is spread from tree to tree through root grafts and over larger distances by sap-feeding beetles (Nitidulidae) and the small oak bark beetle. Shoestring root rot (Armillaria mellea) attacks black oak and may kill trees weakened by fire, lightning, drought, insects, or other diseases. A root rot, Phytophthora cinnamomi, may kill seedlings in the nursery. Cankers caused by Strumella and Nectria species damage the boles of black oak but seldom kill trees. Foliage diseases that attack black oak are the same as those that typically attack species in the red oak group and include anthracnose (Gnomonia quercina), leaf blister (Taphrina spp.), powdery mildews (Phyllactinia corylea and Microsphaera alni), oak-pine rusts (Cronartium spp.), and leaf spots (Actinopelte dryina). Tunneling insects that attack the boles of black oak and cause serious lumber degrade include the carpenterworm (Prionoxystus robiniae), red oak borer (Enaphalodes rufulus), the twolined chestnut borer (Agrilus bilineatus), the oak timberworm (Arrenodes minutus), and the Columbian timber beetle (Corthylus columbianus). The gypsy moth (Lymantria dispar) feeds on foliage and is potentially the most destructive insect. Although black oaks can withstand a single defoliation, two or three defoliations in successive years kill many trees. Other defoliators that attack black oak and may occasionally be epidemic are the variable oak leaf caterpillar (Heterocampa manteo), the orange-striped oakworm (Anisota senatoria), and the brown-tail moth (Euproctis chrysorrhoea). The nut weevils (Curculio spp.), gall-forming cynipids (Callirhytis spp.), filbertworm (Melissopus latiferreanus), and acorn moth (Valentinia glandulella) damage black oak acorns. Named hybrids involving black oak Black oak is well known to readily hybridize with other members of the red oak (Quercus sect. Lobatae) group, being one parent in at least a dozen different named hybrids. Quercus × bushii (Quercus marilandica × Q. velutina) – Bush's oak Quercus × cocksii (Quercus laurifolia × Q. velutina) – Cocks' oak Quercus × demarei (Quercus nigra × Q. velutina) Quercus × discreta (Quercus shumardii × Q. velutina) Quercus × filialis (Quercus phellos × Q. velutina) Quercus × fontana (Quercus coccinea × Q. velutina) Quercus × hawkinsiae (Quercus rubra × Q. velutina) – Hawkin's oak Quercus × leana (Quercus imbricaria × Q. 
velutina) – Lea's oak Quercus × palaeolithicola (Quercus ellipsoidalis × Q. velutina) Quercus × podophylla (Quercus incana × Q. velutina) Quercus × rehderi (Quercus ilicifolia × Q. velutina) – Rehder's oak Quercus × vaga (Quercus palustris × Q. velutina) Quercus × willdenowiana (Quercus falcata × Q. velutina) – Willdenow's oak Uses The inner bark of the black oak contains a yellow-orange coloring from the pigment quercitron, which was sold commercially in Europe until the 1940s and which lent the species its former common name of yellow oak.
Biology and health sciences
Fagales
Plants
181226
https://en.wikipedia.org/wiki/Optometry
Optometry
Optometry is a specialized health care profession that involves examining the eyes and related structures for defects or abnormalities. Optometrists are health care professionals who typically provide comprehensive eye care. In the United States of America and Canada, optometrists are those that hold a 4-year Doctor of Optometry degree, which is earned following their undergraduate college training. They are trained and licensed to practice medicine for eye related conditions, in addition to providing refractive (optical) eye care. Within their scope of practice, optometrists are considered physicians and bill medical insurance(s) (example: Medicare) accordingly. In the United Kingdom, optometrists may also provide medical care (e.g. prescribe medications and perform various surgeries) for eye-related conditions in addition to providing refractive care. The Doctor of Optometry degree is rarer in the UK. Many optometrists participate in academic research for eye-related conditions and diseases. In addition to prescribing glasses and contact lenses for vision related deficiencies, optometrists are trained in monitoring and treating ocular disease-pathologies. In the United States, newly graduating optometrists are all trained in minor surgical procedures and various laser treatments, including peripheral iridotomies, trabeculoplasties, and capsulotomies. The range of training for optometrists varies greatly between countries. Some countries only require certificate training while others require a doctoral degree. In the United States, optometrists typically hold a four-year college degree, a four-year Doctor of Optometry degree, and have the option to complete a 1-year residency program. By comparison, in the United States, ophthalmologists are medical doctors (MDs and DOs) who typically hold a four-year college degree, a four-year medical degree, and additional years of training after medical school in an ophthalmology residency (typically 3 or 4 years) during which they receive training in ocular surgeries. Etymology The term "optometry" comes from the Greek words ὄψις (opsis; "view") and μέτρον (metron; "something used to measure", "measure", "rule"). The word entered the language when the instrument for measuring vision was called an optometer, (before the terms phoropter or refractor were used). The root word opto is a shortened form derived from the Greek word ophthalmos meaning, "eye." Like most healthcare professions, the education and certification of optometrists are regulated in most countries. Optometric professionals and optometry-related organizations interact with governmental agencies, other healthcare professionals, and the community to deliver eye and vision care. Definition of optometry and optometrist The World Council of Optometry, World Health Organization and about 75 optometry organizations from over 40 countries have adopted the following definition, to be used to describe optometry and optometrist. History Optometric history is tied to the development of vision science (related areas of medicine, microbiology, neurology, physiology, psychology, etc.) optics, optical aids optical instruments, imaging techniques other eye care professions The history of optometry can be traced back to the early studies on optics and image formation by the eye. The origins of optical science (optics, as taught in a basic physics class) date back a few thousand years as evidence of the existence of lenses for decoration has been found in Greece and the Netherlands. 
It is unknown when the first spectacles were made. The British scientist and historian Sir Joseph Needham, in his Science and Civilization in China, reported the earliest mention of spectacles was in Venetian guild regulations . He suggested that the occasional claim that spectacles were invented in China may have come from a paper by German-American anthropologist Berthold Laufer. Per Needham, the paper by Laufer had many inconsistencies, and that the references in the document used by Laufer were not in the original copies but added during the Ming dynasty. Early Chinese sources mention the eyeglasses were imported. Research by David A. Goss in the United States shows they may have originated in the late 13th century in Italy as stated in a manuscript from 1305 where a monk from Pisa named Rivalto stated "It is not yet 20 years since there was discovered the art of making eyeglasses". Spectacles were manufactured in Italy, Germany, and the Netherlands by 1300. Needham stated spectacles were first made shortly after 1286. In 1907, Laufer stated in his history of spectacles 'the opinion that spectacles originated in India is of the greatest probability and that spectacles must have been known in India earlier than in Europe'. However, as already mentioned, Joseph Needham showed that the references Laufer cited were not in the older and best versions of the document Laufer used, leaving his claims unsupported. In Sri Lanka, it is well-documented that during the reign of King Bhuvanekabahu the IV (AD 1346 – 1353) of the Gampola period the ancient tradition of optical lens making with a natural stone called Diyatarippu was given royal patronage. A few of the craftsmen still live and practice in the original hamlet given to the exponents of the craft by royal decree. But the date of King Bhuvanekabahu is decades after the mention of spectacles in the Venetian guild regulations and after the 1306 sermon by Dominican friar Giordano da Pisa, where da Pisa said the invention of spectacles was both recent and that he had personally met the inventor The German word brille (eyeglasses) is derived from Sanskrit vaidurya. Etymologically, brille is derived from beryl, Latin beryllus, from Greek beryllos, from Prakrit verulia, veluriya, from Sanskrit vaidurya, of Dravidian origin from the city of Velur (modern Belur). Medieval Latin berillus was also applied to eyeglasses, hence German brille, from Middle High German berille, and French besicles (plural) spectacles, altered from old French bericle. Benito Daza de Valdes published the first full book on opticians in 1623, where he mentioned the use and fitting of eyeglasses. In 1692, William Molyneux wrote a book on optics and lenses where he stated his ideas on myopia and problems related to close-up vision. The scientists Claudius Ptolemy and Johannes Kepler also contributed to the creation of optometry. Kepler discovered how the retina in the eye creates vision. From 1773 until around 1829, Thomas Young discovered the disability of astigmatism and it was George Biddell Airy who designed glasses to correct that problem that included sphero-cylindrical lens. Although the term optometer appeared in the 1759 book A Treatise on the Eye: The Manner and Phenomena of Vision by Scottish physician William Porterfield, it was not until the early twentieth century in the United States and Australia that "optometry" began to be used to describe the profession. 
By the early twenty-first century, however, marking the distinction with dispensing opticians, it had become the internationally accepted term. Diseases A partial list of the common diseases optometrists diagnose/manage: Cataracts Dry eye syndrome Eye tumors Glaucoma Diabetic retinopathy Hypertensive retinopathy Macular degeneration Refractive errors Corneal disease Strabismus Amblyopia Uveitis Diagnosis Eye examination Following are examples of examination methods performed during an eye examination that enables diagnosis Ocular tonometry to determine intraocular pressure Refraction assessment Retina examination Slit lamp examination Visual acuity Color vision test Visual field test Dry eye test Corneal topography Specialized tests Optical coherence tomography (OCT) is a medical technological platform used to assess ocular structures. The information is then used by eye doctors to assess staging of pathological processes and confirm clinical diagnoses. Subsequent OCT scans are used to assess the efficacy of managing diabetic retinopathy, age-related macular degeneration, and glaucoma Training, licensing, representation and scope of practice Optometry is officially recognized in many jurisdictions. Most have regulations concerning education and practice. Optometrists, like many other healthcare professionals, are required to participate in ongoing continuing education courses to stay current on the latest standards of care. Africa In 1993 there were five countries in Africa with optometric teaching institutes: Sudan, Ghana, Nigeria, South Africa and Tanzania. Ethiopia started in 2002 at UoG. There are currently two universities (MMUST & Kaimosi Friends University) offering Bachelor of Science in Optometry and Vision Sciences in Kenya. Sudan Sudan's major institution for the training of optometrists is the Faculty of Optometry and Visual Sciences (FOVS), originally established in 1954 as the Institute of Optometry in Khartoum; the Institute joined with the Ministry of Higher Education in 1986 as the High Institute of Optometry, and was ultimately annexed into Alneelain University in 1997 when it was renamed the FOVS. The FOVS offers several programs: a BSc in Optometry, which takes 5 years and includes sub-specialization in orthoptics, contact lenses, ocular photography, or ocular neurology; a BSc in Ophthalmic Technology, requiring 4 years of training; and a BSc in Optical Dispensary, completed in 4 years. The FOVS also offers MSc and PhD degrees in optometry. The FOVS is the only institute of its kind in Sudan and was the first institution of higher education in Optometry in the Middle East and Africa. In 2010, Alneelain University Eye Hospital was established as part of the FOVS to expand training capacity and to serve broader Sudanese community. Ghana The Ghana Optometric Association (GOA) regulates the practice of Optometry in Ghana. The Kwame Nkrumah University of Science and Technology and the University of Cape Coast are the two universities that offer the degree programme in the country. After the six-year training at any of the two universities offering the course, the O.D. degree is awarded. The new optometrist must write a qualifying exam, after which the optometrist is admitted as a member of the GOA, leading to the award of the title MGOA. Mozambique The first optometry course in Mozambique was started in 2009 at Universidade Lurio, Nampula. The course is part of the Mozambique Eyecare Project. 
University of Ulster, Dublin Institute of Technology and Brien Holden Vision Institute are supporting partners. As of 2019, 61 Mozambican students had graduated with optometry degrees from UniLúrio (34 male and 27 female). Nigeria In Nigeria, optometry is regulated by the Optometrists and Dispensing Opticians Registration Board of Nigeria established under the Optometrists and Dispensing Opticians (Registration etc.) Act of 1989 (Cap O9 Laws of Federation of Nigeria 2004). The Board publishes from time to time lists of approved qualifications and training institutions in the federal government gazette. Optometry education began at the University of Benin in 1970, initially as a four-year bachelor's degree program, making it the first optometry school in West Africa. In 1980, Abia State University introduced the Doctor of Optometry program. The University of Benin upgraded its program to the Doctor of Optometry degree in 1994. Subsequently, Doctor of Optometry programs were established at other public and private universities. The Doctor of Optometry degree is awarded after six years of training at one of the accredited universities located in Edo, Imo, Kano, Kwara, and Abia states. Asia Bangladesh Optometry was first introduced in Bangladesh in 2010 at the Institute of Community Ophthalmology under the Faculty of Medicine, University of Chittagong. This institute offers a four-year Bachelor of Science in Optometry (B.Optom) course. As of 2017, there were 200 graduate optometrists in Bangladesh. The association that controls the quality of optometry practice across the country is the Optometrists Association of Bangladesh, which is also a country member of the World Council of Optometry (WCO). In 2018, Chittagong Medical University was established, and the BSc in Optometry course was transferred to this university. In Bangladesh, optometrists provide primary eye care, including diagnosis and primary management of some ocular diseases, prescribing eyeglasses, low-vision rehabilitation, vision therapy, contact lens practice, and all types of orthoptic evaluations and therapies. Registration with the government's Health Ministry is still pending for unknown reasons. China In China, optometric education only began in 1988 at the Wenzhou Medical University. Since that time, the discipline and the profession have emerged as a five-year, medically based program within the medical education system of China. Students in the program receive the highest level of training in Optometry and are provided with the credentials needed to assume positions of leadership in China's medical education and health care systems. In 2000, the Ministry of Health formally accepted Optometry as a subspecialty of medicine. Hong Kong The Optometrists Board of the Supplementary Medical Professions Council regulates the profession in Hong Kong. Optometrists are listed in separate parts of the register based on their training and ability. Registrants are subject to restrictions depending on the part they are listed in. Those who pass the examination on refraction conducted by the Board may be registered to Part III, and are thereby restricted to practicing only work related to refraction. Those who have a Higher Certificate in Optometry or have passed the Board's optometry examination may be registered to Part II, and are thereby restricted in their use of diagnostic agents, but may otherwise practice freely. Part I optometrists may practice without restrictions and generally hold a bachelor's degree or a Professional Diploma. 
There are around 2,000 optometrists registered in Hong Kong, 1,000 of whom are Part I. There is one Part I optometrist to about 8,000 members of the public. The Polytechnic University runs the only optometry school. It produces around 35 Part I optometrists a year. India In 2010, it was estimated that India needed 115,000 optometrists. In contrast, India has approximately 15,000 optometrists holding a Bachelor of Optometry (a 4-year degree, as per the University Grants Commission notification of 5 July 2014) and 50,000 holding a Diploma in Optometry (a 2-year diploma conferred by State Medical Faculties). In order to prevent blindness and visual impairment, more well-trained optometrists are required in India. The definition of optometry differs considerably in different countries. India needs more optometry schools offering four-year degree courses with a syllabus similar to that in force in those countries where the practice of optometry is statutorily regulated and well established with an internationally accepted definition. In 2013, it was reported in the Indian Journal of Ophthalmology that poor spectacle compliance amongst school children in rural Pune resulted in significant vision loss. In 2015, it was reported that optometrists need to be more involved in providing core optometry services like binocular vision and low vision. History of Optometry Education in India 1. Optometry education in India began under British rule in 1927, when the first college, The Indian College of Optics, was established in West Bengal; the certification awarded was a diploma in optometry. After Indian independence, the Directorate General of Health Services (DGHS), Government of India, introduced in 1958 the first central-government optometry education in the form of a Diploma in Optometry, in collaboration with the UP State Medical Faculty, Government of Uttar Pradesh, under the second five-year plan. The government offered two-year diploma in optometry courses conferred by State Medical Faculties, empowered under the Indian Medical Degree Act, 1916 (as per Government of India Notification, Department of Education, Health and Lands, No. 1964, dated 16 December 1926, effective from 15 November 1929). The first two schools of optometry were established at Gandhi Eye Hospital, Aligarh in Uttar Pradesh (the first school of optometry, started by Prof. (Dr) Mohan Lal) and at Sarojini Devi Eye Hospital, Hyderabad in Telangana. 2. Subsequently, four more schools were opened across India, situated at Sitapur Eye Hospital, Sitapur in Uttar Pradesh; Chennai (formerly Madras) in Tamil Nadu; Bengaluru (formerly Bangalore) in Karnataka; and the Regional Institute of Ophthalmology, Thiruvananthapuram (formerly Trivandrum) in Kerala. 3. The Elite School of Optometry (ESO) was established in 1985 in Chennai (the first optometry college started by Prof. (Dr) S. Badrinath) and was the first to offer a four-year degree course, the Baccalaureate of Science in Optometry (B.S. Optometry). The degree was conferred only by Sankara Nethralaya (the Elite School of Optometry, whose first principal was Dr. E. Vaithilingam) rather than by any university or state government authority. After that, the B.S. in Optometry (under off-campus mode) was affiliated with BITS Pilani, Rajasthan, and the same course has now been re-affiliated with a new university in the state of Tamil Nadu, India. 4. 
The School of Optometry at Bharati Vidyapeeth Deemed University, Pune, established in 1998, was the first to offer a four-year degree course conferring a Bachelor of Clinical Optometry. The university also provided a pathway for diploma holders to upgrade their education to a Degree of Optometry through a lateral entry program. A two-year Master of Optometry course was also introduced in 2003. 5. AIIMS-Delhi introduced a two-year Diploma in Clinical Technology-Optometry (D.C.T. in Optometry) in 1973 and then upgraded the Diploma course to a 3-year B.Sc. (H) in Ophthalmic Technique in 1975. The nomenclature and duration were later changed from the B.Sc. (H) to a four-year Bachelor of Optometry, as per the UGC notification of 2014, and the first batch of students under the new scheme graduated in 2019. 6. At present, there are more than fifty optometry schools and colleges in India, and over 100 universities confer Bachelor of Optometry (B.Optom) and Master of Optometry (M.Optom) professional degrees. Additionally, Doctor of Philosophy degrees in Optometry are awarded by universities recognized by the University Grants Commission (India), a statutory body responsible for maintaining standards of higher education in India. Optometrists across India are encouraged to register under the National Commission for Allied and Healthcare Professions Act, 2021, which was enacted by the Parliament of India in 2021. The Delhi Optometrists Association (DOA) has endorsed all updates related to optometry education in India. Malaysia It takes four years to complete a degree in optometry. As of 2022, optometry courses have been well received by citizens, with nearly 3,000 registered optometrists. More universities and institutions of higher education are preparing to offer the course, such as the National Institute of Ophthalmic Sciences in Petaling Jaya, which is the academic arm of The Tun Hussein Onn National Eye Hospital. Other public universities that offer this course include Universiti Kebangsaan Malaysia (UKM), Universiti Teknologi Mara (UiTM), and International Islamic University Malaysia (IIUM). There are also private universities that offer this course, such as Management and Science University (MSU) and SEGi University. After completing the Degree in Optometry, optometrists who practice in Malaysia must register with the Malaysian Optical Council (MOC), which is an organization under the Ministry of Health. The Association of Malaysian Optometrists (AMO) is the only body that represents the Malaysian optometrist profession. All of the members are either local or overseas graduates in the field of optometry. Pakistan Optometry is taught as a five- or four-year Doctor, Bachelor's, or Bachelor's with Honours course at many institutions, notable among which are the Department of Optometry & Vision Sciences (DOVS) FAHS, ICBS, Lahore; the Pakistan Institute of Community Ophthalmology (PICO), Peshawar; the Pakistan Institute of Rehabilitation Sciences (PIRS), Isra University Islamabad campus; the College of Ophthalmology & Allied Vision Sciences (COAVS), Lahore; and the Al-Shifa Institute of Ophthalmology, Islamabad. After graduation, optometrists can join a four-tiered service delivery system (centre of excellence, tertiary/teaching, district headquarters, and sub-district/tehsil headquarters). An M.Phil. in Optometry is also available at select institutions such as King Edward Medical University, Lahore. 
The Department of Optometry & Vision Sciences (DOVS) FAHS, ICBS, Lahore started bridging programmes for Bachelor's and Bachelor's with Honours graduates to become Doctor of Optometry (OD), Post-Professional Doctor of Optometry (PP-OD), or Transitional Doctor of Optometry (t-OD). Optometry is not yet a regulated field in Pakistan, as there is no professional licensing board or authority responsible for issuing practice licences to qualified optometrists. This creates difficulty for Pakistani optometrists who wish to register abroad. The University of Lahore has recently launched a Doctor of Optometry (OD) programme. Imam Hussain Medical University also launched a Doctor of Optometry program. The chairman of Imam Hussain Medical University, Sabir Hussain Babachan, vowed to regulate the OD curriculum according to international standards. Philippines Optometry is regulated by the Professional Regulation Commission of the Philippines. To be eligible for licensing, each candidate must have satisfactorily completed a doctor of optometry course at an accredited institution and demonstrate good moral character with no previous record of professional misconduct. Professional organizations of optometry in the Philippines include the Optometric Association of the Philippines and the Integrated Philippine Association of Optometrists, Inc. (IPAO). Saudi Arabia In Saudi Arabia, optometrists must complete a five-year doctor of optometry degree from Qassim University or King Saud University. They must also complete a one-year residency. Singapore Tertiary education for optometrists consists of a 3-year diploma in optometry offered at institutions such as Singapore Polytechnic and Ngee Ann Polytechnic. Taiwan The education of optometry in Taiwan commenced in 1982 at Shu-Zen College of Medicine and Management. Bachelor's degrees in optometry can be obtained from seven universities (north to south): University of Kang Ning, Yuanpei University of Medical Technology, Asia University, Central Taiwan University of Science and Technology, Chung Shan Medical University, Dayeh University, and Chung Hwa University of Medical Technology; whereas associate degrees in optometry can be obtained from Mackay Junior College of Medicine, Nursing and Management, Hsin Sheng College of Medical Care and Management, Jen-Teh Junior College of Medicine, Nursing, and Management, and Shu-Zen College of Medicine and Management. The Law of Optometrists was established in Taiwan in 2015; since then, optometry students, after obtaining their optometry degrees, need to pass the National Optometry Examination of Taiwan to be registered as optometrists. There are approximately 4,000 optometrists in Taiwan as of 2020, and around 400 new optometrists register annually (2018–2020). Thailand Since late 1990, Thailand has set a goal to provide more than 600 optometrists to meet the minimal public demands and international standards in vision care. There are more than three university degree programs in Thailand. Each program accepts students who have completed grade 12 or the third year of high school (following the US education model). These programs, which take six years to complete, award the "Doctor of Optometry" degree to their graduates. Practising optometrists are also required to pass a three-part licensing examination administered by a committee under the Ministry of Public Health. As of 2015, the number of practising optometrists in Thailand was still fewer than one hundred. 
However, it is projected that the number of practising optometrists in Thailand will greatly increase within the next ten years. In this theoretical scenario, the number of optometrists should be able to meet minimal public demand around 2030 or earlier. Europe Since the formation of the European Union, "there exists a strong movement, headed by the Association of European Schools and Colleges of Optometry (AESCO), to unify the profession by creating a European-wide examination for optometry" and presumably also standardized practice and education guidelines within EU countries. The first examinations of the new European Diploma in Optometry were held in 1998 and this was a landmark event for optometry in continental Europe. France As of July 2003, there was no regulatory framework and optometrists were sometimes trained by completing an apprenticeship at an ophthalmologist's private office. Germany Optometric tasks are performed by ophthalmologists and professionally trained and certified opticians. Greece The Hellenic Ministry of Education founded the first department of Optometry at the Technological Educational Institute of Patras in 2007. After protests from the department of Optics at the Technological Educational Institute of Athens (the only department of Optics in Greece until 2006), the Government changed the names of the departments to "Optics and Optometry" and included lessons in both optics and optometry. Optometrists-opticians have to complete a 4-year undergraduate honours degree. Graduates can then be admitted to postgraduate courses in Optometry at universities around the world. Since 2015, a Master of Science (MSc) course in Optometry has been offered by the Technological Educational Institute of Athens. The Institute of Vision and Optics (IVO) of the University of Crete focuses on the sciences of vision and is active in the fields of research, training, technology development and provision of medical services. Professor Ioannis Pallikaris has received numerous awards and recognitions for the institute's contribution to ophthalmology. In 1989 he performed the first LASIK procedure on a human eye. Hungary Optometrist education in Hungary takes 4 years at medical universities and leads to a Bachelor of Science degree. Optometrists work mainly in retail chains and private optical shops; very few work in the health care system as assistants to ophthalmologists. Ireland The profession of Optometry has been represented for over a century by the Association of Optometrists, Ireland [AOI]. In Ireland an optometrist must first complete a four-year degree in optometry at Dublin Institute of Technology. Following successful completion of the degree, an optometrist must then complete professional qualifying examinations to enter the register of the Opticians Board [Bord na Radharcmhaistoiri]. Optometrists must be registered with the Board to practice in the Republic of Ireland. The A.O.I. runs a comprehensive continuing education and professional development program on behalf of Irish optometrists. The legislation governing optometry was drafted in 1956. Some feel that the legislation restricts optometrists from using their full range of skills, training and equipment for the benefit of the Irish public. The amendment to the Act in 2003 addressed one of the most significant restrictions: the use of cycloplegic drugs to examine children. Italy In Italy, optometry is an unregulated profession. 
It is taught at seven universities: Padua, Turin, Milan, Salento, Florence, Naples and Rome, as a three-year course (like a BSc) in "Scienze e tecnologie fisiche" (physical sciences and technologies) within the Physics Department. Additionally, courses are available at some private institutions (such as the Vinci Institute near Florence) that offer advanced professional education for already qualified opticians (most Italian optometrists are also qualified opticians, i.e. "ottico abilitato"). In the last thirty years, several rulings of the High Court (Cassazione) have affirmed that optometry may be freely practised and has a genuine educational path. Norway In Norway, the optometric profession has been regulated as a healthcare profession since 1988. After a three-year bachelor program, one can practice basic optometry. At least one year in clinical practice qualifies for a post-degree half-year sandwich course in contact lens fitting, which is regulated as a healthcare speciality. A separate regulation for the use of diagnostic drugs in optometric practice was introduced in 2004. Russia In Russia, optometry education has been accredited by the Federal Agency of Health and Social Development. There are only two educational institutions that teach optometry in Russia: Saint Petersburg Medical Technical College, formerly known as St. Petersburg College of Medical Electronics and Optics, and the Helmholtz Research Institute for Eye Diseases. Both belong to and are regulated by the Ministry of Health. The optometry program is a four-year program. It includes one to two science foundation years, one year focused on clinical and proficiency skills, and one year of clinical rotations in hospitals. Graduates take college/state examinations and then receive a specialist diploma. This diploma is valid for only five years and must be renewed every five years after additional training at state-accredited programs. The scope of practice for optometrists in Russia includes refraction, contact lens fitting, spectacle construction and lens fitting (dispensing), low vision aids, foreign body removal, and referrals to other specialists after diagnosing clinical conditions (management of diseases in the eye). United Kingdom Licensing Optometrists in the United Kingdom are regulated by the General Optical Council under the Opticians Act 1989 and distinguished from medical practitioners. Registration with the GOC is mandatory to practice optometry in the UK. Members of the College of Optometrists (incorporated by a Royal Charter granted by Her Majesty Queen Elizabeth II) may use the suffix MCOptom. The National Health Service provides free sight tests and spectacle vouchers for children and those on very low incomes. The elderly and those with some chronic conditions like diabetes get free periodic tests. Treatment for eye conditions such as glaucoma and cataracts is free, and these conditions are checked for during normal eye examinations. Training In the United Kingdom, optometrists have to complete a 4-year undergraduate honours degree followed by a minimum one-year internship, the "pre-registration period", during which they complete clinical practice under the supervision of a qualified and experienced practitioner. During this year the pre-registration candidate is given a number of quarterly assessments, often including a temporary posting at a hospital, and, on successfully passing all of these assessments, a final one-day set of examinations (details correct for candidates from 2006). 
Following successful completion of these assessments and having completed one year's supervised practice, the candidate is eligible to register as an optometrist with the General Optical Council (GOC) and, should they so wish, is entitled to membership of the College of Optometrists. Twelve universities offer Optometry in the UK: Anglia Ruskin, Aston, Bradford, Cardiff, City, Glasgow Caledonian, Hertfordshire, Manchester, Plymouth, Portsmouth, Ulster at Coleraine and West of England. In 2008, the UK began to offer the Doctor of Optometry postgraduate programme. This became available at the Institute of Optometry in London in partnership with London South Bank University. The Doctor of Optometry postgraduate degree is also offered at one other UK institution: Aston University. Scope of Practice In 1990, a survey of the opinions of British medical practitioners regarding the services provided by British optometrists was carried out by Agarwal at City, University of London. A majority of respondents were in favour of optometrists extending their professional role by treating external eye conditions and prescribing broad-spectrum topical antibiotics through additional training and certification. Since 2009, optometrists in the UK have been able to undertake additional postgraduate training and qualifications that allow them to prescribe medications to treat and manage eye conditions. There are currently three registerable specialities: Additional supply speciality – to write orders for, and supply in an emergency, a range of drugs in addition to those ordered or supplied by a normal optometrist. Supplementary prescribing speciality – to manage a patient's clinical condition and prescribe medicines according to a clinical management plan set up in conjunction with an independent prescriber, such as a GP, ophthalmologist or qualified optometrist. Independent prescribing specialty – to take responsibility for the clinical assessment of a patient, establish a diagnosis and determine the clinical management required, including prescribing where necessary. Optometrists in the United Kingdom are able to diagnose and manage most ocular diseases, and may also undertake further training to perform certain surgical procedures. North America Canada Training In Canada, Doctors of Optometry typically complete four years of undergraduate studies followed by four to five years of optometry studies, accredited by the Accreditation Council on Optometric Education. There are two such schools of optometry located in Canada — the University of Waterloo and the Université de Montréal. Canada also recognizes degrees from the twenty US schools. Licensing In Canada, Doctors of Optometry must write national written and practical board exams. Additionally, optometrists are required to become licensed in the province in which they wish to practice. Regulation of professions is within provincial jurisdiction. Therefore, regulation of optometry is unique to individual provinces and territories. In Ontario, optometrists are licensed by the College of Optometrists of Ontario. Representation In Canada, the profession is represented by the Canadian Association of Optometrists. In the province of Ontario, the Ontario Association of Optometrists is the designated representative of optometrists to the provincial government. Scope of Practice Optometrists in Canada are trained and licensed to be primary eye care providers. They provide optical and medical eye care.
They are able to diagnose and treat most eye diseases and can prescribe both topical and oral medications. They can also undertake further qualifications in order to perform some surgical procedures. United States Optometrists, Doctors of Optometry, or Optometric Physicians are primary eye care providers. They provide comprehensive optical and medical eye care. They are trained and licensed to practice medicine for eye-related conditions: they prescribe topical medications (prescription eye drops) and oral medications, and administer diagnostic agents. In some states, optometrists may also be licensed to perform certain types of eye surgery. Scope of practice Optometrists provide optical and medical eye care. They prescribe corrective lenses to correct refractive errors (e.g. myopia, hyperopia, presbyopia, astigmatism, double vision). They manage vision development in children, including amblyopia diagnosis and treatment. Some perform vision therapy. They are trained to diagnose and manage eye diseases and their associations with systemic health. Optometrists are trained and licensed to practice medicine for eye-related conditions (including bacterial/viral infections, inflammation, glaucoma, macular degeneration, and diabetic retinopathy). They can prescribe all topical medications (eye drops) and most oral medications (taken by mouth), including scheduled controlled substances. They may also remove ocular foreign bodies and order blood panels or imaging studies such as CT or MRI. Optometrists do not perform invasive surgery; however, in Oklahoma and Louisiana, optometrists may perform superficial surgeries within the anterior segment of the eye. Legislation permits optometrists in Oklahoma and Kentucky to perform certain laser procedures. Within their scope of practice, optometrists are considered physicians and bill medical insurance plans accordingly. Optometrists in the United States are regulated by state boards, which vary from state to state. The Association of Regulatory Boards of Optometry (ARBO) assists these state board licensing agencies in regulating the practice of optometry. Licensing Optometrists must complete all course work and graduate from an accredited College of Optometry. This includes passage of all parts of the national board examinations as well as local jurisprudence examinations, which vary by state. Education and Training Optometrists typically complete four years of undergraduate studies followed by four years of Optometry school. Some complete a fifth year of training. Their program is highly specific to the eyes and related structures. Optometrists receive their medical eye training while enrolled in Optometry school and during internships. Training may take place in colleges of Optometry, hospitals, clinics and private practices. In many instances, Optometry students and Ophthalmology residents will co-manage medical cases. Instructors may be Optometrists, professors or physicians. The program includes extensive classroom and clinical training in geometric, physical, physiological and ophthalmic optics, specialty contact lens evaluation, general anatomy, ocular anatomy, ocular disease, pharmacology, ocular pharmacology, neuroanatomy and neurophysiology of the visual system, pediatric visual development, gerontology, binocular vision, color vision, form, space, movement and vision perception, systemic disease, histology, microbiology, sensory and perceptual psychology, biochemistry, statistics and epidemiology.
Optometrists are required to obtain continuing education credit hours to maintain licensure; the number of hours varies by state. Optometrists prescribing scheduled controlled substances are required to renew their DEA license every few years. Oceania Australia Australia currently has six recognized courses in optometry, and one course seeking to obtain accreditation with the Optometry Council of Australia and New Zealand: Bachelor of Vision Science and Master of Optometry (BVisSci MOptom), Deakin University; Bachelor of Medical Science (Vision Science) and Master of Optometry, Flinders University; Bachelor of Vision Science and Master of Clinical Optometry (BVisSc MClinOptom), University of New South Wales; Bachelor of Vision Science and Master of Optometry, Queensland University of Technology; Bachelor of Vision Science and Master of Optometry, University of Canberra; Doctor of Optometry, Melbourne University (post-graduate); Doctor of Optometry, University of Western Australia (post-graduate). To support these courses, the Australian College of Optometry provides clinical placements to undergraduate students from Australian universities and abroad. In 2016, almost 5000 optometrists in general practice were licensed with their regulatory body, the Optometry Board of Australia. Of these, approximately 2300 were registered with the scheduled medicines endorsement, which enables them to prescribe some medicines for the treatment of conditions of the eye. The Optometrists Association of Australia works to protect the interests of optometrists in Australia. New Zealand New Zealand currently has one recognised course in optometry: Bachelor of Optometry (BOptom), The University of Auckland. In July 2014, the Medicines Amendment Act 2013 and Misuse of Drugs Amendment Regulations 2014 came into effect. Among other things, the changes to the Act name optometrists as authorised prescribers. This change enables optometrists with a therapeutic pharmaceutical agent (TPA) endorsement to prescribe all medicines appropriate to their scope of practice, rather than limiting them to a list of medicines specified in the regulation; this recognises the safe and appropriate prescribing practice of optometrists over the previous nine years. South America Brazil The CBOO (Brazilian Council of Optics and Optometry), which is affiliated to the WCO (World Council of Optometry), represents Brazilian optometrists. Together with representative organizations of Brazilian companies, including the National Commerce Confederation for Goods, Services and Tourism (CNC), and through CBÓptica/CNC, its advocacy arm for the optometric and optical industry, it defends the right of optometrists to free and independent practice, even when this runs against the interests of ophthalmologists. The Federal Supreme Court (STF), the highest Brazilian court, and the Superior Court of Justice (STJ), another important national court, have ruled on several cases, granting unquestionable victories to ophthalmologists. Brazilian law, however, contains an explicit rule that whoever prescribes corrective lenses is prohibited from selling them. This rule, which restricts ophthalmologists, has kept optical shops away from hospitals and eye care clinics since 1930, and it would have to be reviewed before any further regulation of optometrists. Colombia In Colombia, optometry education has been accredited by the Ministry of Health.
The last official revision to the laws regarding healthcare standards in the country was issued in 1992 through Law 30. Currently, there are eight official universities that are entitled by ICFES to grant the optometrist certification. The first optometrists arrived in the country from North America and Europe. These professionals specialized in optics and refraction. In 1933, under Decrees 449 and 1291, the Colombian Government officially set the rules for the formation of professionals in the field of optometry. In 1966, La Salle University opened its first Faculty of Optometry after a recommendation from a group of professionals. At present, optometrists are encouraged to keep up with new technologies through congresses and scholarships granted by the government or the private sector (such as Bausch & Lomb).
https://en.wikipedia.org/wiki/Thirty%20Meter%20Telescope
Thirty Meter Telescope
The Thirty Meter Telescope (TMT) is a planned extremely large telescope (ELT) proposed to be built on Mauna Kea, on the island of Hawai'i. The TMT would become the largest visible-light telescope on Mauna Kea. Scientists have been considering ELTs since the mid 1980s. In 2000, astronomers considered the possibility of a telescope with a light-gathering mirror larger than in diameter, using either small segments that create one large mirror, or a grouping of larger mirrors working as one unit. The US National Academy of Sciences recommended a telescope be the focus of U.S. interests, seeking to see it built within the decade. Scientists at the University of California, Santa Cruz and Caltech began development of a design that would eventually become the TMT, consisting of a 492-segment primary mirror with nine times the power of the Keck Observatory. Due to its light-gathering power and the optimal observing conditions which exist atop Mauna Kea, the TMT would enable astronomers to conduct research which is infeasible with current instruments. The TMT is designed for near-ultraviolet to mid-infrared (0.31 to 28 μm wavelengths) observations, featuring adaptive optics to assist in correcting image blur. The TMT will be at the highest altitude of all the proposed ELTs. The telescope has government-level support from several nations. The proposed location on Mauna Kea has been controversial among the Native Hawaiian community. Demonstrations attracted press coverage after October 2014, when construction was temporarily halted due to a blockade of the roadway. When construction of the telescope was set to resume, construction was blocked by further protests each time. In 2015, Governor David Ige announced several changes to the management of Mauna Kea, including a requirement that the TMT's site will be the last new site on Mauna Kea to be developed for a telescope. The Board of Land and Natural Resources approved the TMT project, but the Supreme Court of Hawaii invalidated the building permits in December 2015, ruling that the board had not followed due process. In October 2018, the Court approved the resumption of construction; however, no further construction has occurred due to continued opposition. In July 2023 a new state appointed oversight board, which includes Native Hawaiian community representatives and cultural practitioners, began a five-year transition to assume management over Mauna Kea and its telescope sites, which may be a path forward. In April 2024, TMT's project manager apologized for the organization having "contributed to division in the community", and stated that TMT's approach to construction in Hawai'i is "very different now from TMT in 2019." An alternate site for the Thirty Meter Telescope has been proposed for La Palma, Canary Islands, Spain, but is considered less scientifically favorable by astronomers. , there were no specific timelines or schedules regarding new start or completion dates. Background In 2000, astronomers began considering the potential of telescopes larger than in diameter. The technology to build a mirror larger than does not exist; instead scientists considered two methods: either segmented smaller mirrors as used in the Keck Observatory, or a group of 8-meter (26') mirrors mounted to form a single unit. The US National Academy of Sciences made a suggestion that a telescope should be the focus of US astronomy interests and recommended that it be built within the decade. 
The University of California, along with Caltech, began development of a 30-meter telescope that same year. The California Extremely Large Telescope (CELT) began development, along with the Giant Segmented Mirror Telescope (GSMT), and the Very Large Optical Telescope (VLOT). These studies would eventually define the Thirty Meter Telescope. The TMT would have nine times the collecting area of the older Keck telescope using slightly smaller mirror segments in a vastly larger group. Another telescope of a large diameter in the works is the Extremely Large Telescope (ELT) being built in northern Chile. The telescope is designed for observations from near-ultraviolet to mid-infrared (0.31 to 28 μm wavelengths). In addition, its adaptive optics system will help correct for image blur caused by the atmosphere of the Earth, helping it to reach the potential of such a large mirror. Among existing and planned extremely large telescopes, the TMT will have the highest elevation and will be the second-largest telescope once the ELT is built. Both use segments of small hexagonal mirrors—a design vastly different from the large mirrors of the Large Binocular Telescope (LBT) or the Giant Magellan Telescope (GMT). Each night, the TMT would collect 90 terabytes of data. The TMT has government-level support from the following countries: Canada, Japan and India. The United States is also contributing some funding, but less than the formal partnership. Proposed locations In cooperation with AURA, the TMT project completed a multi-year evaluation of six sites: Roque de los Muchachos Observatory, La Palma, Canary Islands, Spain Cerro Armazones, Antofagasta Region, Republic of Chile Cerro Tolanchar, Antofagasta Region, Republic of Chile Cerro Tolar, Antofagasta Region, Republic of Chile Mauna Kea, Hawaii, United States (This site was chosen and approval was granted in April 2013) San Pedro Mártir, Baja California, Mexico Hanle, Ladakh, India The TMT Observatory Corporation board of directors narrowed the list to two sites, one in each hemisphere, for further consideration: Cerro Armazones in Chile's Atacama Desert and Mauna Kea on Hawaii Island. On July 21, 2009, the TMT board announced Mauna Kea as the preferred site. The final TMT site selection decision was based on a combination of scientific, financial, and political criteria. Chile is also where the European Southern Observatory is building the ELT. If both next-generation telescopes were in the same hemisphere, there would be many astronomical objects that neither could observe. The telescope was given approval by the state Board of Land and Natural Resources in April 2013. There has been opposition to the building of the telescope, based on potential disruption to the fragile alpine environment of Mauna Kea due to construction, traffic, and noise, which is a concern for the habitat of several species, and because Mauna Kea is a sacred site for the Native Hawaiian culture. The Hawaii Board of Land and Natural Resources conditionally approved the Mauna Kea site for the TMT in February 2011. The approval has been challenged; however, the Board officially approved the site following a hearing on February 12, 2013. Partnerships and funding The Gordon and Betty Moore Foundation has committed US$200 million for construction. Caltech and the University of California have committed an additional US$50 million each. Japan, which has its own large telescope at Mauna Kea, the Subaru, is also a partner. 
In 2008, the National Astronomical Observatory of Japan (NAOJ) joined TMT as a collaborator institution. The following year, the telescope cost was estimated to be $970 million to $1.4 billion. That same year, the National Astronomical Observatories of the Chinese Academy of Sciences (NAOC) joined TMT as an observer. The observer status is the first step in becoming a full partner in the construction of the TMT and participating in the engineering development and scientific use of the observatory. By 2024, China was not a partner in TMT. In 2010, a consortium of Indian Astronomy Research Institutes (IIA, IUCAA and ARIES) joined TMT as an observer, subject to approval of funding from the Indian government. Two years later, India and China became partners with representatives on the TMT board. Both countries agreed to share the telescope construction costs, expected to top $1 billion. India became a full member of the TMT consortium in 2014. In 2019 the India-based company Larsen & Toubro (L&T) were awarded the contract to build the segment support assembly (SSA), which "are complex optomechanical sub-assemblies on which each hexagonal mirror of the 30-metre primary mirror, the heart of the telescope, is mounted". The IndiaTMT Optics Fabricating Facility (ITOFF) will be constructed at the Indian Institute of Astrophysics campus in the city of Hosakote, near Bengaluru. India will supply 80 of the 492 mirror segments for the TMT. A.N. Ramaprakash, the associate programme director of India-TMT, stated; "All sensors, actuators and SSAs for the whole telescope are being developed and manufactured in India, which will be put together in building the heart of TMT", also adding; "Since it is for the first time that India is involved in such a technically demanding astronomy project, it is also an opportunity to put to test the abilities of Indian scientists and industries, alike." The continued financial commitment from the Canadian government had been in doubt due to economic pressures. In April 2015, Prime Minister Stephen Harper announced that Canada would commit $243.5 million over a period of 10 years. The telescope's unique enclosure was designed by Dynamic Structures Ltd. in British Columbia. In a 2019 online petition, a group of Canadian academics called on succeeding Canadian Prime Minister Justin Trudeau together with Industry Minister Navdeep Bains and Science Minister Kirsty Duncan to divest Canadian funding from the project. , the Canadian astronomy community has named TMT its top facility priority for the decade ahead. Design The TMT would be housed in a general-purpose observatory capable of investigating a broad range of astrophysical problems. The total diameter of the dome will be with the total dome height at (comparable in height to an eighteen-storey building). The total area of the structure is projected to be within a complex. Telescope The centerpiece of the TMT Observatory is to be a Ritchey-Chrétien telescope with a diameter primary mirror. This mirror is to be segmented and consist of 492 smaller (), individual hexagonal mirrors. The shape of each segment, as well as its position relative to neighboring segments, will be controlled actively. A secondary mirror is to produce an unobstructed field-of-view of 20 arcminutes in diameter with a focal ratio of 15. A flat tertiary mirror is to direct the light path to science instruments mounted on large Nasmyth platforms. The telescope is to have an alt-azimuth mount. 
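The headline performance figures quoted for the TMT follow from simple aperture scaling: collecting area grows with the square of the mirror diameter, and the diffraction-limited image scale shrinks in proportion to it. The short sketch below is illustrative only and is not project code; it assumes a 30 m TMT aperture, an approximately 10 m Keck aperture, and uses λ/D as the characteristic diffraction scale, which reproduces the roughly ninefold collecting-area advantage over Keck and the 0.015 arc-second image core at 2.2 μm quoted in the adaptive optics section below.

```python
# Back-of-the-envelope check of the TMT headline figures (illustrative only).
# Assumptions: TMT effective aperture D = 30 m, Keck aperture ~10 m,
# observing wavelength 2.2 micrometres (near-infrared K band).
import math

D_TMT = 30.0          # TMT primary mirror diameter, metres
D_KECK = 10.0         # Keck primary mirror diameter, metres (approximate)
WAVELENGTH = 2.2e-6   # observing wavelength, metres

RAD_TO_ARCSEC = 180.0 / math.pi * 3600.0

# Collecting area scales with the square of the diameter,
# so a 30 m mirror gathers about (30/10)^2 = 9 times the light of Keck.
area_ratio = (D_TMT / D_KECK) ** 2

# Characteristic diffraction scale lambda/D, converted to arcseconds.
diffraction_scale = WAVELENGTH / D_TMT * RAD_TO_ARCSEC

print(f"Collecting-area ratio vs. Keck: {area_ratio:.0f}x")
print(f"Diffraction scale at 2.2 um:    {diffraction_scale:.3f} arcsec")
# Expected output: roughly 9x and about 0.015 arcsec.
```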
Target acquisition and system configuration capabilities need to be achieved within 5 minutes, or ten minutes if relocating to a newer device. To achieve these time limitations the TMT will use a software architecture linked by a service based communications system. The moving mass of the telescope, optics, and instruments will be about . The design of the facility descends from the Keck Observatory. Adaptive optics Integral to the observatory is a Multi-Conjugate Adaptive Optics (MCAO) system. This MCAO system will measure atmospheric turbulence by observing a combination of natural (real) stars and artificial laser guide stars. Based on these measurements, a pair of deformable mirrors will be adjusted many times per second to correct optical wave-front distortions caused by the intervening turbulence. This system will produce diffraction-limited images over a 30-arc-second diameter field-of-view, which means that the core of the point spread function will have a size of 0.015 arc-second at a wavelength of 2.2 micrometers, almost ten times better than the Hubble Space Telescope. Scientific instrumentation Early-light capabilities Three instruments are planned to be available for scientific observations: Wide Field Optical Spectrometer (WFOS) provides a seeing limit that goes down to the ultraviolet with optical (0.3–1.0 μm wavelength) imaging and spectroscopy capable of 40-square arc-minute field-of-view. The TMT will use precision cut focal plane masks and enable long-slit observations of individual objects as well as short-slit observations of hundreds of different objects at the same time. The spectrometer will use natural (uncorrected) seeing images. Infrared Imaging Spectrometer (IRIS) mounted on the observatory MCAO system, capable of diffraction-limited imaging and integral-field spectroscopy at near-infrared wavelengths (0.8–2.5 μm). Principal investigators are James Larkin of UCLA and Anna Moore of Caltech. Project scientist is Shelley Wright of UC San Diego. Infrared Multi-object Spectrometer (IRMS) allowing close to diffraction-limited imaging and slit spectroscopy over a 2 arc-minute diameter field-of-view at near-infrared wavelengths (0.8–2.5 μm). Approval process and protests In 2008, the TMT corporation selected two semi-finalists for further study, Mauna Kea and Cerro Amazones. In July 2009, Mauna Kea was selected. Once TMT selected Mauna Kea, the project began a regulatory and community process for approval. Mauna Kea is ranked as one of the best sites on Earth for telescope viewing and is home to 13 other telescopes built at the summit of the mountain, within the Mauna Kea Observatories grounds. Telescopes generate money for the big island, with millions of dollars in jobs and subsidies gained by the state. The TMT would be one of the most expensive telescopes ever created. However, the proposed construction of the TMT on Mauna Kea sparked protests and demonstrations across the state of Hawaii. Mauna Kea is the most sacred mountain in Hawaiian culture as well as conservation land held in trust by the state of Hawaii. 2010-2014: Initial approval, permit and contested case hearing In 2010 Governor Linda Lingle of the State of Hawaii signed off on an environmental study after 14 community meetings. The BLNR held hearings on December 2 and December 3, 2010, on the application for a permit. On February 25, 2011, the board granted the permits after multiple public hearings. This approval had conditions, in particular, that a hearing about contesting the approval be heard. 
A contested case hearing was held in August 2011, which led to a judgment by the hearing officer for approval in November 2012. The telescope was given approval by the state Board of Land and Natural Resources in April 2013. This process was challenged in court with a lower court ruling in May 2014. The Intermediate Court of Appeals of the State of Hawaii declined to hear an appeal regarding the permit until the Hawaii Department of Land and Natural Resources first issued a decision from the contested case hearing that could then be appealed to the court. 2014-2015: First blockade, construction halts, State Supreme Court invalidates permit The dedication and ground-breaking ceremony was held, but interrupted by protesters on October 7, 2014. The project became the focal point of escalating political conflict, police arrests and continued litigation over the proper use of conservation lands. Native Hawaiian cultural practice and religious rights became central to the opposition, with concerns over the lack of meaningful dialogue during the permitting process. In late March 2015, demonstrators again halted the construction crews. On April 2, 2015, about 300 protesters gathered on Mauna Kea, some of them trying to block the access road to the summit; 23 arrests were made. Once the access road to the summit was cleared by the police, about 40 to 50 protesters began following the heavily laden and slow-moving construction trucks to the summit construction site. On April 7, 2015, the construction was halted for one week at the request of Hawaii state governor David Ige, after the protest on Mauna Kea continued. Project manager Gary Sanders stated that TMT agreed to the one-week stop for continued dialogue; Kealoha Pisciotta, president of Mauna Kea Anaina Hou, one of the organizations that have challenged the TMT in court, viewed the development as positive but said opposition to the project would continue. On April 8, 2015, Governor Ige announced that the project was being temporarily postponed until at least April 20, 2015. Construction was set to begin again on June 24, though hundreds of protesters gathered on that day, blocking access to the construction site for the TMT. Some protesters camped on the access road to the site, while others rolled large rocks onto the road. The actions resulted in 11 arrests. The TMT company chairman stated: "T.M.T. will follow the process set forth by the state." A revised permit was approved on September 28, 2017, by the Hawaii Board of Land and Natural Resources. On December 2, 2015, the Hawaii State Supreme Court ruled the 2011 permit from the State of Hawaii Board of Land and Natural Resources (BLNR) was invalid ruling that due process was not followed when the Board approved the permit before the contested case hearing. The high court stated: "BLNR put the cart before the horse when it approved the permit before the contested case hearing," and "Once the permit was granted, Appellants were denied the most basic element of procedural due process – an opportunity to be heard at a meaningful time and in a meaningful manner. Our Constitution demands more". 2017-2019: BLNR hearings, Court validates revised permit In March 2017, the BLNR hearing officer, retired judge Riki May Amano, finished six months of hearings in Hilo, Hawaii, taking 44 days of testimony from 71 witnesses. On July 26, 2017, Amano filed her recommendation that the Land Board grant the construction permit. 
On September 28, 2017, the BLNR, acting on Amano's report, approved, by a vote of 5-2, a Conservation District Use Permit (CDUP) for the TMT. Numerous conditions, including the removal of three existing telescopes and an assertion that the TMT is to be the last telescope on the mountain, were attached to the permit. On October 30, 2018, the Supreme Court of Hawaii ruled 4-1 that the revised permit was acceptable, allowing construction to proceed. On July 10, 2019, Hawaii Gov. David Ige and the Thirty Meter Telescope International Observatory jointly announced that construction would begin the week of July 15, 2019. 2019 blockade and aftermath On July 15, 2019, renewed protests blocked the access road, again preventing construction from commencing. On July 17, 38 protestors, all of whom were kupuna (elders), were arrested as the blockade of the access road continued. The blockade lasted 4 weeks and shut down all 12 observatories on Mauna Kea, the longest shutdown in the 50-year history of the observatories. The full shutdown ended when state officials brokered a deal that included building a new road around the campsite of the demonstrations and providing a complete list of vehicles accessing the road to show they were not associated with the TMT. The protests were labeled a fight for indigenous rights and a field-defining moment for astronomy. While there is both native and non-native Hawaiian support for the TMT, a "substantial percentage of the native Hawaiian population" opposes the construction and sees the proposal itself as a continued disregard of their basic rights. The 50 years of protests against the use of Mauna Kea have called into question the ethics of conducting research with telescopes on the mountain. The controversy is about more than the construction; it reflects generations of conflict between Native Hawaiians, the U.S. Government and private interests. The American Astronomical Society stated through its press officer, Rick Fienberg: "The Hawaiian people have numerous legitimate grievances concerning the way they’ve been treated over the centuries. These grievances have simmered for many years, and when astronomers announced their intention to build a new giant telescope on Maunakea, things boiled over". On July 18, 2019, an online petition titled "Impeach Governor David Ige" was posted to Change.org. The petition gathered over 25,000 signatures. The governor and others in his administration received death threats over the construction of the telescope. On December 19, 2019, Hawaii Governor David Ige announced that the state would reduce its law enforcement personnel on Mauna Kea. At the same time, the TMT project stated it was not prepared to start construction anytime soon. 2020s Early in 2020, TMT and the Giant Magellan Telescope (GMT) jointly presented their science and technical readiness to the U.S. National Academies Astro2020 panel. Chile is the site for GMT in the south, and Mauna Kea is being considered as the primary site for TMT in the north. The panel has produced a series of recommendations for implementing a strategy and vision for the coming decade of U.S. astronomy and astrophysics frontier research and for prioritizing projects for future funding. In July 2020, TMT confirmed it would not resume construction until 2021, at the earliest. The COVID-19 pandemic resulted in the TMT partnership working from home around the world and presented public health, travel, and logistical challenges.
On August 13, 2020, the Speaker of the Hawaii House of Representatives, Scott Saiki announced that the National Science Foundation (NSF) has initiated an informal outreach process to engage stakeholders interested in the Thirty Meter Telescope project. After listening to and considering the stakeholders’ viewpoints, the NSF acknowledged a delay in the environmental review process for TMT while seeking to provide a more inclusive, meaningful, and culturally appropriate process. In November 2021, Fengchuan Liu was appointed the Project Manager of TMT and moved his office to Hilo. , no further construction was announced or initiated. Continued progress on instrument design, mirror casting & polishing, and other critical operational technicalities were worked through or were being worked on. In July 2023 a new state appointed board, the Maunakea Stewardship Oversight Authority, began a five-year transition to assume management over the Mauna Kea site and all telescopes on the mountain. While there are no specific timelines or schedules regarding new start or completion dates, activist Noe Noe Wong-Wilson is quoted by Astronomy magazine as saying, "It's still early in the life of the new authority, but there's actually a pathway forward." The authority includes representatives from Native Hawaiian communities and cultural practitioners as well as astronomers and others. The body will have full control of the site from July 2028. Opposition in the Canary Islands In response to the ongoing protests that occurred in July 2019, the TMT project officials requested a building permit for a second site choice, the Spanish island of La Palma in the Canary Islands. Rafael Rebolo, the director of the Canary Islands Astrophysics Institute, confirmed that he had received a letter requesting a building permit for the site as a backup in case the Hawaii site cannot be constructed. Some astronomers argue however that La Palma is not an adequate site to build the telescope due to the island’s comparatively low elevation, which would enable water vapor to frequently interfere with observations due to water vapor’s tendency to absorb light at midinfrared wavelengths. Such atmospheric interference could impact observing times for research into exoplanets, galactic formation, and cosmology. Other astronomers argue that construction of the telescope in La Palma would disrupt projected international collaboration between the United States and other involved countries such as Japan, Canada, and France. Environmentalists such as Ben Magec and the environmental advocacy organization Ecologistas en Acción in the Canary Islands are gearing up to fight against its construction there as well. According to EEA spokesperson Pablo Bautista, the projected TMT construction area in the Canary Islands exists inside a protected conservation refuge which hosts at least three archeological sites of the indigenous Guanche people, who lived on the islands for thousands of years before Spanish colonization. On July 29, 2021, Judge Roi López Encinas of the High Court of Justice of the Canary Islands, revoked the 2017 concession of public lands by local authorities for the TMT construction. Encinas ruled that the land concessions were invalid as they were not covered by an international treaty on scientific research and that the TMT International Observatory consortium did not express concrete intent to build on the La Palma site as opposed to the site in Mauna Kea. 
On July 19, 2022, the National Science Foundation announced that it would carry out a new environmental survey of the possible impacts of constructing the Thirty Meter Telescope at the proposed building sites at both Mauna Kea and the Canary Islands. Continued funding for the telescope would not be considered before the results of the environmental survey, updates on the project's technical readiness, and comments from the public. By 2023, TIO had addressed these objections and was cleared to build at the La Palma site.
https://en.wikipedia.org/wiki/Bordetella%20pertussis
Bordetella pertussis
Bordetella pertussis is a Gram-negative, aerobic, pathogenic, encapsulated coccobacillus bacterium of the genus Bordetella, and the causative agent of pertussis or whooping cough. Its virulence factors include pertussis toxin, adenylate cyclase toxin, filamentous haemagglutinin, pertactin, fimbriae, and tracheal cytotoxin. The bacteria are spread by airborne droplets, and the disease's incubation period is 7–10 days on average (range 6–20 days). Humans are the only known reservoir for B. pertussis. The complete B. pertussis genome of 4,086,186 base pairs was published in 2003. Compared to its closest relative B. bronchiseptica, the genome size is greatly reduced. This is mainly due to the adaptation to one host species (human) and the loss of capability of survival outside a host body. Like B. bronchiseptica, B. pertussis can express a flagellum-like structure, even though it has been historically categorized as a nonmotile bacterium. Taxonomy The genus Bordetella contains nine species: B. pertussis, B. parapertussis, B. bronchiseptica, B. avium, B. hinzii, B. holmesii, B. trematum, B. ansorpii, and B. petrii. B. pertussis, B. parapertussis and B. bronchiseptica form a closely related phylogenetic group. B. parapertussis causes a disease similar to whooping cough in humans, and B. bronchiseptica infects a range of mammal hosts, including humans, and causes a spectrum of respiratory disorders. Evolution The disease pertussis was first described by French physician Guillaume de Baillou after the epidemic of 1578. The disease may have been described earlier in a Korean medical textbook. The causative agent of pertussis was identified and isolated by Jules Bordet and Octave Gengou in 1906. According to 16S rRNA gene sequencing data, the genus Bordetella is believed to have evolved from ancestors that could survive in the soil. 16S rRNA is a component of all bacteria that allows for the comparison of phyla within a sample. The expansion of human activity into agriculture increased human contact with soil. This increase not only created more advantageous environments for the ancestors of Bordetella to thrive in, but also allowed them to spread to humans. Over time, Bordetella species such as B. pertussis have adapted to infect humans specifically, while others are still able to multiply and thrive in soil conditions. It was initially determined that B. pertussis is a monomorphic pathogen in which the majority of strains carried one of two allele types: ptxA1 or ptxA2. Modern developments in genome sequencing have allowed B. pertussis to be studied in greater detail, leading to the discovery of the ptxP region. Study of this gene has revealed mutations in which stretches of the genome are missing from the DNA strand. A study by Bart et al. revealed that 25% of the genes on the Tohama I reference strain of the B. pertussis sequence were missing in comparison to the ancestral strains. These mutations were noted to be caused by an increase in intragenomic recombination with loss of DNA. Genes controlled by the BvgAS system have transformed B. pertussis into a much more contagious pathogen. In particular, strains with the ptxP3 allele, which developed through mutations in recent years, show increased expression of toxins. Ultimately, this leads to more severe disease when contracted. This has caused an upward trend in which most cases of B. pertussis involve the ptxP3 strain, especially in developing countries.
Since the 1990s, most cases in developed countries such as the United States have involved ptxP3 isolates rather than ptxA1, making ptxP3 the more dominant strain. Growth requirements Bordetella pertussis prefers aerobic conditions in a pH range of 7.0–7.5, which is optimal for growth in the human body. The maximum pH for growth is about 8.0, and the minimum is about 6.0–6.5. The bacteria are not able to reproduce at pH levels lower than 5.0. In addition, Bordetella pertussis favors a temperature range of 35 °C to 37 °C. As mentioned previously, it is a strict aerobe, and its nutritional requirements are exacting, including a requirement for supplemental nicotinamide. Growth of the bacteria is hindered in the presence of fatty acids, peroxide media, metal ions, and sulfides. As a strict aerobe, the bacterium requires oxygen to grow and survive. Such aerobes undergo cellular respiration to metabolize substances using oxygen. In such respiration, the terminal electron acceptor for the electron transport chain is oxygen. The organism is oxidase positive, but urease, nitrate reductase, and citrate negative. Metabolism B. pertussis presents unique challenges and opportunities for metabolic modeling, especially given its reemergence as a pathogen. Elevated glutamate levels were found to slow growth due to oxidative stress, revealing a complex relationship. This effect is compounded by observations suggesting that a small starting population could amplify oxidative stress through quorum sensing, a phenomenon deserving further investigation. When B. pertussis is grown in a balanced medium of lactate and glutamate that does not accumulate ammonium, a partially faulty citric acid cycle and an ability to synthesize and break down β-hydroxybutyrate are observed. Cultivating B. pertussis in this medium resulted in some production of polyhydroxybutyrate but no excretion of β-hydroxybutyrate, indicating a more efficient conversion of carbon into biomass compared to existing media formulations. In biofilm conditions, B. pertussis cells exhibited increased toxin levels alongside reduced expression of certain proteins, indicating a metabolic shift towards utilizing the full tricarboxylic acid (TCA) cycle over the glyoxylate shunt. These changes correlated with heightened polyhydroxybutyrate accumulation and superoxide dismutase activity, potentially contributing to prolonged survival in biofilms. The interplay between protein expression and metabolic responses highlights the intricate mechanisms influencing B. pertussis growth and adaptation. Despite a less negative energy profile compared to host tissues like the human respiratory system, B. pertussis efficiently couples biosynthesis with catabolism, sustaining robust growth even after extended incubation periods. Host species Humans are the only host species of B. pertussis. Outbreaks of whooping cough have been observed among chimpanzees in a zoo and wild gorillas; in both cases, it is considered likely that the infection was acquired as a result of close contact with humans. Several zoos have a long-standing custom of vaccinating their primates against whooping cough. Research shows that some primate species are highly sensitive to B. pertussis and develop clinical whooping cough at high incidence when exposed to low inoculation doses. Whether the bacteria spread naturally in wild animal populations has not been confirmed satisfactorily by laboratory diagnosis.
In research settings, baboons have been used as a model of the infection, although it is not known whether the pathology in baboons is the same as in humans. Pertussis Pertussis is an infection of the respiratory system characterized by a "whooping" sound when the person breathes in. B. pertussis infects its host by colonizing lung epithelial cells. The bacterium contains a surface protein, filamentous haemagglutinin adhesin, which binds to the sulfatides found on cilia of epithelial cells. Other adhesins are fimbriae and pertactin. Once anchored, the bacterium produces tracheal cytotoxin, which stops the cilia from beating. This prevents the cilia from clearing debris from the lungs, so the body responds by sending the host into a coughing fit. B. pertussis can inhibit the function of the host's immune system. The toxin, known as pertussis toxin, inhibits G protein coupling that regulates an adenylate cyclase-mediated conversion of ATP to cyclic adenosine monophosphate. The result is that phagocytes convert too much adenosine triphosphate to cyclic adenosine monophosphate, causing disturbances in cellular signaling mechanisms and preventing phagocytes from correctly responding to the infection. Pertussis toxin, formerly known as lymphocytosis-promoting factor, causes a decrease in the entry of lymphocytes into lymph nodes, which can lead to a condition known as lymphocytosis, with a complete lymphocyte count of over 4000/μl in adults or over 8000/μl in children. Besides targeting lymphocytes, it limits neutrophil migration to the lungs. It also decreases the function of tissue-resident macrophages, which are responsible for some bacterial clearance. Infection with B. pertussis occurs mostly in children under the age of one, when they are not yet immunized, or in children whose immunity has waned, normally around the ages of 11 through 18. The signs and symptoms are similar to a common cold: runny nose, sneezing, mild cough, and low-grade fever. The patient becomes most contagious during the catarrhal stage of infection, normally two weeks after the coughing begins. The bacteria may become airborne when the person coughs, sneezes, or laughs. The paroxysmal cough precedes a crowing inspiratory sound characteristic of pertussis. After a spell, the patient might make a "whooping" sound when breathing in or may vomit. Transmission rates are expected to rise when the host is at the most contagious stage, when the total viable count of B. pertussis is at its highest. After the host coughs, the bacteria in their respiratory airways are exposed to the air by way of aerosolized droplets, threatening nearby humans. A human host can exhibit a range of physical reactions to the B. pertussis pathogen, depending on how well their body is equipped to fight infection. Adults have milder symptoms, such as prolonged coughing without the "whoop". Infants less than six months old also may not have the typical whoop. A coughing spell may last a minute or more, producing cyanosis, apnea, and seizures.
It was noted that between 1991 and 2008, there were 258 deaths among infants aged 8 months and younger. Progression of disease Pertussis manifests in three distinct stages. The dynamic progression of pertussis, characterized by its distinct phases from incubation to paroxysmal coughing, underscores the complexity of the disease's clinical manifestations and highlights the potential significance of toxin release in driving symptoms. Following exposure, an incubation period of 5–7 days ensues before symptoms appear. The catarrhal phase follows, characterized by cold-like symptoms lasting about a week, with a high isolation rate of the organism. This phase transitions into the paroxysmal phase, where the dry cough evolves into a severe, paroxysmal cough with mucus secretion and vomiting. The coughing fits, characterized by efforts to expel respiratory secretions, may result in a distinctive whooping sound. Recovery of the organism diminishes significantly during this phase. Although the organism is seldom detected in the blood, it is theorized that the clinical symptoms primarily stem from toxin release. The paroxysmal phase typically persists for a minimum of 2 weeks. Diagnosis A nasopharyngeal swab or aspirate can be sent to the bacteriology laboratory for Gram stain (Gram-negative, coccobacilli, diplococci arrangement), with growth on Bordet–Gengou agar or buffered charcoal yeast extract agar with added cephalosporin to select for the organism, which shows mercury drop-like colonies. Endotracheal tube aspirates or bronchoalveolar lavage fluids are preferred for laboratory diagnostics due to their direct contact with the ciliated epithelial cells and higher isolation rates of the pathogen. Laboratory diagnostic methods used to identify B. pertussis: Serology: Identification of specific agglutinating antibodies in the patient's blood serum, with a high sensitivity and specificity rate. Able to detect the level of virulence and measure the immune response to the pathogen. Recommended for samples corresponding to the catarrhal phase of the illness. Not used in infants due to the delay in positive results, which often indicates the disease has progressed. Sparked the development of ELISA kits. Microbiological culture: Known for high specificity, the ability to subtype the colonies presented, and limited sensitivity. Ideal for antimicrobial-resistance monitoring. Specificity results can be affected by age, immunization status, duration of symptoms, and even specimen handling. It is very difficult to cultivate separate pathogens, and only high bacterial loads can lead to a positive culture. The ideal stage for isolation is the catarrhal stage or the beginning of the paroxysmal stage. Vaccinated persons also have a lower rate of isolation. Plates are incubated at 36 °C under high humidity for 7–10 days before obtaining results. Classical PCR assay: As the test of choice, this procedure is known for its speed and high sensitivity; however, it is often inaccurate when distinguishing between Bordetella species. The primers used for PCR usually target the transposable elements IS481 and IS1001. Recommended for infants and for samples corresponding to the catarrhal phase of the illness. It can detect the pathogens in atypical manifestations and in vaccinated patients for longer periods, compared to the culture. Target genes within B. pertussis are IS481, IS1002, ptxS1, Ptx-Pr, and BP3385; however, B. bronchiseptica and B.
holmesii contain similar gene expression, making it difficult to differentiate between the bacteria in the laboratory. The most effective technique to differentiate between the two bacteria is by comparing human and animal isolates. Singleplex PCR identifies the target gene ptxS1. Direct Fluorescent Antibody Testing (DFA): Inexpensive and direct results of Bordetella detection, with poor sensitivity and specificity. This test stains the nasopharyngeal secretions with a fluorescently modified antibody that binds directly to the B. pertussis or B. parapertussis bacteria. If positive, the binding antibody glows under the microscope. Because of the low specificity, false positives are common when polyclonal antibodies are used. Several diagnostic tests are available, particularly enzyme-linked immunosorbent assay (ELISA) kits. These are designed to detect filamentous hemagglutinin (FHA) and/or anti-pertussis-toxin antibodies of IgG, IgA, or IgM. Some kits use a combination of antigens, which leads to higher sensitivity, but might also make the interpretation of the results harder, since one cannot know which antibody has been detected. Misdiagnosis is common due to diagnostic techniques, misidentification between species in laboratories, and clinician error. Misdiagnoses between Bordetella species further increase the likelihood of antibiotic resistance. These factors highlight the need for a procedure that targets all species through specific and fast methods. Treatment and prevention Treatment Whooping cough is treated with macrolides, for example erythromycin. The therapy is most effective when started during the incubation period or the catarrhal period. Ideally, treatment should begin within 1–2 weeks of the onset of symptoms. When started during the paroxysmal cough phase, treatment does not affect the time to convalescence; it only reduces further transmission, to 5–10 days after infection. Prevention Pertussis vaccine has been widely used since the second half of the 20th century. The first vaccines were whole-cell vaccines (wP), composed of chemically inactivated bacteria and given intramuscularly. When given, the inactivated bacteria and antigens trigger the immune response and mimic natural infection. Due to frequent reports of reactions at the injection site, scientists started to replace whole-cell vaccines with acellular pertussis (aP) vaccines, which have recently been shown to provide a shorter duration of immunity and a lower level of protection against colonization. These acellular vaccines are also given intramuscularly and are composed of purified surface antigens, mainly fimbriae, filamentous hemagglutinin, pertactin and pertussis toxin. Both vaccines are still used today, with the aP vaccine predominantly used in developed countries. The aP vaccine is also a part of the diphtheria, tetanus, and acellular pertussis (DTaP) immunization. Those receiving these vaccines are recommended to receive boosters, as the vaccines only afford protection for about 4–12 years, while natural infection offers 7–20 years. Cases in infants are common and often serious, as infants are more susceptible to Bordetella pertussis than adolescents and healthy adults. Therefore, to decrease the likelihood of contracting and spreading this disease, parents are recommended to receive the preventative vaccine. With the resurgence of pertussis cases, there are concerns regarding the level of protection provided by the current vaccine. This vaccine does not offer protection against other species of Bordetella such as B.
holmesii and B. bronchiseptica, which further highlights the need for a revamped vaccine. Researchers are currently developing novel vaccines such as BPZE1, a live attenuated vaccine against B. pertussis that also challenges the other pathogens of the 'classical Bordetellae'. In this new vaccine, the genes encoding three major toxins are inactivated, and it is given as a single intranasal dose. It is currently being studied for safety in immunocompromised patients and pregnant women. Other promising vaccines are under study and in trials for accuracy, efficacy, and safety.
https://en.wikipedia.org/wiki/Dog
Dog
The dog (Canis familiaris or Canis lupus familiaris) is a domesticated descendant of the wolf. Also called the domestic dog, it was selectively bred from an extinct population of wolves during the Late Pleistocene by hunter-gatherers. The dog was the first species to be domesticated by humans, over 14,000 years ago and before the development of agriculture. Experts estimate that due to their long association with humans, dogs have gained the ability to thrive on a starch-rich diet that would be inadequate for other canids. Dogs have been bred for desired behaviors, sensory capabilities, and physical attributes. Dog breeds vary widely in shape, size, and color. They have the same number of bones (with the exception of the tail), powerful jaws that house around 42 teeth, and well-developed senses of smell, hearing, and sight. Compared to humans, dogs have an inferior visual acuity, a superior sense of smell, and a relatively large olfactory cortex. They perform many roles for humans, such as hunting, herding, pulling loads, protection, companionship, therapy, aiding disabled people, and assisting police and the military. Communication in dogs includes eye gaze, facial expression, vocalization, body posture (including movements of bodies and limbs), and gustatory communication (scents, pheromones, and taste). They mark their territories by urinating on them, which is more likely when entering a new environment. Over the millennia, dogs became uniquely adapted to human behavior; this adaptation includes being able to understand and communicate with humans. As such, the human–canine bond has been a topic of frequent study, and dogs' influence on human society has given them the sobriquet of "man's best friend". The global dog population is estimated at 700 million to 1 billion, distributed around the world. The dog is the most popular pet in the United States, present in 34–40% of households. Developed countries make up approximately 20% of the global dog population, while around 75% of dogs are estimated to be from developing countries, mainly in the form of feral and community dogs. Taxonomy Dogs are domesticated members of the family Canidae. They are classified as a subspecies of Canis lupus, along with wolves and dingoes. Dogs were domesticated from wolves over 14,000 years ago by hunter-gatherers, before the development of agriculture. The remains of the Bonn–Oberkassel dog, buried alongside humans between 14,000 and 15,000 years ago, are the earliest to be conclusively identified as a domesticated dog. Genetic studies show that dogs likely diverged from wolves between 27,000 and 40,000 years ago. The dingo and the related New Guinea singing dog resulted from the geographic isolation and feralization of dogs in Oceania over 8,000 years ago. Dogs, wolves, and dingoes have sometimes been classified as separate species. In 1758, the Swedish botanist and zoologist Carl Linnaeus assigned the genus name Canis (which is the Latin word for "dog") to the domestic dog, the wolf, and the golden jackal in his book, Systema Naturae. He classified the domestic dog as Canis familiaris and, on the next page, classified the grey wolf as Canis lupus. Linnaeus considered the dog to be a separate species from the wolf because of its upturning tail (cauda recurvata in Latin term), which is not found in any other canid. In the 2005 edition of Mammal Species of the World, mammalogist W. 
Christopher Wozencraft listed the wolf as a wild subspecies of Canis lupus and proposed two additional subspecies: familiaris, as named by Linnaeus in 1758, and dingo, named by Meyer in 1793. Wozencraft included hallstromi (the New Guinea singing dog) as another name (junior synonym) for the dingo. This classification was informed by a 1999 mitochondrial DNA study. The classification of dingoes is disputed and a political issue in Australia. Classifying dingoes as wild dogs simplifies reducing or controlling dingo populations that threaten livestock. Treating dingoes as a separate species allows conservation programs to protect the dingo population. Dingo classification affects wildlife management policies, legislation, and societal attitudes. In 2019, a workshop hosted by the IUCN/Species Survival Commission's Canid Specialist Group considered the dingo and the New Guinea singing dog to be feral Canis familiaris. Therefore, it did not assess them for the IUCN Red List of threatened species. Domestication The earliest remains generally accepted to be those of a domesticated dog were discovered in Bonn-Oberkassel, Germany. Contextual, isotopic, genetic, and morphological evidence shows that this dog was not a local wolf. The dog was dated to 14,223 years ago and was found buried along with a man and a woman, all three having been sprayed with red hematite powder and buried under large, thick basalt blocks. The dog had died of canine distemper. This timing indicates that the dog was the first species to be domesticated in the time of hunter-gatherers, which predates agriculture. Earlier remains dating back to 30,000 years ago have been described as Paleolithic dogs, but their status as dogs or wolves remains debated because considerable morphological diversity existed among wolves during the Late Pleistocene. DNA sequences show that all ancient and modern dogs share a common ancestry and descended from an ancient, extinct wolf population that was distinct from any modern wolf lineage. Some studies have posited that all living wolves are more closely related to each other than to dogs, while others have suggested that dogs are more closely related to modern Eurasian wolves than to American wolves. The dog is a domestic animal that likely travelled a commensal pathway into domestication (i.e. humans initially neither benefitted nor were harmed by wild dogs eating refuse from their camps). The questions of when and where dogs were first domesticated remain uncertain. Genetic studies suggest a domestication process commencing over 25,000 years ago, in one or several wolf populations in either Europe, the high Arctic, or eastern Asia. In 2021, a literature review of the current evidence inferred that the dog was domesticated in Siberia 23,000 years ago by ancient North Siberians, then later dispersed eastward into the Americas and westward across Eurasia, with dogs likely accompanying the first humans to inhabit the Americas. Some studies have suggested that the extinct Japanese wolf is closely related to the ancestor of domestic dogs. In 2018, a study identified 429 genes that differed between modern dogs and modern wolves. As the differences in these genes could also be found in ancient dog fossils, these were regarded as being the result of the initial domestication and not from recent breed formation. These genes are linked to neural crest and central nervous system development. 
These genes affect embryogenesis and can confer tameness, smaller jaws, floppy ears, and diminished craniofacial development, which distinguish domesticated dogs from wolves and are considered to reflect domestication syndrome. The study concluded that during early dog domestication, the initial selection was for behavior. This trait is influenced by the genes that act in the neural crest, which led to the phenotypes observed in modern dogs. Breeds There are around 450 official dog breeds, the most of any mammal. Dogs began diversifying in the Victorian era, when humans took control of their breeding. Most breeds were derived from small numbers of founders within the last 200 years. Since then, dogs have undergone rapid phenotypic change and have been subjected to artificial selection by humans. The skull, body, and limb proportions between breeds display more phenotypic diversity than can be found within the entire order of carnivores. These breeds possess distinct traits related to morphology, which include body size, skull shape, tail phenotype, fur type, and colour. As such, humans have long used dogs for their desirable traits to fulfill particular work or roles. Their behavioural traits include guarding, herding, hunting, retrieving, and scent detection. Their personality traits include hypersocial behavior, boldness, and aggression. Present-day dogs are dispersed around the world. An example of this dispersal is the many modern breeds of European lineage that arose during the Victorian era. Anatomy and physiology Size and skeleton Dogs are extremely variable in size, ranging from one of the largest breeds, the Great Dane, to one of the smallest, the Chihuahua. All healthy dogs, regardless of their size and type, have the same number of bones (with the exception of the tail), although there is significant skeletal variation between dogs of different types. The dog's skeleton is well adapted for running; the vertebrae on the neck and back have extensions for back muscles, consisting of epaxial muscles and hypaxial muscles, to connect to; the long ribs provide room for the heart and lungs; and the shoulders are unattached to the skeleton, allowing for flexibility. Compared to the dog's wolf-like ancestors, selective breeding since domestication has seen the dog's skeleton increase in size for larger types such as mastiffs and miniaturised for smaller types such as terriers; dwarfism has been selectively bred for some types where short legs are preferred, such as dachshunds and corgis. Most dogs naturally have 26 vertebrae in their tails, but some with naturally short tails have as few as three. The dog's skull has identical components regardless of breed type, but there is significant divergence in terms of skull shape between types. The three basic skull shapes are the elongated dolichocephalic type as seen in sighthounds, the intermediate mesocephalic or mesaticephalic type, and the very short and broad brachycephalic type exemplified by mastiff type skulls. The jaw contains around 42 teeth, and it has evolved for the consumption of flesh. Dogs use their carnassial teeth to cut food, especially meat, into bite-sized chunks. Senses Dogs' senses include vision, hearing, smell, taste, touch, and magnetoreception. One study suggests that dogs can feel small variations in Earth's magnetic field. Dogs prefer to defecate with their spines aligned in a north–south position in calm magnetic field conditions. 
Dogs' vision is dichromatic; their visual world consists of yellows, blues, and grays. They have difficulty differentiating between red and green, and much like other mammals, the dog's eye is composed of two types of cone cells compared to the human's three. The divergence of the eye axis of dogs ranges from 12 to 25°, depending on the breed; breeds can also have different retina configurations. The fovea centralis area of the eye is attached to a nerve fiber, and is the most sensitive to photons. Additionally, a study found that dogs' visual acuity was up to eight times less effective than a human's, and their ability to discriminate levels of brightness was about two times worse than a human's. While the human brain is dominated by a large visual cortex, the dog brain is dominated by a large olfactory cortex. Dogs have roughly forty times more smell-sensitive receptors than humans, ranging from about 125 million to nearly 300 million in some dog breeds, such as bloodhounds. This sense of smell is the most prominent sense of the species; it detects chemical changes in the environment, allowing dogs to pinpoint the location of mating partners, potential stressors, resources, etc. Dogs also have an acute sense of hearing, and can pick up the slightest sounds from up to four times the distance at which humans can hear them. Dogs have stiff, deeply embedded hairs known as whiskers that sense atmospheric changes, vibrations, and objects not visible in low light conditions. The lowermost part of the whiskers holds more receptor cells than other hair types, which helps alert dogs to objects that could collide with the nose, ears, and jaw. Whiskers likely also facilitate the movement of food towards the mouth. Coat The coats of domestic dogs are of two varieties: "double" being common in dogs (as well as wolves) originating from colder climates, made up of a coarse guard hair and a soft down hair, or "single", with the topcoat only. Breeds may have an occasional "blaze", stripe, or "star" of white fur on their chest or underside. Premature graying can occur in dogs as early as one year of age; this is associated with impulsive behaviors, anxiety behaviors, and fear of unfamiliar noise, people, or animals. Some dog breeds are hairless, while others have a very thick corded coat. The coats of certain breeds are often groomed to a characteristic style, for example, the Yorkshire Terrier's "show cut". Dewclaw A dog's dewclaw is the fifth digit on its forelimbs and hind legs. Dewclaws on the forelimbs are attached by bone and ligament, while the dewclaws on the hind legs are attached only by skin. Most dogs are not born with dewclaws on their hind legs, and some lack them on their forelimbs. Dogs' dewclaws consist of the proximal phalanges and distal phalanges. Some publications theorize that dewclaws in wolves, which usually do not have them, were a sign of hybridization with dogs. Tail A dog's tail is the terminal appendage of the vertebral column, which is made up of a string of 5 to 23 vertebrae enclosed in muscles and skin that support the dog's back extensor muscles. One of the primary functions of a dog's tail is to communicate its emotional state. The tail also helps the dog maintain balance by putting its weight on the opposite side of the dog's tilt, and it can also help the dog spread its anal gland's scent through the tail's position and movement. 
Dogs can have a violet gland (or supracaudal gland) characterized by sebaceous glands on the dorsal surface of their tails; in some breeds, it may be vestigial or absent. The enlargement of the violet gland in the tail, which can create a bald spot from hair loss, can be caused by Cushing's disease or an excess of sebum from androgens in the sebaceous glands. A study suggests that dogs show asymmetric tail-wagging responses to different emotive stimuli. "Stimuli that could be expected to elicit approach tendencies seem to be associated with [a] higher amplitude of tail-wagging movements to the right side". Dogs can injure themselves by wagging their tails forcefully; this condition is called kennel tail, happy tail, bleeding tail, or splitting tail. In some hunting dogs, the tail is traditionally docked to avoid injuries. Some dogs can be born without tails because of a DNA variant in the T gene, which can also result in a congenitally short (bobtail) tail. Tail docking is opposed by many veterinary and animal welfare organisations such as the American Veterinary Medical Association and the British Veterinary Association. Evidence from veterinary practices and questionnaires showed that around 500 dogs would need to have their tail docked to prevent one injury. Health Numerous disorders have been known to affect dogs. Some are congenital and others are acquired. Dogs can acquire upper respiratory tract diseases including diseases that affect the nasal cavity, the larynx, and the trachea; lower respiratory tract diseases which includes pulmonary disease and acute respiratory diseases; heart diseases which includes any cardiovascular inflammation or dysfunction of the heart; haemopoietic diseases including anaemia and clotting disorders; gastrointestinal disease such as diarrhoea and gastric dilatation volvulus; hepatic disease such as portosystemic shunts and liver failure; pancreatic disease such as pancreatitis; renal disease; lower urinary tract disease such as cystitis and urolithiasis; endocrine disorders such as diabetes mellitus, Cushing's syndrome, hypoadrenocorticism, and hypothyroidism; nervous system diseases such as seizures and spinal injury; musculoskeletal disease such as arthritis and myopathies; dermatological disorders such as alopecia and pyoderma; ophthalmological diseases such as conjunctivitis, glaucoma, entropion, and progressive retinal atrophy; and neoplasia. Common dog parasites are lice, fleas, fly larvae, ticks, mites, cestodes, nematodes, and coccidia. Taenia is a notable genus with 5 species in which dogs are the definitive host. Additionally, dogs are a source of zoonoses for humans. They are responsible for 99% of rabies cases worldwide; however, in some developed countries such as the UK, rabies is absent from dogs and is instead only transmitted by bats. Other common zoonoses are hydatid disease, leptospirosis, pasteurellosis, ringworm, and toxocariasis. Common infections in dogs include canine adenovirus, canine distemper virus, canine parvovirus, leptospirosis, canine influenza, and canine coronavirus. All of these conditions have vaccines available. Dogs are the companion animal most frequently reported for exposure to toxins. Most poisonings are accidental and over 80% of reports of exposure to the ASPCA animal poisoning hotline are due to oral exposure. The most common substances people report exposure to are: pharmaceuticals, toxic foods, and rodenticides. 
Data from the Pet Poison Helpline shows that human drugs are the most frequent cause of toxicosis death. The most common household products ingested are cleaning products. Most food-related poisonings involved theobromine poisoning (chocolate). Other common food poisonings include xylitol, Vitis (grapes, raisins, etc.) and Allium (garlic, onions, etc.). Pyrethrin insecticides were the most common cause of pesticide poisoning. Metaldehyde, a common pesticide for snails and slugs, typically causes severe outcomes when ingested by dogs. Neoplasia is the most common cause of death for dogs. Other common causes of death are heart and renal failure. Their pathology is similar to that of humans, as is their response to treatment and their outcomes. Genes found in humans to be responsible for disorders are investigated in dogs as being the cause and vice versa. Lifespan The typical lifespan of dogs varies widely among breeds, but the median longevity (the age at which half the dogs in a population have died and half are still alive) is approximately 12.7 years. Obesity correlates negatively with longevity, with one study finding obese dogs to have a life expectancy approximately a year and a half less than dogs with a healthy weight. In a 2024 UK study analyzing 584,734 dogs, it was concluded that purebred dogs lived longer than crossbred dogs, challenging the previous notion that the latter have higher life expectancies. The authors noted that their study included "designer dogs" as crossbred and that purebred dogs were typically given better care than their crossbred counterparts, which likely influenced the outcome of the study. Other studies also show that fully mongrel dogs live about a year longer on average than dogs with pedigrees. Furthermore, small dogs with longer muzzles have been shown to have longer lifespans than larger, medium-sized dogs with much flatter muzzles. For free-ranging dogs, fewer than 1 in 5 reach sexual maturity, and the median life expectancy for feral dogs is less than half that of dogs living with humans. Reproduction In domestic dogs, sexual maturity occurs at around six months to one year for both males and females, although this can be delayed until up to two years of age for some large breeds. This is the time at which female dogs will have their first estrous cycle, characterized by their vulvas swelling and producing discharges, usually lasting between 4 and 20 days. They will experience subsequent estrous cycles semiannually, during which the body prepares for pregnancy. At the peak of the cycle, females will become estrous, mentally and physically receptive to copulation. Because the ova survive and can be fertilized for a week after ovulation, more than one male can sire the same litter. Fertilization typically occurs two to five days after ovulation. After ejaculation, the dogs are coitally tied for around 5–30 minutes because of the male's bulbus glandis swelling and the female's constrictor vestibuli contracting; the male will continue ejaculating until they untie naturally due to muscle relaxation. Fourteen to sixteen days after ovulation, the embryo attaches to the uterus, and after seven to eight more days, a heartbeat is detectable. Dogs bear their litters roughly 58 to 68 days after fertilization, with an average of 63 days, although the length of gestation can vary. An average litter consists of about six puppies. 
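To make the definition of median longevity quoted in the Lifespan section above concrete, the short sketch below contrasts the median with the mean for a small, made-up sample of ages at death. It is illustrative only; the ages are invented and are not data from any study cited here.

```python
# Illustrative only: the ages below are made up, not real survey data.
# Median longevity is the age by which half of the dogs in the sample
# have died; it is less sensitive to outliers (unusually early or late
# deaths) than the arithmetic mean.
from statistics import mean, median

ages_at_death = [5.0, 8.0, 9.5, 10.0, 11.5, 12.0, 13.0, 13.5, 14.0, 16.0]

print(f"median longevity: {median(ages_at_death):.2f} years")  # 11.75
print(f"mean longevity:   {mean(ages_at_death):.2f} years")    # 11.25
```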
Neutering Neutering is the sterilization of animals via gonadectomy, which is an orchidectomy (castration) in dogs and ovariohysterectomy (spay) in bitches. Neutering reduces problems caused by hypersexuality, especially in male dogs. Spayed females are less likely to develop cancers affecting the mammary glands, ovaries, and other reproductive organs. However, neutering increases the risk of urinary incontinence in bitches, prostate cancer in dogs, and osteosarcoma, hemangiosarcoma, cruciate ligament rupture, pyometra, obesity, and diabetes mellitus in either sex. Neutering is the most common surgical procedure in dogs less than a year old in the US and is seen as a control method for overpopulation. Neutering often occurs as early as 6–14 weeks in shelters in the US. The American Society for the Prevention of Cruelty to Animals (ASPCA) advises that dogs not intended for further breeding should be neutered so that they do not have undesired puppies that may later be euthanized. However, the Society for Theriogenology and the American College of Theriogenologists made a joint statement that opposes mandatory neutering; they said that the cause of overpopulation in the US is cultural. Neutering is less common in most European countries, especially in Nordic countries—except for the UK, where it is common. In Norway, neutering is illegal unless for the benefit of the animal's health (e.g., ovariohysterectomy in case of ovarian or uterine neoplasia). Some European countries have similar laws to Norway, but their wording either explicitly allows for neutering for controlling reproduction or it is allowed in practice or by contradiction through other laws. Italy and Portugal have passed recent laws that promote it. Germany forbids early age neutering, but neutering is still allowed at the usual age. In Romania, neutering is mandatory except for when a pedigree to select breeds can be shown. Inbreeding depression A common breeding practice for pet dogs is to mate them between close relatives (e.g., between half- and full-siblings). In a study of seven dog breeds (the Bernese Mountain Dog, Basset Hound, Cairn Terrier, Brittany, German Shepherd Dog, Leonberger, and West Highland White Terrier), it was found that inbreeding decreases litter size and survival. Another analysis of data on 42,855 Dachshund litters found that as the inbreeding coefficient increased, litter size decreased and the percentage of stillborn puppies increased, thus indicating inbreeding depression. In a study of Boxer litters, 22% of puppies died before reaching 7 weeks of age. Stillbirth was the most frequent cause of death, followed by infection. Mortality due to infection increased significantly with increases in inbreeding. Behavior Dog behavior has been shaped by millennia of contact with humans. They have acquired the ability to understand and communicate with humans and are uniquely attuned to human behaviors. Behavioral scientists suggest that a set of social-cognitive abilities in domestic dogs that are not possessed by the dog's canine relatives or other highly intelligent mammals, such as great apes, are parallel to children's social-cognitive skills. Most domestic animals were initially bred for the production of goods. Dogs, on the other hand, were selectively bred for desirable behavioral traits. In 2016, a study found that only 11 fixed genes showed variation between wolves and dogs. 
These gene variations indicate the occurrence of artificial selection and the subsequent divergence of behavior and anatomical features. These genes have been shown to affect the catecholamine synthesis pathway, with the majority of the genes affecting the fight-or-flight response (i.e., selection for tameness) and emotional processing. Compared to their wolf counterparts, dogs tend to be less timid and less aggressive, though some of these genes have been associated with aggression in certain dog breeds. Traits of high sociability and lack of fear in dogs may be linked to genetic variants related to Williams-Beuren syndrome in humans, which cause hypersociability at the expense of problem-solving ability. In a 2023 study of 58 dogs, some dogs classified as attention deficit hyperactivity disorder-like showed lower serotonin and dopamine concentrations. A similar study claims that hyperactivity is more common in male and young dogs. A dog can become aggressive because of trauma or abuse, fear or anxiety, territorial protection, or protecting an item it considers valuable. Acute stress reactions from post-traumatic stress disorder (PTSD) seen in dogs can evolve into chronic stress. Police dogs with PTSD can often refuse to work. Dogs have a natural instinct called prey drive (a term chiefly used in dog training) which can be influenced by breeding. These instincts can drive dogs to consider objects or other animals to be prey or drive possessive behavior. These traits have been enhanced in some breeds so that they may be used to hunt and kill vermin or other pests. Puppies or dogs sometimes bury food underground. One study found that wolves outperformed dogs in finding food caches, likely due to a "difference in motivation" between wolves and dogs. Some puppies and dogs engage in coprophagy out of habit, stress, boredom, or to get attention; most of them will not do it later in life. One study hypothesizes that the behavior was inherited from wolves, where it likely evolved to lessen the presence of intestinal parasites in dens. Most dogs can swim. In a study of 412 dogs, around 36.5% of the dogs could not swim; the other 63.5% were able to swim without a trainer in a swimming pool. A study of 55 dogs found a correlation between swimming and 'improvement' of the hip osteoarthritis joint. Nursing The female dog may produce colostrum, a type of milk high in nutrients and antibodies, 1–7 days before giving birth. Milk production lasts for around three months, and increases with litter size. The female can sometimes vomit and refuse food during labor contractions. In the later stages of the dog's pregnancy, nesting behaviour may occur. Puppies are born with a protective fetal membrane that the mother usually removes shortly after birth. Dogs can have the maternal instincts to start grooming their puppies, consume their puppies' feces, and protect their puppies, likely due to their hormonal state. While male parent dogs can appear more indifferent toward their own puppies, most can play with the young pups as they would with other dogs or humans. A female dog may abandon or attack her puppies or her male partner dog if she is stressed or in pain. Intelligence Researchers have tested dogs' ability to perceive information, retain it as knowledge, and apply it to solve problems. Studies of two dogs suggest that dogs can learn by inference. A study with Rico, a Border Collie, showed that he knew the labels of over 200 different items. 
He inferred the names of novel things by exclusion learning and correctly retrieved those new items four weeks after the initial exposure. A study of another Border Collie, Chaser, documented that he had learned the names of over 1,000 objects and could retrieve them by verbal command. One study of canine cognitive abilities found that dogs' capabilities are similar to those of horses, chimpanzees, or cats. One study of 18 household dogs found that the dogs could not distinguish food bowls at specific locations without distinguishing cues; the study stated that this indicates a lack of spatial memory. A study stated that dogs have a visual sense of number; the dogs showed ratio-dependent activation for numerical values ranging from 1–3 up to values larger than four. Dogs demonstrate a theory of mind by engaging in deception. Another experimental study showed evidence that Australian dingoes can outperform domestic dogs in non-social problem-solving, indicating that domestic dogs may have lost much of their original problem-solving abilities once they joined humans. Another study showed that dogs stared at humans after failing to complete an impossible version of the same task they had been trained to solve. Wolves, under the same situation, avoided staring at humans altogether. Communication Dog communication is the transfer of information between dogs, as well as between dogs and humans. Communication behaviors of dogs include eye gaze, facial expression, vocalization, body posture (including movements of bodies and limbs), and gustatory communication (scents, pheromones, and taste). Dogs mark their territories by urinating on them, which is more likely when entering a new environment. Both sexes of dogs may also urinate to communicate anxiety or frustration, submissiveness, or when in exciting or relaxing situations. Arousal in dogs can be a result of higher cortisol levels. Dogs begin socializing with other dogs by the time they reach the ages of 3 to 8 weeks, and at about 5 to 12 weeks of age, they alter their focus from dogs to humans. Belly exposure in dogs can be a defensive behavior that may precede a bite, or a way of seeking comfort. Humans communicate with dogs by using vocalization, hand signals, and body posture. With their acute sense of hearing, dogs rely on the auditory aspect of communication for understanding and responding to various cues, including the distinctive barking patterns that convey different messages. A study using functional magnetic resonance imaging (fMRI) has shown that dogs process both vocal and nonvocal sounds in a brain region towards the temporal pole, similar to humans. Dogs also looked significantly longer at a face whose expression matched the valence of a vocalization. A study of caudate responses shows that dogs tend to respond more positively to social rewards than to food rewards. Ecology Population The dog is the most widely abundant large carnivoran living in the human environment. In 2020, the estimated global dog population was between 700 million and 1 billion. In the same year, a study found the dog to be the most popular pet in the United States, as they were present in 34 out of every 100 homes. About 20% of the dog population lives in developed countries. An estimated three-quarters of the world's dog population lives in the developing world as feral, village, or community dogs. 
Most of these dogs live as scavengers and have never been owned by humans, with one study showing that village dogs' most common response when approached by strangers is to run away (52%) or respond aggressively (11%). Competitors Feral and free-ranging dogs' potential to compete with other large carnivores is limited by their strong association with humans. Although wolves are known to kill dogs, wolves tend to live in pairs in areas where they are highly persecuted, giving them a disadvantage when facing large dog groups. In some instances, wolves have displayed an uncharacteristic fearlessness of humans and buildings when attacking dogs, to the extent that they have to be beaten off or killed. Although the numbers of dogs killed each year are relatively low, there is still a fear among humans of wolves entering villages and farmyards to take dogs, and losses of dogs to wolves have led to demands for more liberal wolf hunting regulations. Coyotes and big cats have also been known to attack dogs. In particular, leopards are known to have a preference for dogs and have been recorded to kill and consume them, no matter their size. Siberian tigers in the Amur river region have killed dogs in the middle of villages. They will not tolerate wolves as competitors within their territories, and the tigers could be considering dogs in the same way. Striped hyenas are known to kill dogs in their range. Dogs as introduced predators have affected the ecology of New Zealand, which lacked indigenous land-based mammals before human settlement. Dogs have made 11 vertebrate species extinct and are identified as a 'potential threat' to at least 188 threatened species worldwide. Dogs have also been linked to the extinction of 156 animal species. Dogs have been documented to have killed a few birds of the endangered species, the kagu, in New Caledonia. Diet Dogs are typically described as omnivores. Compared to wolves, dogs from agricultural societies have extra copies of amylase and other genes involved in starch digestion that contribute to an increased ability to thrive on a starch-rich diet. Similar to humans, some dog breeds produce amylase in their saliva and are classified as having a high-starch diet. Despite being an omnivore, dogs are only able to conjugate bile acid with taurine. They must get vitamin D from their diet. Of the twenty-one amino acids common to all life forms (including selenocysteine), dogs cannot synthesize ten: arginine, histidine, isoleucine, leucine, lysine, methionine, phenylalanine, threonine, tryptophan, and valine. Like cats, dogs require arginine to maintain nitrogen balance. These nutritional requirements place dogs halfway between carnivores and omnivores. 
Range As a domesticated or semi-domesticated animal, the dog is found in most human societies around the world; notable exceptions include: the Aboriginal Tasmanians, who were separated from Australia before the arrival of dingoes on that continent; the Andamanese peoples, who were isolated when rising sea levels covered the land bridge to Myanmar; the Fuegians, who instead domesticated the Fuegian dog, a different canid species that is now extinct; and individual Pacific islands whose maritime settlers did not bring dogs or where the dogs died out after original settlement, notably the Mariana Islands, Palau and most of the Caroline Islands (with exceptions such as Fais Island and Nukuoro), the Marshall Islands, the Gilbert Islands, New Caledonia, Vanuatu, Tonga, the Marquesas, Mangaia in the Cook Islands, Rapa Iti in French Polynesia, Easter Island, the Chatham Islands, and Pitcairn Island (settled by the Bounty mutineers, who killed off their dogs to escape discovery by passing ships). Dogs were introduced to Antarctica as sled dogs, but beginning in December 1993 they were outlawed by the Protocol on Environmental Protection to the Antarctic Treaty, an international agreement, due to the possible risk of spreading infections. Roles with humans The domesticated dog originated as a predator and scavenger. They inherited complex behaviors, such as bite inhibition, from their wolf ancestors, which would have been pack hunters with complex body language. These sophisticated forms of social cognition and communication may account for dogs' trainability, playfulness, and ability to fit into human households and social situations, and probably also their co-existence with early human hunter-gatherers. Dogs perform many roles for people, such as hunting, herding, pulling loads, protection, assisting police and the military, companionship, and aiding disabled individuals. These roles in human society have earned them the nickname "man's best friend" in the Western world. In some cultures, however, dogs are also a source of meat. Pets The keeping of dogs as companions, particularly by elites, has a long history. Pet-dog populations grew significantly after World War II as suburbanization increased. Since the 1980s, there have been changes in the pet dog's functions, such as the increased role of dogs in the emotional support of their human guardians. Within the second half of the 20th century, more and more dog owners considered their animal to be a part of the family. This major social status shift allowed the dog to conform to social expectations of personality and behavior. Another shift has been the broadening of the concepts of family and the home to include dogs-as-dogs within everyday routines and practices. Products such as dog-training books, classes, and television programs target dog owners. Some dog-trainers have promoted a dominance model of dog-human relationships. However, the idea of the "alpha dog" trying to be dominant is based on a controversial theory about wolf packs. It has been disputed that "trying to achieve status" is characteristic of dog-human interactions. Human family members have increased participation in activities in which the dog is an integral partner, such as dog dancing and dog yoga. According to statistics published by the American Pet Products Manufacturers Association in the National Pet Owner Survey in 2009–2010, an estimated 77.5 million people in the United States have pet dogs. 
The source shows that nearly 40% of American households own at least one dog, of which 67% own just one dog, 25% own two dogs, and nearly 9% own more than two dogs. The data also shows an equal number of male and female pet dogs; less than one-fifth of the owned dogs come from shelters. Workers In addition to dogs' role as companion animals, dogs have been bred for herding livestock (such as collies and sheepdogs); for hunting; for rodent control (such as terriers); as search and rescue dogs; as detection dogs (such as those trained to detect illicit drugs or chemical weapons); as homeguard dogs; as police dogs (sometimes nicknamed "K-9"); as welfare-purpose dogs; as dogs who assist fishermen retrieve their nets; and as dogs that pull loads (such as sled dogs). In 1957, the dog Laika became one of the first animals to be launched into Earth orbit aboard the Soviets's Sputnik 2; Laika died during the flight from overheating. Various kinds of service dogs and assistance dogs, including guide dogs, hearing dogs, mobility assistance dogs, and psychiatric service dogs, assist individuals with disabilities. A study of 29 dogs found that 9 dogs owned by people with epilepsy were reported to exhibit attention-getting behavior to their handler 30 seconds to 45 minutes prior to an impending seizure; there was no significant correlation between the patients' demographics, health, or attitude towards their pets. Shows and sports Dogs compete in breed-conformation shows and dog sports (including racing, sledding, and agility competitions). In dog shows, also referred to as "breed shows", a judge familiar with the specific dog breed evaluates individual purebred dogs for conformity with their established breed type as described in a breed standard. Weight pulling, a dog sport involving pulling weight, has been criticized for promoting doping and for its risk of injury. Dogs as food Humans have consumed dog meat going back at least 14,000 years. It's unknown to what extent prehistoric dogs were consumed and bred for meat. For centuries, the practice was prevalent in Southeast Asia, East Asia, Africa, and Oceania before cultural changes triggered by the spread of religions resulted in dog meat consumption declining and becoming more taboo. Switzerland, Polynesia, and pre-Columbian Mexico historically consumed dog meat. Some Native American dogs, like the Peruvian Hairless Dog and Xoloitzcuintle, were raised to be sacrificed and eaten. Han Chinese traditionally ate dogs. Consumption of dog meat declined but did not end during the Sui dynasty (581–618) and Tang dynasty (618–907) due in part to the spread of Buddhism and the upper class rejecting the practice. Dog consumption was rare in India, Iran, and Europe. Eating dog meat is a social taboo in most parts of the world, though some still consume it in modern times. It is still consumed in some East Asian countries, including China, Vietnam, Korea, Indonesia, and the Philippines. An estimated 30 million dogs are killed and consumed in Asia every year. China is the world's largest consumer of dogs, with an estimated 10 to 20 million dogs killed every year for human consumption. In Vietnam, about 5 million dogs are slaughtered annually. In 2024, China, Singapore, and Thailand placed a ban on the consumption of dogs within their borders. In some parts of Poland and Central Asia, dog fat is reportedly believed to be beneficial for the lungs. 
Proponents of eating dog meat have argued that placing a distinction between livestock and dogs is Western hypocrisy and that there is no difference in eating different animals' meat. There is a long history of dog meat consumption in South Korea, but the practice has fallen out of favor. A 2017 survey found that under 40% of participants supported a ban on the distribution and consumption of dog meat. This increased to over 50% in 2020, suggesting changing attitudes, particularly among younger individuals. In 2018, the South Korean government passed a bill banning restaurants that sell dog meat from doing so during that year's Winter Olympics. On 9 January 2024, the South Korean parliament passed a law banning the distribution and sale of dog meat. It will take effect in 2027, with plans to assist dog farmers in transitioning to other products. The primary type of dog raised for meat in South Korea has been the Nureongi. In North Korea where meat is scarce, eating dog is a common and accepted practice, officially promoted by the government. Health risks In 2018, the World Health Organization (WHO) reported that 59,000 people died globally from rabies, with 59.6% of the deaths in Asia and 36.4% in Africa. Rabies is a disease for which dogs are the most significant vector. Dog bites affect tens of millions of people globally each year. The primary victims of dog bite incidents are children. They are more likely to sustain more serious injuries from bites, which can lead to death. Sharp claws can lacerate flesh and cause serious infections. In the United States, cats and dogs are a factor in more than 86,000 falls each year. It has been estimated that around 2% of dog-related injuries treated in U.K. hospitals are domestic accidents. The same study concluded that dog-associated road accidents involving injuries more commonly involve two-wheeled vehicles. Some countries and cities have also banned or restricted certain dog breeds, usually for safety concerns. Toxocara canis (dog roundworm) eggs in dog feces can cause toxocariasis. It is estimated that nearly 14% of people in the United States are infected with Toxocara; about 10,000 cases are reported each year. Untreated toxocariasis can cause retinal damage and decreased vision. Dog feces can also contain hookworms that cause cutaneous larva migrans in humans. Health benefits The scientific evidence is mixed as to whether a dog's companionship can enhance human physical and psychological well-being. Studies suggest that there are benefits to physical health and psychological well-being, but they have been criticized for being "poorly controlled". One study states that "the health of elderly people is related to their health habits and social supports but not to their ownership of, or attachment to, a companion animal". Earlier studies have shown that pet-dog or -cat guardians make fewer hospital visits and are less likely to be on medication for heart problems and sleeping difficulties than non-guardians. People with pet dogs took considerably more physical exercise than those with cats or those without pets; these effects are relatively long-term. Pet guardianship has also been associated with increased survival in cases of coronary artery disease. Human guardians are significantly less likely to die within one year of an acute myocardial infarction than those who do not own dogs. Studies have found a small to moderate correlation between dog-ownership and increased adult physical-activity levels. 
A 2005 paper by the British Medical Journal states: Recent research has failed to support earlier findings that pet ownership is associated with a reduced risk of cardiovascular disease, a reduced use of general practitioner services, or any psychological or physical benefits on health for community dwelling older people. Research has, however, pointed to significantly less absenteeism from school through sickness among children who live with pets. Health benefits of dogs can result from contact with dogs in general, not solely from having dogs as pets. For example, when in a pet dog's presence, people show reductions in cardiovascular, behavioral, and psychological indicators of anxiety and are exposed to immune-stimulating microorganisms, which can protect against allergies and autoimmune diseases (according to the hygiene hypothesis). Other benefits include dogs as social support. One study indicated that wheelchair-users experience more positive social interactions with strangers when accompanied by a dog than when they are not. In a 2015 study, it was found that having a pet made people more inclined to foster positive relationships with their neighbors. In one study, new guardians reported a significant reduction in minor health problems during the first month following pet acquisition, which was sustained through the 10-month study. Using dogs and other animals as a part of therapy dates back to the late-18th century, when animals were introduced into mental institutions to help socialize patients with mental disorders. Animal-assisted intervention research has shown that animal-assisted therapy with a dog can increase smiling and laughing among people with Alzheimer's disease. One study demonstrated that children with ADHD and conduct disorders who participated in an education program with dogs and other animals showed increased attendance, knowledge, and skill-objectives and decreased antisocial and violent behavior compared with those not in an animal-assisted program. Cultural importance Artworks have depicted dogs as symbols of guidance, protection, loyalty, fidelity, faithfulness, alertness, and love. In ancient Mesopotamia, from the Old Babylonian period until the Neo-Babylonian period, dogs were the symbol of Ninisina, the goddess of healing and medicine, and her worshippers frequently dedicated small models of seated dogs to her. In the Neo-Assyrian and Neo-Babylonian periods, dogs served as emblems of magical protection. In China, Korea, and Japan, dogs are viewed as kind protectors. In mythology, dogs often appear as pets or as watchdogs. Stories of dogs guarding the gates of the underworld recur throughout Indo-European mythologies and may originate from Proto-Indo-European traditions. In Greek mythology, Cerberus is a three-headed, dragon-tailed watchdog who guards the gates of Hades. Dogs also feature in association with the Greek goddess Hecate. In Norse mythology, a dog called Garmr guards Hel, a realm of the dead. In Persian mythology, two four-eyed dogs guard the Chinvat Bridge. In Welsh mythology, Cŵn Annwn guards Annwn. In Hindu mythology, Yama, the god of death, owns two watchdogs named Shyama and Sharvara, which each have four eyes—they are said to watch over the gates of Naraka. A black dog is considered to be the vahana (vehicle) of Bhairava (an incarnation of Shiva). In Christianity, dogs represent faithfulness. 
Within the Roman Catholic denomination specifically, the iconography of Saint Dominic includes a dog after the saint's mother dreamt of a dog springing from her womb and became pregnant shortly after that. As such, the Dominican Order (Ecclesiastical Latin: Domini canis) means "dog of the Lord" or "hound of the Lord". In Christian folklore, a church grim often takes the form of a black dog to guard Christian churches and their churchyards from sacrilege. Jewish law does not prohibit keeping dogs and other pets but requires Jews to feed dogs (and other animals that they own) before themselves and to make arrangements for feeding them before obtaining them. The view on dogs in Islam is mixed, with some schools of thought viewing them as unclean, although Khaled Abou El Fadl states that this view is based on "pre-Islamic Arab mythology" and "a tradition [...] falsely attributed to the Prophet". The Sunni Maliki school jurists disagree with the idea that dogs are unclean. Terminology Dog – the species (or subspecies) as a whole, also any male member of the same. Bitch – any female member of the species (or subspecies). Puppy or pup – a young member of the species (or subspecies) under 12 months old. Sire – the male parent of a litter. Dam – the female parent of a litter. Litter – all of the puppies resulting from a single whelping. Whelping – the act of a bitch giving birth. Whelps – puppies still dependent upon their dam. See Also Saint Guinefort
Biology and health sciences
Biology
null
4270546
https://en.wikipedia.org/wiki/Calocedrus%20decurrens
Calocedrus decurrens
Calocedrus decurrens, with the common names incense cedar and California incense cedar (syn. Libocedrus decurrens Torr.), is a species of coniferous tree native to western North America. It is the most widely known species in the genus, and is often simply called incense cedar without the regional qualifier. Description Calocedrus decurrens is a large tree. The largest known tree is located in Klamath National Forest, Siskiyou County, California. Specimens form a broad conic crown of spreading branches. The bark is orange-brown weathering grayish, smooth at first, becoming fissured and exfoliating in long strips on the lower trunk on old trees. Specimens can live to over 500 years old. The foliage is produced in flattened sprays with scale-like leaves; they are arranged in opposite decussate pairs, with the successive pairs closely then distantly spaced, so forming apparent whorls of four; the facial pairs are flat, with the lateral pairs folded over their bases. The leaves are bright green on both sides of the shoots, with only inconspicuous stomata. The foliage, when crushed, gives off an aroma somewhat akin to shoe-polish. The seed cones are pale green to yellow, with four (rarely six) scales arranged in opposite decussate pairs; the outer pair of scales each bears two winged seeds, the inner pair(s) usually being sterile and fused together in a flat plate. The cones turn orange to yellow-brown when mature about 8 months after pollination. Distribution The bulk of the tree's range is in the United States, from central-southwestern Oregon through most of California and the extreme west of Nevada, as well as a short distance into northwest Mexico in northern Baja California. Ecology At lower elevations, associated trees include oaks and ponderosa pine. Giant sequoia bears similarities to the species, but has sharp leaves. In the south–southwestern U.S., bushy junipers are sometimes confused with incense cedar. With its thick basal bark, the incense cedar is one of the most fire- and drought-tolerant plants in California. Although the tree is killed by hot, stand-replacing crown fire, it spreads rapidly after lower-intensity burns. This has given the incense cedar a competitive advantage over other species such as the bigcone Douglas-fir in recent years. Incense cedar is more shade tolerant than Douglas-fir, but not as much so as grand or white fir. It grows slowly when necessary to outlast competition. This tree is the preferred host of a wood wasp, Syntexis libocedrii, a species that lays its eggs in the smoldering wood immediately after a forest fire. The tree is also host to incense-cedar mistletoe (Phoradendron libocedri), a parasitic plant which can often be found hanging from its branches. Fire scars provide an entry point for Tyromyces amarus (pocket dry rot). Gymnosporangium rust disease afflicts the trees, but is rarely fatal. Numerous birds have been observed using Calocedrus decurrens for foraging during the winter. According to the United States Department of Agriculture, in areas of the Western Sierra Nevada in California, numerous species of birds are thought to use the incense cedar as a "foraging substrate" to obtain the food they need. 
Human impacts on these trees due to forest management practices have caused issues for many of these birds, threatening the use of the incense cedar as a foraging substrate. Uses The wood is soft and light, has a pleasant odor, and is generally resistant to rot. It has been used for external house siding, interior paneling, and to make moth-resistant hope chests. It was once the primary material for wooden pencils, because it is soft and tends to sharpen easily without forming splinters. Native Americans Indigenous peoples of California use the plant in traditional medicine, for basket making, hunting bows, and building materials, and to produce fire by friction. A Northern California tribe used branchlets to filter out sand from water when leaching toxins from acorn meal; foliage also served as a flavoring. The Maidu Concow tribe name for the plant is hö'-tä (Konkow language). Cultivation Calocedrus decurrens is cultivated as an ornamental tree for planting in gardens and parks. It is used in traditional, xeriscapic, native plant, and wildlife gardens; and also in designed natural landscaping and habitat restoration projects in California. It is valued for its columnar form and evergreen foliage textures. The tree is also grown in gardens and parks in cool summer climates, including the Pacific Northwest of the United States and British Columbia, eastern Great Britain, and continental Northern Europe. In these areas it can develop an especially narrow columnar crown, an unexplained consequence of the cooler climatic conditions that is rare in trees within its warm summer natural range in the California Floristic Province. Other cultivated species from the family Cupressaceae can have similar crown forms. Award of Garden Merit This plant has gained the Royal Horticultural Society's Award of Garden Merit, as has the cultivar 'Berrima Gold'. Essential oils Various species in the family Cupressaceae can be utilized for the creation of essential oils. Scientific studies have shown that these essential oils have "strong antimicrobial properties." Antimicrobial properties are those properties of a substance that lower the levels of microbes, such as bacteria and viruses. These antimicrobial properties could potentially be used for therapies in developing countries, although more testing and clinical trials should be done before such measures are implemented.
Biology and health sciences
Cupressaceae
Plants
1622030
https://en.wikipedia.org/wiki/Walker%20circulation
Walker circulation
The Walker circulation, also known as the Walker cell, is a conceptual model of the air flow in the tropics in the lower atmosphere (troposphere). According to this model, parcels of air follow a closed circulation in the zonal and vertical directions. This circulation, which is roughly consistent with observations, is caused by differences in heat distribution between ocean and land. In addition to motions in the zonal and vertical directions, the tropical atmosphere also has considerable motion in the meridional direction as part of, for example, the Hadley circulation. The Walker circulation is associated with the pressure gradient force that results from a high pressure system over the eastern Pacific Ocean and a low pressure system over Indonesia. The Walker circulation was discovered by Gilbert Walker; the term "Walker circulation" was coined in 1969 by the Norwegian-American meteorologist Jacob Bjerknes. Walker's methodology Gilbert Walker was an established applied mathematician at the University of Cambridge when he became director-general of observatories in India in 1904. While there, he studied the characteristics of the Indian Ocean monsoon, the failure of whose rains had brought severe famine to the country in 1899. Analyzing vast amounts of weather data from India and the rest of the world, over the next fifteen years he published the first descriptions of the great seesaw oscillation of atmospheric pressure between the Indian and Pacific Oceans, and its correlation to temperature and rainfall patterns across much of the Earth's tropical regions, including India. He also worked with the Indian Meteorological Department, especially in linking the monsoon with the Southern Oscillation phenomenon. He was made a Companion of the Order of the Star of India in 1911. Walker determined that the time scale of a year (used by many studying the atmosphere) was unsuitable because geospatial relationships could be entirely different depending on the season. 
Thus, Walker broke his temporal analysis into December–February, March–May, June–August, and September–November. Walker then selected a number of "centers of action", which included areas such as the Indian Peninsula. The centers were in the hearts of regions with either permanent or seasonal high and low pressures. He also added points for regions where rainfall, wind or temperature was an important control. He examined the relationships of the summer and winter values of pressure and rainfall, first focusing on summer and winter values, and later extending his work to the spring and autumn. He concludes that variations in temperature are generally governed by variations in pressure and rainfall. It had previously been suggested that sunspots could be the cause of the temperature variations, but Walker argued against this conclusion by showing monthly correlations of sunspots with temperature, winds, cloud cover, and rain that were inconsistent. Walker made it a point to publish all of his correlation findings, both of relationships found to be important as well as relationships that were found to be unimportant. He did this for the purpose of dissuading researchers from focusing on correlations that did not exist. Oceanic effects The Walker Circulations of the tropical Indian, Pacific, and Atlantic basins result in westerly surface winds in Northern Summer in the first basin and easterly winds in the second and third basins. As a result, the temperature structure of the three oceans display dramatic asymmetries. The equatorial Pacific and Atlantic both have cool surface temperatures in Northern Summer in the east, while cooler surface temperatures prevail only in the western Indian Ocean. These changes in surface temperature reflect changes in the depth of the thermocline. Changes in the Walker Circulation with time occur in conjunction with changes in surface temperature. Some of these changes are forced externally, such as the seasonal shift of the Sun into the Northern Hemisphere in summer. Other changes appear to be the result of coupled ocean-atmosphere feedback in which, for example, easterly winds cause the sea surface temperature to fall in the east, enhancing the zonal heat contrast and hence intensifying easterly winds across the basin. These enhanced easterlies induce more equatorial upwelling and raise the thermocline in the east, amplifying the initial cooling by the southerlies. This coupled ocean-atmosphere feedback was originally proposed by Bjerknes. From an oceanographic point of view, the equatorial cold tongue is caused by easterly winds. Were the earth climate symmetric about the equator, cross-equatorial wind would vanish, and the cold tongue would be much weaker and have a very different zonal structure than is observed today. The Walker cell is indirectly related to upwelling off the coasts of Peru and Ecuador. This brings nutrient-rich cold water to the surface, increasing fishing stocks. El Niño–Southern Oscillation The Walker circulation is caused by the pressure gradient force that results from a high pressure system over the eastern Pacific Ocean and a low pressure system over Indonesia. The Walker circulation causes an upwelling of cold deep sea water, thus cooling the sea surface. El Niño results when this circulation decreases or stops, as the impaired or inhibited circulation causes the ocean surface to warm to above average temperatures. 
A markedly increased Walker circulation causes a La Niña by intensifying the upwelling of cold deep sea water, which cools the sea surface to below-average temperatures. During non-El Niño conditions, the Walker circulation is seen at the surface as easterly trade winds that move water and air warmed by the sun toward the west. This also creates ocean upwelling off the coasts of Peru and Ecuador and brings nutrient-rich cold water to the surface, increasing fishing stocks. The western side of the equatorial Pacific is characterized by warm, wet, low-pressure weather as the collected moisture is dumped in the form of typhoons and thunderstorms. The sea surface is somewhat higher in the western Pacific as a result of this motion. A scientific study published in May 2006 in the journal Nature indicates that the Walker circulation has been slowing since the mid-19th century. The authors argue that global warming is a likely causative factor in the weakening of the wind pattern. However, a 2011 study from The Twentieth Century Reanalysis Project shows that, aside from El Niño–Southern Oscillation cycles, the overall speed and direction of the Walker circulation remained steady between 1871 and 2008.
Physical sciences
Atmospheric circulation
null
1624795
https://en.wikipedia.org/wiki/Weak%20hypercharge
Weak hypercharge
In the Standard Model of electroweak interactions of particle physics, the weak hypercharge is a quantum number relating the electric charge and the third component of weak isospin. It is frequently denoted Y_W and corresponds to the gauge symmetry U(1). It is conserved (only terms that are overall weak-hypercharge neutral are allowed in the Lagrangian). However, one of the interactions is with the Higgs field. Since the Higgs field vacuum expectation value is nonzero, particles interact with this field all the time even in vacuum. This changes their weak hypercharge (and weak isospin T_3). Only a specific combination of them, Q = T_3 + Y_W/2 (electric charge), is conserved. Mathematically, weak hypercharge appears similar to the Gell-Mann–Nishijima formula for the hypercharge of strong interactions (which is not conserved in weak interactions and is zero for leptons). In the electroweak theory SU(2) transformations commute with U(1) transformations by definition and therefore U(1) charges for the elements of the SU(2) doublet (for example left-handed up and down quarks) have to be equal. This is why U(1) cannot be identified with U(1)_em and weak hypercharge has to be introduced. Weak hypercharge was first introduced by Sheldon Glashow in 1961. Definition Weak hypercharge is the generator of the U(1) component of the electroweak gauge group, and its associated quantum field B mixes with the W^3 electroweak quantum field to produce the observed Z gauge boson and the photon of quantum electrodynamics. The weak hypercharge satisfies the relation Q = T_3 + Y_W/2, where Q is the electric charge (in elementary charge units) and T_3 is the third component of weak isospin (the SU(2) component). Rearranging, the weak hypercharge can be explicitly defined as: Y_W = 2(Q − T_3). This gives, for example, Y_W = +1/3 for left-handed quarks, −1 for left-handed leptons, +4/3 for right-handed up-type quarks, −2/3 for right-handed down-type quarks, and −2 for right-handed charged leptons, where "left"- and "right"-handed here are left and right chirality, respectively (distinct from helicity). The weak hypercharge for an anti-fermion is the opposite of that of the corresponding fermion because the electric charge and the third component of the weak isospin reverse sign under charge conjugation. The sum of −T_3 and +Q is zero for each of the gauge bosons; consequently, all the electroweak gauge bosons have Y_W = 0. Hypercharge assignments in the Standard Model are determined up to a twofold ambiguity by requiring cancellation of all anomalies. Alternative half-scale For convenience, weak hypercharge is often represented at half-scale, so that Y = Y_W/2 = Q − T_3, which is equal to just the average electric charge of the particles in the isospin multiplet. Baryon and lepton number Weak hypercharge is related to baryon number minus lepton number via X + 2Y_W = 5(B − L), where X is a conserved quantum number in GUT. Since weak hypercharge is always conserved within the Standard Model and most extensions, this implies that baryon number minus lepton number is also always conserved. Neutron decay n → p + e− + ν̄e Hence neutron decay conserves baryon number B and lepton number L separately, so also the difference B − L is conserved. Proton decay Proton decay, for example p+ → e+ + π0, is a prediction of many grand unification theories. Hence this hypothetical proton decay would conserve B − L, even though it would individually violate conservation of both lepton number and baryon number.
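The relation above lends itself to a quick numerical check. The following minimal sketch (not part of the original article) applies Y_W = 2(Q − T_3) to a few standard fermion assignments and verifies consistency with Q = T_3 + Y_W/2; the particle list and quantum numbers are the usual textbook values, used here purely for illustration.

```python
from fractions import Fraction as F

# Standard assignments: (name, electric charge Q in units of e, weak isospin T3)
fermions = [
    ("left-handed up quark",   F(2, 3),  F(1, 2)),
    ("left-handed down quark", F(-1, 3), F(-1, 2)),
    ("left-handed electron",   F(-1),    F(-1, 2)),
    ("left-handed neutrino",   F(0),     F(1, 2)),
    ("right-handed up quark",  F(2, 3),  F(0)),
    ("right-handed electron",  F(-1),    F(0)),
]

for name, Q, T3 in fermions:
    Y_W = 2 * (Q - T3)           # weak hypercharge, full-scale convention
    assert Q == T3 + Y_W / 2     # consistency with Q = T3 + Y_W/2
    print(f"{name:24s}  Q={Q!s:>4}  T3={T3!s:>4}  Y_W={Y_W!s:>4}")
```

Note that both members of each left-handed doublet (the up and down quarks, or the neutrino and electron) come out with the same Y_W, illustrating the point made above that all elements of an SU(2) doublet must carry equal U(1) charge.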
Physical sciences
Quantum numbers
Physics
1624916
https://en.wikipedia.org/wiki/Pachycephalosauria
Pachycephalosauria
Pachycephalosauria (; from Greek παχυκεφαλόσαυρος for 'thick headed lizards') is a clade of ornithischian dinosaurs. Along with Ceratopsia, it makes up the clade Marginocephalia. With the exception of two species, most pachycephalosaurs lived during the Late Cretaceous Period, dating between about 85.8 and 66 million years ago. They are exclusive to the Northern Hemisphere, all of them being found in North America and Asia. They were all bipedal, herbivorous/omnivorous animals with thick skulls. Skulls can be domed, flat, or wedge-shaped depending on the species, and are all heavily ossified. The domes were often surrounded by nodes and/or spikes. Partial skeletons have been found of several pachycephalosaur species, but to date no complete skeletons have been discovered. Often isolated skull fragments are the only bones that are found. Candidates for the earliest-known pachycephalosaur include Ferganocephale adenticulatum from Middle Jurassic Period strata of Kyrgyzstan and Stenopelix valdensis from Early Cretaceous strata of Germany, although R.M. Sullivan has doubted that either of these species are pachycephalosaurs. Albalophosaurus from the Early Cretaceous strata of Japan might also represent a basal pachycephalosaur. The oldest known definitive pachycephalosaur is Sinocephale bexelli from the Late Cretaceous of China. In 2017, a phylogenetic analysis conducted by Han and colleagues identified Stenopelix as a member of the Ceratopsia. Description Pachycephalosaurs were bipedal ornithischians characterized by their thickened skulls. They had a bulky torso with an expanded gut cavity and broad hips, short forelimbs, long legs, a short, thick neck, and a heavy tail. Large orbits and a large optic nerve point to pachycephalosaurs having good vision, and uncharacteristically large olfactory lobes indicate that they had a good sense of smell relative to other dinosaurs. They were fairly small dinosaurs, with most falling in the range of in length and the largest, Pachycephalosaurus wyomingensis, estimated to measure long and weigh . The characteristic skull of pachycephalosaurs is a result of the fusion and thickening of the frontals and parietals, accompanied by the closing of the supratemporal fenestra. In some species this takes the form of a raised dome; in others, the skull is flat or wedge-shaped. While the flat-headed pachycephalosaurs are traditionally regarded as distinct species or even families, they may represent juveniles of dome-headed adults. All display highly ornamented jugals, squamosals, and postorbitals in the form of blunt horns and nodes. Many species are only known from skull fragments, and a complete pachycephalosaur skeleton is yet to be found. Members of Pachycephalosauria characteristically have an unusually domed head reminiscent of the earlier Protopyknosia in an example of convergent evolution. Classification Most pachycephalosaurid remains are not complete, usually consisting of portions of the frontoparietal bone that forms the distinctive dome. This can make taxonomic identification a difficult task, as the classification of genera and species within Pachycephalosauria relies almost entirely on cranial characteristics. Consequently, improper species have historically been appointed to the clade. 
For instance, Majungatholus, once thought to be a pachycephalosaur, is now recognized as a specimen of the abelisaurid theropod Majungasaurus, and Yaverlandia, another dinosaur initially described as a pachycephalosaurid, has also since been reclassified as a coelurosaur (Naish in ). Further complicating matters are the diverse interpretations of ontogenetic and sexual features in pachycephalosaurs. A 2009 paper proposed that Dracorex and Stygimoloch were just early growth stages of Pachycephalosaurus, rather than distinct genera. A 2020 reworking of Cerapoda by Dieudonné et al. recovered the animals traditionally considered 'heterodontosaurids' as a basal grouping within Pachycephalosauria, paraphyletic with respect to the traditional, dome-headed pachycephalosaurs. The same conclusion had previously been reached by George Olshevsky in 1991, who classified heterodontosaurids as basal pachycephalosaurs on the basis of perceived cranial kinesis, the presence of fanglike premaxillary teeth, and the prominent diastema present in many genera. Taxonomy The Pachycephalosauria was first named as a suborder of the order Ornithischia by . They included within it only one family, the Pachycephalosauridae. Later researchers, such as Michael Benton, have ranked it as an infraorder of a suborder Cerapoda, which unites the ceratopsians and ornithopods. In 2006, Robert Sullivan published a re-evaluation of pachycephalosaur taxonomy. Sullivan considered attempts by Maryańska and Osmólska to restrict the definition of Pachycephalosauria redundant with their Pachycephalosauridae, since they were diagnosed by the same anatomical characters. Sullivan also rejected attempts by , in his phylogenetic studies, to re-define Pachycephalosauridae to include only "dome-skulled" species (including Stegoceras and Pachycephalosaurus), while leaving more "basal" species outside that family in Pachycephalosauria. Therefore, Sullivan's use of Pachycephalosauridae is equivalent to Sereno and Benton's use of Pachycephalosauria. Sullivan diagnosed the Pachycephalosauridae based only on characters of the skull, with the defining character being a dome-shaped frontoparietal skull bone. According to Sullivan, the absence of this feature in some species assumed to be primitive led to the split in classification between domed and non-domed pachycephalosaurs; however, the discovery of more advanced and possibly juvenile pachycephalosaurs with flat skulls (such as Dracorex hogwartsia) shows this distinction to be incorrect. Sullivan also pointed out that the original diagnosis of Pachycephalosauridae centered around "flat to dome-like" skulls, so the flat-headed forms should be included in the family. In a paper published in 2003, Thomas E. Williamson and Thomas D. Carr discovered a clade of the Pachycephalosauridae that was a sister taxon to the genus Stegoceras, made up of "all other dome-headed pachycephalosaurians"; this was referred to as Pachycephalosaurinae. Phylogeny Phylogenetic analyses by many authors have found Pachycephalosauria to be a group with Stegoceras as one of the earliest fully-domed members, with flat-headed and potentially juvenile taxa like Homalocephale and Goyocephale either just outside or just within the clade of it and more derived pachycephalosaurs. These studies began with the phylogenetic work of Paul Sereno, which has been modified in many iterations to include newer taxa and additional characters. The version of the analysis published by Woodruff and colleagues in 2023 is below.
Below is a cladogram published by Dieudonné and colleagues (2020) which controversially found heterodontosauridae to be paraphyletic with respect to pachycephalosauria. This analysis was proposed as a hypothesis for the complete lack of Jurassic and Early Cretaceous pachycephalosaur fossils, even though they should have existed if the modern understanding of ornithischian phylogeny is correct. However, this hypothesis has not been widely accepted by other paleontologists. Paleobiology Feeding The small size of most pachycephalosaur species and lack of skeletal adaptation indicates that they were not climbers and primarily ate food close to the ground. Mallon et al. (2013) examined herbivore coexistence on the island continent of Laramidia, during the Late Cretaceous and concluded that pachycephalosaurids were generally restricted to feeding on vegetation at, or below, the height of 1 meter. They exhibit heterodonty, having different tooth morphology between the premaxillary teeth and maxillary teeth. Front teeth are small and peg-like with an ovular cross section and were most likely used for grabbing food. In some species, the last premaxillary tooth was enlarged and canine-like. Back teeth are small and triangular with denticles on the front and back of the crown, used for mouth processing. In species in which the dentary has been found, mandibular teeth are similar in size and shape to those in the upper jaw. Wear patterns on the teeth vary by species, indicating a range of food preferences which could include seeds, stems, leaves, fruits, and possibly insects. A very wide rib cage and large gut cavity extending all the way to the base of the tail suggests the use of fermentation to digest food. Head-butting behavior The adaptive significance of the skull dome has been heavily debated. The popular hypothesis among the general public that the skull was used in head-butting, as sort of a dinosaurian battering ram, was first proposed by . This view was popularized in the 1956 science fiction story "A Gun for Dinosaur" by L. Sprague de Camp. Many paleontologists have since argued for the head-butting hypothesis, including and . In this hypothesis, pachycephalosaurs rammed each other head-on, as do modern-day bighorn sheep and musk oxen. Anatomical evidence for combative behavior includes vertebral articulations providing spinal rigidity, and the shape of the back indicating strong neck musculature. It has been suggested that a pachycephalosaur could make its head, neck, and body horizontally straight, in order to transmit stress during ramming. However, in no known dinosaur could the head, neck, and body be oriented in such a position. Instead, the cervical and anterior dorsal vertebrae of pachycephalosaurs show that the neck was carried in an "S"- or U-shaped curve. Also, the rounded shape of the skull would lessen the contacted surface area during head-butting, resulting in glancing blows. Other possibilities include flank-butting, defense against predators, or both. The relatively wide build of pachycephalosaurs (which would protect vital internal organs from harm during flank-butting) and the squamosal horns of the Stygimoloch (which would have been used to great effect during flank-butting) add credence to the flank-butting hypothesis. A histological study conducted by argued against the battering ram hypothesis. 
They argued that the dome was "an ephemeral ontogenetic stage", the spongy bone structure could not sustain the blows of combat, and the radial pattern was simply an effect of rapid growth. Later biomechanical analyses by and concluded, however, that the domes could withstand combat stresses. argued that the growth patterns discussed by Goodwin and Horner are not inconsistent with head-butting behavior. instead argued that the dome functioned for species recognition. There is evidence that the dome had some form of external covering, and it is reasonable to consider the dome may have been brightly covered, or subject to change color seasonally. Due to the nature of the fossil record, however, it cannot be observed whether or not color played a role in dome function. argued that species recognition is an unlikely evolutionary cause for the dome, because dome forms are not notably different between species. Because of this general similarity, several genera of Pachycephalosauridae have sometimes been incorrectly lumped together. This is unlike the case in ceratopsians and hadrosaurids, which had much more distinct cranial ornamentation. Longrich et al. argued that instead the dome had a mechanical function, such as combat, one which was important enough to justify the resource investment. Dome paleopathology studied cranial pathologies among the Pachycephalosauridae and found that 22% of all domes examined had lesions that are consistent with osteomyelitis, an infection of the bone resulting from penetrating trauma, or trauma to the tissue overlying the skull leading to an infection of the bone tissue. This high rate of pathology lends more support to the hypothesis that pachycephalosaurid domes were employed in intra-specific combat. The frequency of trauma was comparable across the different genera in this family, despite the fact that these genera vary with respect to the size and architecture of their domes, and fact that they existed during varying geologic periods. These findings were in stark contrast with the results from analysis of the relatively flat-headed pachycephalosaurids, where there was an absence of pathology. This would support the hypothesis that these individuals represent either females or juveniles, where intra-specific combat behavior is not expected. Histological examination reveals that pachycephalosaurid domes are composed of a unique form of fibrolamellar bone which contains fibroblasts that play a critical role in wound healing, and are capable of rapidly depositing bone during remodeling. Peterson et al. (2013) concluded that, taken together, the frequency of lesion distribution and the bone structure of frontoparietal domes lend strong support to the hypothesis that pachycephalosaurids used their unique cranial structures for agonistic behavior. Paleoecology The Asian and North American species of pachycephalosaurs lived in markedly different environments. Asian specimens are normally more intact, indicating they were not transported far from their place of death before fossilization. They likely lived in a large desert region in central Asia with a hot and arid climate. North American specimens are typically found in rocks that were formed by erosion from the Rocky Mountains. Specimens are far less intact; usually only skull caps are recovered, and those found regularly exhibit surface exfoliation and other signs that they were transported long distances by water before fossilization. 
It is assumed that they lived in the mountains in a temperate climate and were carried by erosion after death to their final resting place. Distribution Pachycephalosaurs lived exclusively in Laurasia, being found in western North America and central Asia. Pachycephalosaurs originated in Asia and had two major dispersal events, resulting in the two separate waves of pachycephalosaur evolution observed in Asia. The first, occurring before the late Santonian or early Campanian, involved a migration from Asia to North America, most likely by way of the Bering Land Bridge. This migration was by a common ancestor of Stygimoloch, Stegoceras, Tylocephale, Prenocephale, and Pachycephalosaurus. The second event occurred before the middle Campanian, and involved a migration back into Asia from North America by a common ancestor of Prenocephale and Tylocephale. Two species originally reported to be pachycephalosaurs discovered outside this range, Yaverlandia bitholus of England and Majungatholus atopus of Madagascar, have recently been shown to actually be theropods.
Biology and health sciences
Ornitischians
Animals
8932473
https://en.wikipedia.org/wiki/Hypoesthesia
Hypoesthesia
Hypoesthesia is a common side effect of various medical conditions that manifests as a reduced sense of touch or sensation, or a partial loss of sensitivity to sensory stimuli. In everyday speech it is generally referred to as numbness. Hypoesthesia primarily results from damage to nerves and from blockages in blood vessels, which cause ischemic damage to the tissues they supply. This damage is detectable through the use of various imaging studies. Such damage can be caused by a variety of illnesses and diseases. A few examples of the most common illnesses and diseases that can cause hypoesthesia as a side effect are as follows: Decompression sickness Trigeminal schwannoma Rhombencephalitis Intradural extramedullary tuberculoma of the spinal cord Cutaneous sensory disorder Beriberi Diseases Decompression sickness Decompression sickness occurs during a rapid ascent of 20 or more feet, typically from underwater. It may express itself in a variety of ways, including hypoesthesia. Hypoesthesia results from air bubbles that form in the blood, which prevent oxygenation of downstream tissue. In cases of decompression sickness, treatment to relieve hypoesthesia symptoms is quick and efficient. Hyperbaric oxygen therapy, which involves breathing oxygen at a level of 100%, is used to maintain long-term stability. Trigeminal schwannoma Trigeminal schwannoma is a condition in which a tumor forms on the trigeminal nerve (also known as cranial nerve five). This prevents sensation in the area associated with the nerve. In the case of the trigeminal nerve, this is the face, meaning hypoesthesia of the face is experienced. Excision is the only effective treatment of trigeminal schwannoma, though this may not treat the associated hypoesthesia if damage has already occurred. Following surgery, many patients still experienced hypoesthesia, and some even experienced increased effects. Rhombencephalitis Rhombencephalitis involves bacterial invasion of the brainstem and trigeminal nerve, and has a wide variety of symptoms that may vary between patients. Similarly to the trigeminal schwannoma mentioned above, this can result in facial hypoesthesia. Rhombencephalitis may also result in hypoesthesia of the V1 through V3 dermatomes. The main treatment option for this infection is antibiotics, such as ampicillin, to remove the bacteria. Intradural extramedullary tuberculoma of the spinal cord (IETSC) IETSC is a form of tuberculosis affecting the spinal cord that involves hypoesthesia of all parts of the body associated with the affected spinal nerves. The inability to convey information from the body to the central nervous system causes a total lack of feeling in the associated regions. Cutaneous sensory disorder Hypoesthesia is one of the negative sensory symptoms associated with cutaneous sensory disorder (CSD). In this condition, patients have abnormal disagreeable skin sensations that can be due to increased nervous system activity (stinging, itching or burning) or decreased nervous system activity (numbness or hypoesthesia). Beriberi Hypoesthesia originating in (and extending centrally from) the feet, fingers, navel, and/or lips is one of the common symptoms of beriberi, which is a set of symptoms caused by thiamine deficiency. Diagnosis A patient experiencing symptoms of hypoesthesia is often asked a series of questions to pinpoint the location and severity of the sensory disruption.
A physical examination may follow, during which a doctor may tap lightly on the skin to determine how much feeling is present. Depending upon the location of the symptoms, a doctor may recommend tests to determine the underlying cause of the hypoesthesia. These tests include imaging such as computerized axial tomography (CT) and magnetic resonance imaging (MRI) scans, nerve conduction studies that measure electrical impulses passing through the nerves in search of nerve damage, and various reflex tests. An example of a reflex test would be the patellar reflex test. Treatment Treatment of hypoesthesia is aimed at targeting the broader disease or illness that has caused the loss of sensation.
Biology and health sciences
Symptoms and signs
Health
8934260
https://en.wikipedia.org/wiki/VirtualBox
VirtualBox
Oracle VirtualBox (formerly Sun VirtualBox, Sun xVM VirtualBox and InnoTek VirtualBox) is a hosted hypervisor for x86 virtualization developed by Oracle Corporation. VirtualBox was originally created by InnoTek Systemberatung GmbH, which was acquired by Sun Microsystems in 2008, which was in turn acquired by Oracle in 2010. VirtualBox may be installed on Microsoft Windows, macOS, Linux, Solaris and OpenSolaris. There are also ports to FreeBSD and Genode. It supports the creation and management of guest virtual machines running Windows, Linux, BSD, OS/2, Solaris, Haiku, and OSx86, as well as limited virtualization of macOS guests on Apple hardware. For some guest operating systems, a "Guest Additions" package of device drivers and system applications is available, which typically improves performance, especially that of graphics, and allows changing the resolution of the guest OS automatically when the window of the virtual machine on the host OS is resized. Released under the terms of the GNU General Public License and, optionally, the CDDL for most files of the source distribution, VirtualBox is free and open-source software, though the Extension Pack is proprietary software, free of charge only to personal users. VirtualBox was later relicensed under GPLv3, with linking exceptions for the CDDL and other GPL-incompatible licenses. History VirtualBox was first offered by InnoTek Systemberatung GmbH, a German company based in Weinstadt, under a proprietary software license, making one version of the product available at no cost for personal or evaluation use, subject to the VirtualBox Personal Use and Evaluation License (PUEL). In January 2007, based on counsel by LiSoG, InnoTek released VirtualBox Open Source Edition (OSE) as free and open-source software, subject to the requirements of the GNU General Public License (GPL), version 2. InnoTek also contributed to the development of OS/2 and Linux support in virtualization and OS/2 ports of products from Connectix which were later acquired by Microsoft. Specifically, InnoTek developed the "additions" code in both Windows Virtual PC and Microsoft Virtual Server, which enables various host–guest OS interactions like shared clipboards or dynamic viewport resizing. Sun Microsystems acquired InnoTek in February 2008. Following the acquisition of Sun Microsystems by Oracle Corporation in January 2010, the product was re-branded as "Oracle VM VirtualBox". In December 2019, VirtualBox removed support for software-based virtualization; it now performs hardware-assisted virtualization exclusively. Release history Licensing The core package, since version 4 in December 2010, is free software under GNU General Public License version 2 (GPLv2). A supplementary package, under a proprietary license, adds support for USB 2.0 and 3.0 devices, Remote Desktop Protocol (RDP), disk encryption, NVMe, and Preboot Execution Environment (PXE). This package is called the "Oracle VM VirtualBox Extension Pack". It includes closed-source components, so it is not source-available. The license is called the Personal Use and Evaluation License (PUEL). It allows gratis access for personal use, educational use, and evaluation. Since VirtualBox version 5.1.30, Oracle defines personal use as installation on a single computer for non-commercial purposes. Prior to version 4, there were two different packages of the VirtualBox software. The full package was offered gratis under the PUEL, with licenses for other commercial deployment purchasable from Oracle.
A second package called the VirtualBox Open Source Edition (OSE) was released under GPLv2. This removed the same proprietary components not available under GPLv2. , building the BIOS for VirtualBox requires the Open Watcom compiler, which is released under the Sybase Open Watcom Public License. The Open Source Initiative has approved this as "Open Source" but the Free Software Foundation and the Debian Free Software Guidelines do not consider it "free". VirtualBox has experimental support for macOS guests. However, macOS's end user license agreement does not permit running on non-Apple hardware. The operating system enforces this by calling the Apple System Management Controller (SMC), to verify the hardware's authenticity. All Apple machines have an SMC. Virtualization Users of VirtualBox can load multiple guest OSes under a single host operating-system (host OS). Each guest can be started, paused and stopped independently within its own virtual machine (VM). The user can independently configure each VM and run it under a choice of software-based virtualization or hardware assisted virtualization if the underlying host hardware supports this. The host OS and guest OSs and applications can communicate with each other through a number of mechanisms including a common clipboard and a virtualized network facility. Guest VMs can also directly communicate with each other if configured to do so. Hardware-assisted VirtualBox supports both Intel's VT-x and AMD's AMD-V hardware-assisted virtualization. Making use of these facilities, VirtualBox can run each guest VM in its own separate address-space; the guest OS ring 0 code runs on the host at ring 0 in VMX non-root mode rather than in ring 1. Starting with version 6.1, VirtualBox only supports this method. Until then, VirtualBox specifically supported some guests (including 64-bit guests, SMP guests and certain proprietary OSs) only on hosts with hardware-assisted virtualization. Devices and peripherals VirtualBox emulates hard disks in three formats: the native VDI (Virtual Disk Image), VMware's VMDK, and Microsoft's VHD. It thus supports disks created by other hypervisor software. VirtualBox can also connect to iSCSI targets and to raw partitions on the host, using either as virtual hard disks. VirtualBox emulates IDE (PIIX4 and ICH6 controllers), SCSI, SATA (ICH8M controller), and SAS controllers, to which hard drives can be attached. VirtualBox has supported Open Virtualization Format (OVF) since version 2.2.0 (April 2009). Both ISO images and physical devices connected to the host can be mounted as CD or DVD drives. VirtualBox supports running operating systems from live CDs and DVDs. By default, VirtualBox provides graphics support through a custom virtual graphics-card that is VBE or UEFI GOP compatible. The Guest Additions for Windows, Linux, Solaris, OpenSolaris, and OS/2 guests include a special video-driver that increases video performance and includes additional features, such as automatically adjusting the guest resolution when resizing the VM window and desktop composition via virtualized WDDM drivers. 
For an Ethernet network adapter, VirtualBox virtualizes these Network Interface Cards: AMD PCnet PCI II (Am79C970A) AMD PCnet-Fast III (Am79C973) Intel Pro/1000 MT Desktop (82540EM) Intel Pro/1000 MT Server (82545EM) Intel Pro/1000 T Server (82543GC) Paravirtualized network adapter (virtio-net) The emulated network cards allow most guest OSs to run without the need to find and install drivers for networking hardware as they are shipped as part of the guest OS. A special paravirtualized network adapter is also available, which improves network performance by eliminating the need to match a specific hardware interface, but requires special driver support in the guest. (Many distributions of Linux ship with this driver included.) By default, VirtualBox uses NAT through which Internet software for end-users such as Firefox or ssh can operate. Bridged networking via a host network adapter or virtual networks between guests can also be configured. Up to 36 network adapters can be attached simultaneously, but only four are configurable through the graphical interface. For a sound card, VirtualBox virtualizes Intel HD Audio, Intel ICH AC'97, and SoundBlaster 16 devices. A USB 1.1 controller is emulated, so that any USB devices attached to the host can be seen in the guest. The proprietary extension pack adds a USB 2.0 or USB 3.0 controller and, if VirtualBox acts as an RDP server, it can also use USB devices on the remote RDP client, as if they were connected to the host, although only if the client supports this VirtualBox-specific extension (Oracle provides clients for Solaris, Linux, and Sun Ray thin clients that can do this, and has promised support for other platforms in future versions). Software-based In the absence of hardware-assisted virtualization, versions 6.0.24 and earlier of VirtualBox could adopt a standard software-based virtualization approach. This mode supports 32-bit guest operating systems which run in rings 0 and 3 of the Intel ring architecture. The system reconfigures the guest OS code, which would normally run in ring 0, to execute in ring 1 on the host hardware. Because this code contains many privileged instructions which cannot run natively in ring 1, VirtualBox employs a Code Scanning and Analysis Manager (CSAM) to scan the ring 0 code recursively before its first execution to identify problematic instructions and then calls the Patch Manager (PATM) to perform in-situ patching. This replaces the instruction with a jump to a VM-safe equivalent compiled code fragment in hypervisor memory. The guest user-mode code, running in ring 3, generally runs directly on the host hardware in ring 3. In both cases, VirtualBox uses CSAM and PATM to inspect and patch the offending instructions whenever a fault occurs. VirtualBox also contains a dynamic recompiler, based on QEMU to recompile any real mode or protected mode code entirely (e.g. BIOS code, a DOS guest, or any operating system startup). Using these techniques, VirtualBox could achieve performance comparable to that of VMware in its later versions. The feature was dropped starting with VirtualBox 6.1. Features Snapshots of the RAM and storage that allow reverting to a prior state. Screenshots and screen video capture "Host key" for releasing the keyboard and mouse cursor to the host system if captured (coupled) to the guest system, and for keyboard shortcuts to features such as configuration, restarting, and screenshot. By default, it is the right-side key, or on Mac, the left key. 
Mouse pointer integration, meaning automatic coupling and uncoupling of mouse cursor when moved inside and outside the virtual screen, if supported by guest operating system. Seamless mode – the ability to run virtualized applications side by side with normal desktop applications Shared clipboard Shared folders through "guest additions" software Special drivers and utilities to facilitate switching between systems Ability to specify amount of shared RAM, video memory, and CPU execution cap Ability to emulate multiple screens Command line interaction (in addition to the GUI) Public API (Java, Python, SOAP, XPCOM) to control VM configuration and execution Nested paging for AMD-V and Intel VT (only for processors supporting SLAT and with SLAT enabled) Limited support for 3D graphics acceleration (including OpenGL up to (but not including) 3.0 and Direct3D 9.0c via Wine's Direct3D to OpenGL translation in versions prior to 7.0 or DXVK in later releases) SMP support (up to 32 virtual CPUs per virtual machine), since version 3.0 Teleportation (aka Live Migration) 2D video output acceleration (not to be mistaken with video decoding acceleration), since version 3.1 EFI has been supported since version 3.1 (Windows 7 guests are not supported) Storage emulation Ability to mount virtual hard disk drives and disk images. Virtual optical disc images can be used for booting and sharing files to guest systems lacking networking support. NCQ support for SATA, SCSI and SAS raw disks and partitions SATA disk hotplugging Pass-through mode for solid-state drives Pass-through mode for CD/DVD/BD drives – allows users to play audio CDs, burn optical disks, and play encrypted DVD discs Can disable host OS I/O cache Allows limitation of IO bandwidth PATA, SATA, SCSI, SAS, iSCSI, floppy disk controllers VM disk image encryption using AES128/AES256 Storage support includes: Raw hard disk access – allows physical hard disk partitions on the host system to appear in the guest system VMware Virtual Machine Disk (VMDK) format support – allows exchange of disk images with VMware Microsoft VHD support QEMU qed and qcow disks HDD format disks (only version 2; versions 3 and 4 are not supported) used by Parallels virtualization products Limitations 3D graphics acceleration for Windows guests earlier than Windows 7 was removed in version 6.1. This affected Windows XP and Windows Vista. VirtualBox has a very low transfer rate to and from USB2 devices. For USB3 equipment, device pass-through does not work in older guest OSes, such as Windows Vista and Windows XP, which lack appropriate drivers. However, since version 5.0, VirtualBox has added an experimental USB3 controller (the Renesas uPD720201 xHCI), which enables USB3 in these operating systems. This requires editing some configuration files. Guest Additions for macOS are unavailable at this time. Native Guest Additions for Windows 9x (Windows 95, 98 and ME) are not available. This results in poor performance due to the lack of graphics acceleration with the default limited color depth. External third-party software is available to enable support for 32-bit color mode, resulting in better performance. EFI support is incomplete, e.g. EFI boot for a Windows 7 guest is not supported. Only older versions of DirectX and OpenGL pass-through are supported (the feature can be enabled using the 3D Acceleration option for each VM individually). 
Video RAM is limited to 128 MiB (256 MiB with 2D Video Acceleration enabled) due to technical difficulties (merely changing the GUI to allow the user to allocate more video RAM to a VM or manually editing the configuration file of a VM won't work and will result in a fatal error). Windows 95/98/98SE/ME cannot be installed or work unreliably with modern CPUs (AMD Zen and newer; Intel Tiger Lake and newer) and hardware assisted virtualization (VirtualBox 6.1 and higher). This is due to these OSes not being coded correctly. An open source patch has been developed to fix the issue which also addresses Windows 95/98/98SE bug which makes the system crash when running on new fast CPUs. VirtualBox 7.0 and later is required to run a pristine Windows 11 guest. Full compatibility with Windows 11 is achieved in VirtualBox version 7.0.14 and higher. Host OS The supported operating systems include: Windows 10 64-bit and higher. Support for 64-bit Windows was added with VirtualBox 1.5. Support for 32-bit Windows was removed in 6.0. Support for Windows 2000 was removed in version 1.6. Support for Windows XP was removed in version 5.0. Support for Windows Vista was removed in version 5.2. Support for Windows 7 (64-bit) was removed in version 6.1. Support for Windows 8 (64-bit) was removed in version 7.0. Support for Windows 8.1 (64-bit) was removed in version 7.1. Windows Server 2019 and higher. Support for Windows Server 2003 was removed in 5.0. Support for Windows Server 2008 was removed in 6.0. Support for Windows Server 2008 R2 was removed in version 7.0. Support for Windows Server 2012 and 2016 was removed in version 7.1. Linux distributions macOS from version 11 (Big Sur) to 14 (Sonoma) both ARM and Intel versions: Preliminary Mac OS X support (beta stage) was added with VirtualBox 1.4, full support with 1.6. Support for Mac OS X 10.4 (Tiger) and earlier was removed with VirtualBox 3.1. Support for Mac OS X 10.5 (Leopard) was removed with VirtualBox 4.2. Support for Mac OS X 10.6 (Snow Leopard) and 10.7 (Lion) was removed with VirtualBox 5.0. Support for Mac OS X 10.8 (Mountain Lion) was removed with VirtualBox 5.1. Support for Mac OS X 10.9 (Mavericks) was removed with VirtualBox 5.2. Support for Mac OS X 10.10 (Yosemite) and OS X 10.11 (El Capitan) was removed with VirtualBox 6.0. Support for macOS 10.12 (Sierra) was officially removed with VirtualBox 6.1 (as of 6.1.16 it will still install and run, however). Support for macOS 10.13 (High Sierra) and macOS 10.14 (Mojave) was officially removed with VirtualBox 7.0. Support for macOS 10.15 (Catalina) was officially removed with VirtualBox 7.1. Oracle Solaris Guest additions Some features require the installation of the closed-source "VirtualBox Extension Pack": Support for a virtual USB 2.0/3.0 controller (EHCI/xHCI) (Starting with VirtualBox 7.0, this functionality was integrated into the GPL version instead.) VirtualBox RDP: support for the proprietary remote connection protocol developed by Microsoft and Citrix Systems. PXE boot for Intel cards. VM disk image encryption Webcam support While VirtualBox itself is free to use and is distributed under an open source license the VirtualBox Extension Pack is licensed under the VirtualBox Personal Use and Evaluation License (PUEL). Personal use of the extension pack is free but commercial users need to purchase a license. Guest Additions are installed within each guest virtual machine which supports them; the Extension Pack is installed on the host running VirtualBox.
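As an illustration of the command-line interaction mentioned among the features above, the following minimal Python sketch drives the VBoxManage tool (assumed to be installed and on the PATH) to create, configure, and start a headless VM. The VM name, OS type, memory size, and disk path are illustrative assumptions rather than values from the article, and the exact subcommand set can vary slightly between VirtualBox versions.

```python
import subprocess

def vbox(*args):
    """Run a VBoxManage subcommand and raise if it returns a non-zero exit code."""
    subprocess.run(["VBoxManage", *args], check=True)

VM = "demo-vm"          # hypothetical VM name
DISK = "demo-vm.vdi"    # hypothetical disk image path

# Register a new VM and give it basic hardware: 2 GiB RAM, 2 vCPUs, NAT networking.
vbox("createvm", "--name", VM, "--ostype", "Ubuntu_64", "--register")
vbox("modifyvm", VM, "--memory", "2048", "--cpus", "2", "--nic1", "nat")

# Create a 10 GiB virtual disk in the native VDI format and attach it to a SATA controller.
vbox("createmedium", "disk", "--filename", DISK, "--size", "10240")
vbox("storagectl", VM, "--name", "SATA", "--add", "sata", "--controller", "IntelAhci")
vbox("storageattach", VM, "--storagectl", "SATA", "--port", "0", "--device", "0",
     "--type", "hdd", "--medium", DISK)

# Boot the VM without opening a GUI window.
vbox("startvm", VM, "--type", "headless")
```

The same operations are also exposed through the public API (Java, Python, SOAP, XPCOM) noted in the feature list, so a management front end could control VMs without shelling out to VBoxManage at all.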
Technology
Virtualization
null
9628780
https://en.wikipedia.org/wiki/Animal%20migration
Animal migration
Animal migration is the relatively long-distance movement of individual animals, usually on a seasonal basis. It is the most common form of migration in ecology. It is found in all major animal groups, including birds, mammals, fish, reptiles, amphibians, insects, and crustaceans. The cause of migration may be local climate, local availability of food, the season of the year or for mating. To be counted as a true migration, and not just a local dispersal or irruption, the movement of the animals should be an annual or seasonal occurrence, or a major habitat change as part of their life. An annual event could include Northern Hemisphere birds migrating south for the winter, or wildebeest migrating annually for seasonal grazing. A major habitat change could include young Atlantic salmon or sea lamprey leaving the river of their birth when they have reached a few inches in size. Some traditional forms of human migration fit this pattern. Migrations can be studied using traditional identification tags such as bird rings, or tracked directly with electronic tracking devices. Before animal migration was understood, folklore explanations were formulated for the appearance and disappearance of some species, such as that barnacle geese grew from goose barnacles. Overview Concepts Migration can take very different forms in different species, and has a variety of causes. As such, there is no simple accepted definition of migration. One of the most commonly used definitions, proposed by the zoologist J. S. Kennedy is Migration encompasses four related concepts: persistent straight movement; relocation of an individual on a greater scale (in both space and time) than its normal daily activities; seasonal to-and-fro movement of a population between two areas; and movement leading to the redistribution of individuals within a population. Migration can be either obligate, meaning individuals must migrate, or facultative, meaning individuals can "choose" to migrate or not. Within a migratory species or even within a single population, often not all individuals migrate. Complete migration is when all individuals migrate, partial migration is when some individuals migrate while others do not, and differential migration is when the difference between migratory and non-migratory individuals is based on discernible characteristics like age or sex. Irregular (non-cyclical) migrations such as irruptions can occur under pressure of famine, overpopulation of a locality, or some more obscure influence. Seasonal Seasonal migration is the movement of various species from one habitat to another during the year. Resource availability changes depending on seasonal fluctuations, which influence migration patterns. Some species such as Pacific salmon migrate to reproduce; every year, they swim upstream to mate and then return to the ocean. Temperature is a driving factor of migration that is dependent on the time of year. Many species, especially birds, migrate to warmer locations during the winter to escape poor environmental conditions. Circadian Circadian migration is where birds utilise circadian rhythm (CR) to regulate migration in both fall and spring. In circadian migration, clocks of both circadian (daily) and circannual (annual) patterns are used to determine the birds' orientation in both time and space as they migrate from one destination to the next. 
This type of migration is advantageous in birds that, during the winter, remain close to the equator, and also allows the monitoring of the auditory and spatial memory of the bird's brain to remember an optimal site of migration. These birds also have timing mechanisms that provide them with the distance to their destination. Tidal Tidal migration is the use of tides by organisms to move periodically from one habitat to another. This type of migration is often used in order to find food or mates. Tides can carry organisms horizontally and vertically for as little as a few nanometres to even thousands of kilometres. The most common form of tidal migration is to and from the intertidal zone during daily tidal cycles. These zones are often populated by many different species and are rich in nutrients. Organisms like crabs, nematodes, and small fish move in and out of these areas as the tides rise and fall, typically about every twelve hours. The cycle movements are associated with foraging of marine and bird species. Typically, during low tide, smaller or younger species will emerge to forage because they can survive in the shallower water and have less chance of being preyed upon. During high tide, larger species can be found due to the deeper water and nutrient upwelling from the tidal movements. Tidal migration is often facilitated by ocean currents. Diel While most migratory movements occur on an annual cycle, some daily movements are also described as migration. Many aquatic animals make a diel vertical migration, travelling a few hundred metres up and down the water column, while some jellyfish make daily horizontal migrations of a few hundred metres. In specific groups Different kinds of animals migrate in different ways. In birds Approximately 1,800 of the world's 10,000 bird species migrate long distances each year in response to the seasons. Many of these migrations are north-south, with species feeding and breeding in high northern latitudes in the summer and moving some hundreds of kilometres south for the winter. Some species extend this strategy to migrate annually between the Northern and Southern Hemispheres. The Arctic tern has the longest migration journey of any bird: it flies from its Arctic breeding grounds to the Antarctic and back again each year, a distance of at least , giving it two summers every year. Bird migration is controlled primarily by day length, signalled by hormonal changes in the bird's body. On migration, birds navigate using multiple senses. Many birds use a sun compass, requiring them to compensate for the sun's changing position with time of day. Navigation involves the ability to detect magnetic fields. In fish Most fish species are relatively limited in their movements, remaining in a single geographical area and making short migrations to overwinter, to spawn, or to feed. A few hundred species migrate long distances, in some cases of thousands of kilometres. About 120 species of fish, including several species of salmon, migrate between saltwater and freshwater (they are 'diadromous'). Forage fish such as herring and capelin migrate around substantial parts of the North Atlantic ocean. The capelin, for example, spawn around the southern and western coasts of Iceland; their larvae drift clockwise around Iceland, while the fish swim northwards towards Jan Mayen island to feed and return to Iceland parallel with Greenland's east coast. 
In the 'sardine run', billions of Southern African pilchard Sardinops sagax spawn in the cold waters of the Agulhas Bank and move northward along the east coast of South Africa between May and July. In insects Some winged insects such as locusts and certain butterflies and dragonflies with strong flight migrate long distances. Among the dragonflies, species of Libellula and Sympetrum are known for mass migration, while Pantala flavescens, known as the globe skimmer or wandering glider dragonfly, makes the longest ocean crossing of any insect: between India and Africa. Exceptionally, swarms of the desert locust, Schistocerca gregaria, flew westwards across the Atlantic Ocean for during October 1988, using air currents in the Inter-Tropical Convergence Zone. In some migratory butterflies, such as the monarch butterfly and the painted lady, no individual completes the whole migration. Instead, the butterflies mate and reproduce on the journey, and successive generations continue the migration. In mammals Some mammals undertake exceptional migrations; reindeer have one of the longest terrestrial migrations on the planet, reaching as much as per year in North America. However, over the course of a year, grey wolves move the most. One grey wolf covered a total cumulative annual distance of . Mass migration occurs in mammals such as the Serengeti 'great migration', an annual circular pattern of movement with some 1.7 million wildebeest and hundreds of thousands of other large game animals, including gazelles and zebra. More than 20 such species engage, or used to engage, in mass migrations. Of these migrations, those of the springbok, black wildebeest, blesbok, scimitar-horned oryx, and kulan have ceased. Long-distance migrations occur in some batsnotably the mass migration of the Mexican free-tailed bat between Oregon and southern Mexico. Migration is important in cetaceans, including whales, dolphins and porpoises; some species travel long distances between their feeding and their breeding areas. Humans are mammals, but human migration, as commonly defined, is when individuals often permanently change where they live, which does not fit the patterns described here. An exception is some traditional migratory patterns such as transhumance, in which herders and their animals move seasonally between mountains and valleys, and the seasonal movements of nomads. In other animals Among the reptiles, adult sea turtles migrate long distances to breed, as do some amphibians. Hatchling sea turtles, too, emerge from underground nests, crawl down to the water, and swim offshore to reach the open sea. Juvenile green sea turtles make use of Earth's magnetic field to navigate. Some crustaceans migrate, such as the largely-terrestrial Christmas Island red crab, which moves en masse each year by the millions. Like other crabs, they breathe using gills, which must remain wet, so they avoid direct sunlight, digging burrows to shelter from the sun. They mate on land near their burrows. The females incubate their eggs in their abdominal brood pouches for two weeks. Then they return to the sea to release their eggs at high tide in the moon's last quarter. The larvae spend a few weeks at sea and then return to land. Tracking Migration Scientists gather observations of animal migration by tracking their movements. Animals were traditionally tracked with identification tags such as bird rings for later recovery. 
However, no information was obtained about the actual route followed between release and recovery, and only a fraction of tagged individuals were recovered. More convenient, therefore, are electronic devices such as radio-tracking collars that can be followed by radio, whether handheld, in a vehicle or aircraft, or by satellite. GPS animal tracking enables accurate positions to be broadcast at regular intervals, but the devices are inevitably heavier and more expensive than those without GPS. An alternative is the Argos Doppler tag, also called a 'Platform Transmitter Terminal' (PTT), which sends regularly to the polar-orbiting Argos satellites; using Doppler shift, the animal's location can be estimated, relatively roughly compared to GPS, but at a lower cost and weight. A technology suitable for small birds which cannot carry the heavier devices is the geolocator which logs the light level as the bird flies, for analysis on recapture. There is scope for further development of systems able to track small animals globally. Radio-tracking tags can be fitted to insects, including dragonflies and bees. In culture Before animal migration was understood, various folklore and erroneous explanations were formulated to account for the disappearance or sudden arrival of birds in an area. In Ancient Greece, Aristotle proposed that robins turned into redstarts when summer arrived. The barnacle goose was explained in European Medieval bestiaries and manuscripts as either growing like fruit on trees, or developing from goose barnacles on pieces of driftwood. Another example is the swallow, which was once thought, even by naturalists such as Gilbert White, to hibernate either underwater, buried in muddy riverbanks, or in hollow trees.
Biology and health sciences
Ethology
null
9630226
https://en.wikipedia.org/wiki/Toei%20Subway
Toei Subway
The is one of two subway systems in Tokyo, the other being Tokyo Metro. The Toei Subway lines were originally licensed to the Teito Rapid Transit Authority (the predecessor of Tokyo Metro) but were constructed by the Tokyo Metropolitan Government following transfers of the licenses for each line. The subway has run at a financial loss for most of its history due to high construction expenses, particularly for the Oedo Line. However, it reported its first net profit of ¥3.13bn in FY2006. The Toei Subway is operated by the Tokyo Metropolitan Bureau of Transportation. Tokyo Metro and Toei trains form completely separate networks. While users of prepaid rail passes can freely interchange between the two networks, regular ticket holders must purchase a second ticket, or a special transfer ticket, to change from a Toei line to a Tokyo Metro line and vice versa. The sole exceptions are on the segment of the Toei Mita Line between Meguro and Shirokane-Takanawa, where the platforms are shared with the Tokyo Metro Namboku Line, and at Kudanshita on the Shinjuku Line, where the platform is shared with the Tokyo Metro Hanzomon Line. At these stations, it is possible to change between the networks without passing through a ticket gate. Branding Apart from its own logo, a stylized ginkgo leaf used as the symbol of the Tokyo Metropolis, Toei Subway shares a design language in common with Tokyo Metro. Lines are indicated by a letter in Futura Bold on a white background inside a roundel in the line color, with signs indicating stations adding the station number as well. Line colors and letter-designations are complementary with Tokyo Metro's, with none overlapping (e.g., the Mita Line's letter-designation is “I”, rather than “M”, which is used by the Tokyo Metro Marunouchi Line). Informational signage is also designed identically, with platform-level station placards differing only in the placement of the bands in the line color: Toei Subway has two thin bands at the top and bottom, while Tokyo Metro has one wider band at the bottom (or, in the case of long, narrow placards, in a continuous band extending to the left and right along the wall itself). Lines The Toei Subway is made up of four lines operating on of route. Two of the lines have different colors for their station signs: Asakusa (Vermilion ) and Shinjuku (Lime ). The Ōedo Line formerly had a darker magenta (O) as its designated color. Through services to other lines The different gauges of the Toei lines arose in part due to the need to accommodate through services with private suburban railway lines. Through services currently in regular operation include: Mita Line shares tracks of the section from Meguro to Shirokane-Takanawa with Tokyo Metro Namboku Line, . According to the company, an average of 2.34 million people used the company's four subway routes each day in 2008. The company made a profit of ¥12.2 billion in 2009. Stations There are a total of 99 unique stations (i.e., counting stations served by multiple lines only once) on the Toei Subway network, or 106 total stations if each station on each line counts as one station. Almost all stations are located within the 23 special wards, with many located in areas not served by the complementary Tokyo Metro network. Network map Rolling stock
Technology
Japan
null
9630609
https://en.wikipedia.org/wiki/Giant%20w%C4%93t%C4%81
Giant wētā
Giant wētā are several species of wētā in the genus Deinacrida of the family Anostostomatidae. Giant wētā are endemic to New Zealand and all but one species are protected by law because they are considered at risk of extinction. There are eleven species of giant wētā, most of which are larger than other wētā, despite the latter also being large by insect standards. Large species can be up to , not inclusive of legs and antennae, with body mass usually no more than . One gravid captive female reached a mass of about , making it one of the heaviest insects in the world and heavier than a sparrow. This is, however, abnormal, as this individual was unmated and retained an abnormal number of eggs. The largest species of giant wētā is the Little Barrier Island giant wētā, also known as the wētāpunga. Giant wētā tend to be less social and more passive than other wētā. Their genus name, Deinacrida, means "terrible grasshopper", from the Greek word δεινός (deinos, meaning "terrible", "potent", or "fearfully great"), in the same way dinosaur means "terrible lizard". They are found primarily on New Zealand offshore islands, having been almost exterminated on the mainland islands by introduced mammalian pests. Habitat and distribution Most populations of giant wētā have been in decline since humans began modifying the New Zealand environment. All but one giant wētā species is protected by law because they are considered at risk of extinction. Three arboreal giant wētā species are found in the north of New Zealand and are now restricted to mammal-free habitats. This is because the declining abundance of most wētā species, particularly giant wētā, can be attributed to the introduction of mammalian predators, habitat destruction, and habitat modification by introduced mammalian browsers. New populations of some wētā have been established in locations, particularly on islands, where these threats have been eliminated or severely reduced in order to reduce the risk of extinction. Deinacrida heteracantha and D. fallai are found only on near-shore islands that have no introduced predators (Te Hauturu-o-Toi and the Poor Knights Islands). The closely related species D. mahoenui is restricted to habitat fragments in North Island. Two closely related giant wētā species are less arboreal. Deinacrida rugosa is restricted to mammal-free reserves and D. parva is found near Kaikōura in the South Island of New Zealand. Many giant wētā species are alpine specialists. Five species are only found at high elevation in South Island. The scree wētā D. connectens lives about above sea level and freezes solid when temperatures drop below . The alpine species tend, however, to be smaller on average than the other, ground-dwelling species. Species list Deinacrida carinata, Herekopare wētā Deinacrida connectens, Scree wētā Deinacrida elegans, Bluff wētā Deinacrida fallai, Poor Knights giant wētā Deinacrida heteracantha, Little Barrier Island giant wētā Deinacrida mahoenui, Mahoenui giant wētā Deinacrida parva, Kaikoura giant wētā Deinacrida pluvialis, Mt Cook giant wētā Deinacrida rugosa, Cook Strait giant wētā Deinacrida talpa, Giant mole wētā Deinacrida tibiospina, Mt Arthur giant wētā Mating and reproduction Scramble competition polygyny Giant wētā are observed to be a largely solitary genus, with little aggregation seen in mature individuals. Most species within the Deinacrida genus exhibit scramble competition polygyny, where male wētā travel to find mature females within an area.
Males of species such as the alpine scree wētā (Deinacrida connectens) aim to detect as many females as possible to mate with, increasing their reproductive success. Strong phenotypic selection for movement ability benefits the reproductive success of the males, as individuals who can cover greater distances are likely to gain access to a greater quantity of females. Sexual dimorphism Research suggests a correlation between the body size of female Cook Strait giant wētā (Deinacrida rugosa) and the quantity of sperm deposited by their male mates. Male wētā produce spermatophores (small packets containing sperm) which are transferred to the female wētā during copulation. However, it has been established that males transfer a higher quantity of spermatophores to lighter females than to their heavier counterparts, suggesting an intentional allocation of reproductive effort. Because scramble competition polygyny is prevalent in giant wētā populations, and larger females participate in more mating behaviours, there is increased competition between the males mating with larger females. This is because the larger female wētā presumably mate more frequently, increasing competition between individual males for paternity. Previously, it was thought that male wētā would allocate more of their reproductive energy to larger females, as larger female invertebrates are often more fertile and can produce a higher quantity of offspring at one time. However, this study indicates males may choose to supply the smaller females with more spermatophores as a way to ensure paternity and decrease the risk of sperm competition, which may also be true of other giant wētā species. Courtship and mating The mating systems observed in giant wētā species like the scree wētā (Deinacrida connectens) and Cook Strait giant wētā (Deinacrida rugosa) likely led to the development of sexual dimorphism, in which males develop lighter, more slender bodies and longer legs that allow them to cover distance more efficiently. Similarly, males who have a larger overall body size may have a competitive advantage when engaging in scramble competitions with other males for access to females, through their ability to overpower smaller rivals. Although pairs are established overnight, the Little Barrier Island giant wētā (Deinacrida heteracantha), despite being nocturnal, can later be found mating and engaging in pre-copulatory and post-copulatory behaviour during the day. It is also implied that very little courtship behaviour occurs; instead, pairs engage in repetitive copulation to promote the maturation of eggs or spermatophores. Additionally, there is very limited information about parental care in giant wētā species, but similar species groups of ground wētā (Hemiandrus) have shown that females provide their eggs and larvae with care, and males provide females with a spermatophylax to ensure the female has essential nutrients to produce healthy young. It is likely that a similar process occurs in giant wētā species, particularly in ground-dwelling species including D. connectens and D. rugosa.
The scree wētā (Deinacrida connectens) has been observed consuming small fleshy fruits and dispersing the remaining seeds; however, the dispersal rate of each scree wētā individual largely depends on its size. This may also be true of other giant wētā species, but there is no currently published supporting literature. Communication and social behaviours Recent studies have shown the use of vibrational communication between Cook Strait giant wētā as a display of intrasexual agonism. Male individuals produce low-frequency sounds (~37 Hz) through a process called dorso-ventral tremulation; these signals then travel through different materials found in their environments, including bark and leaf litter. The sound is produced by the males moving their bodies (specifically the abdominal region) in an up-and-down fashion and is used to signal competition to other males in the presence of a nearby female. It is implied that these sounds do not have a direct role in courtship (male-female) behaviour, but are solely a form of intrasexual competition. Additionally, it was found that the males who initiate the tremulation behaviour have greater mating success, rather than the individuals with the more notable signals. Behaviour involving vibrational signals as a form of communication is widely observed in the Orthopteran order. Though it has not yet been described, other giant wētā likely also display these vibrational communication behaviours. Threats New Zealand's endemic species have evolved over millions of years without the presence of mammalian predators, other than the two native bat species: the long-tailed bat (Chalinolobus tuberculatus) and the lesser short-tailed bat (Mystacina tuberculata), which primarily feed on pollen, nectar and invertebrates. This has meant that, over time, many native species have lost their ability to avoid predation by flying, through lack of necessity. This is particularly prominent in many birds such as the kiwi and insects including the wētā. Since humans began inhabiting New Zealand in ~1280 AD, there has been a consistent introduction of mammalian and bird species, many of which are predators of native fauna. Beyond introduced species, changes to the climate and habitat ranges have heavily impacted the populations of giant wētā. History of giant wētā conservation and future directions Mahoenui giant wētā In 1962, the presumably extinct Mahoenui giant wētā was found as a small population in the central North Island of New Zealand. The population was found thriving in a patch of gorse (Ulex europaeus), an introduced plant species widely recognised as an invasive weed. However, due to its spiny nature, the gorse deterred predators of the Mahoenui giant wētā, leaving the population to grow. Similarly, the gorse provided a habitat and food source for the giant wētā. Alongside the giant wētā, feral goats (Capra hircus) were found feeding on the gorse plants, leading to the regeneration of the gorse through digestion and excretion of the plant matter. This mutualistic relationship between the goats, gorse and the Mahoenui giant wētā has led to New Zealand's Department of Conservation turning the area into the Mahoenui Giant Wētā Scientific Reserve, where all three species are protected. Little Barrier Island giant wētā (wētāpunga) Located to the north of Auckland city, Te Hauturu-o-Toi, also known as Little Barrier Island, is home to the largest of the giant wētā species, the wētāpunga (Deinacrida heteracantha).
New Zealand's oldest nature reserve, protected since 1895, the island has remained free of introduced rodents. This has allowed large populations of wētāpunga to form free of disturbance. Because introduced pests have impacted other endemic species on other islands, wētāpunga have been translocated to Tiritiri Matangi Island and Motuora Island in the Auckland region. The captive-bred, translocated individuals would act as a buffer if endangerment of the wētāpunga were to occur (due to infiltration by introduced species), while also helping to maintain the native ecosystems involving other insects at these sites. Future directions for giant wētā conservation The future of conservation of endemic species in New Zealand, particularly giant wētā species, relies on predator control and minimising habitat loss. Maintenance and strict control of pre-existing predator-free islands such as Little Barrier Island and Tiritiri Matangi will allow populations of giant wētā species to grow, eventually allowing them to be translocated back to the mainland for repopulation once mammalian pests and predators have been minimised. Additionally, mitigating urbanisation in areas which giant wētā inhabit, providing information to the public about wētā, and further captive breeding and genetic management may help to prevent further endangerment.
Biology and health sciences
Orthoptera
Animals
3133158
https://en.wikipedia.org/wiki/Basket%20weaving
Basket weaving
Basket weaving (also basketry or basket making) is the process of weaving or sewing pliable materials into three-dimensional artifacts, such as baskets, mats, mesh bags or even furniture. Craftspeople and artists specialized in making baskets may be known as basket makers and basket weavers. Basket weaving is also a rural craft. Basketry is made from a variety of fibrous or pliable materials—anything that will bend and form a shape. Examples include pine, straw, willow (esp. osier), oak, wisteria, forsythia, vines, stems, fur, hide, grasses, thread, and fine wooden splints. There are many applications for basketry, from simple mats to hot air balloon gondolas. Many Indigenous peoples are renowned for their basket-weaving techniques. History While basket weaving is one of the widest spread crafts in the history of any human civilization, it is hard to say just how old the craft is, because natural materials like wood, grass, and animal remains decay naturally and constantly. So without proper preservation, much of the history of basket making has been lost and is simply speculated upon. Middle East The earliest reliable evidence for basket weaving technology in the Middle East comes from the Pre-Pottery Neolithic phases of Tell Sabi Abyad II and Çatalhöyük. Although no actual basketry remains were recovered, impressions on floor surfaces and on fragments of bitumen suggest that basketry objects were used for storage and architectural purposes. The extremely well-preserved Early Neolithic ritual cave site of Nahal Hemar yielded thousands of intact perishable artefacts, including basketry containers, fabrics, and various types of cordage. Additional Neolithic basketry impressions have been uncovered at Tell es-Sultan (Jericho), Netiv HaGdud, Beidha, Shir, Tell Sabi Abyad III, Domuztepe, Umm Dabaghiyah, Tell Maghzaliyah, Tepe Sarab, Jarmo, and Ali Kosh. The oldest known baskets were discovered in Faiyum in upper Egypt and have been carbon dated to between 10,000 and 12,000 years old, earlier than any established dates for archaeological evidence of pottery vessels, which were too heavy and fragile to suit far-ranging hunter-gatherers. The oldest and largest complete basket, discovered in the Negev in the Middle East, dates to 10,500 years old. However, baskets seldom survive, as they are made from perishable materials. The most common evidence of a knowledge of basketry is an imprint of the weave on fragments of clay pots, formed by packing clay on the walls of the basket and firing. Industrial Revolution During the Industrial Revolution, baskets were used in factories and for packing and deliveries. Wicker furniture became fashionable in Victorian society. World Wars During the World Wars some pannier baskets were used for dropping supplies of ammunition and food to the troops. Types Basketry may be classified into four types: Coiled basketry, using grasses, rushes and pine needles Plaiting basketry, using materials that are wide and braid-like: palms, yucca or New Zealand flax Twining basketry, using materials from roots and tree bark. This is a weaving technique where two or more flexible weaving elements ("weavers") cross each other as they weave through the stiffer radial spokes. Wicker and Splint basketry, using materials like reed, cane, willow, oak, and ash Materials used in basketry Weaving with rattan core (also known as reed) is one of the more popular techniques being practiced, because it is easily available. It is pliable, and when woven correctly, it is very sturdy. 
Also, while traditional materials like oak, hickory, and willow might be hard to come by, reed is plentiful and can be cut into any size or shape that might be needed for a pattern. This includes flat reed, which is used for most square baskets; oval reed, which is used for many round baskets; and round reed, which is used for twining. Another advantage is that reed can also be dyed easily to look like oak or hickory. Many types of plants can be used to create baskets: dog rose, honeysuckle, blackberry briars (once the thorns have been scraped off) and many other creepers. Willow was used for its flexibility and the ease with which it could be grown and harvested. Willow baskets were commonly referred to as wickerwork in England. Water hyacinth is used as a base material in some areas where the plant has become a serious pest. For example, a group in Ibadan led by Achenyo Idachaba has been creating handicrafts in Nigeria. Other materials used in basketry include cedar bark, cedar root, spruce root, cattail leaves and tule. Some elements that may be used for decoration include maidenhair fern stems, horsetail root, red cherry bark and a variety of grasses. These materials vary widely in color and appearance. Vine Because vines have always been readily accessible and plentiful for weavers, they have been a common choice for basketry purposes. The runners are preferable to the vine stems because they tend to be straighter. Materials ranging from pliable vines like kudzu to more rigid, woody vines like bittersweet, grapevine, honeysuckle, wisteria and smokevine are good basket weaving materials. Although many vines are not uniform in shape and size, they can be manipulated and prepared in a way that makes them easily used in traditional and contemporary basketry. Most vines can be split and dried to store until use. Once vines are ready to be used, they can be soaked or boiled to increase pliability. Wicker The types of baskets that reed is used for are most often referred to as "wicker" baskets, though another popular type of weaving known as "twining" is also a technique used in most wicker baskets. Process The parts of a basket are the base, the side walls, and the rim. A basket may also have a lid, handle, or embellishments. Most baskets begin with a base. The base can either be woven from reed or made of wood. A wooden base can come in many shapes to make a wide variety of shapes of baskets. The "static" pieces of the work are laid down first. In a round basket, they are referred to as "spokes"; in other shapes, they are called "stakes" or "staves". Then the "weavers" are used to fill in the sides of a basket. A wide variety of patterns can be made by changing the size, colour, or placement of a certain style of weave. To achieve a multi-coloured effect, aboriginal artists first dye the twine and then weave the twines together in complex patterns. Basketry around the world Asia South Asia Basketry exists throughout the Indian subcontinent. Since palms are found in the south, basket weaving with this material has a long tradition in Tamil Nadu and surrounding states. East Asia Chinese bamboo weaving, Taiwanese bamboo weaving, Japanese bamboo weaving and Korean bamboo weaving go back centuries. Bamboo is the prime material for making all sorts of baskets, since it is the main material that is available and suitable for basketry. Other materials that may be used are rattan and hemp palm. In Japan, bamboo weaving is registered as a traditional craft, with a range of fine and decorative arts.
Southeast Asia Southeast Asia has thousands of sophisticated forms of indigenous basketry produce, many of which use ethnic-endemic techniques. Materials used vary considerably, depending on the ethnic group and the basket art intended to be made. Bamboo, grass, banana, reeds, and trees are common mediums. Oceania Polynesia Basketry is a traditional practice across the Pacific islands of Polynesia. It uses natural materials like pandanus, coconut fibre, hibiscus fibre, and New Zealand flax according to local custom. Baskets are used for food and general storage, carrying personal goods, and fishing. Australia Basketry has been traditionally practised by the women of many Aboriginal Australian peoples across the continent for centuries. The Ngarrindjeri women of southern South Australia have a tradition of coiled basketry, using the sedge grasses growing near the lakes and mouth of the Murray River. The fibre basketry of the Gunditjmara people is noted as a cultural tradition, in the World Heritage Listing of the Budj Bim Cultural Landscape in western Victoria, Australia, used for carrying the short-finned eels that were farmed by the people in an extensive aquaculture system. North America Native American basketry Native Americans traditionally make their baskets from the materials available locally. Arctic and Subarctic Arctic and Subarctic tribes use sea grasses for basketry. At the dawn of the 20th century, Inupiaq men began weaving baskets from baleen, a substance derived from whale jaws, and incorporating walrus ivory and whale bone in basketry. Northeastern In Mi'kma'ki (composed of now Nova Scotia, New Brunswick and eastern Quebec, Canada), the Mi’kmaq used plants and animals for their fibre and dye sources in their basketry. Two archaeological sites revealed traditional materials of moose-tendon fibres, cattail plant (Typha latifolia), true rush (Scirpus lacustris), sweetgrass (Hierochloe odorata), American beach grass (Amophilia brevingulata), birch tree (Betula papyrifera), white cedar (Thuja occidentalis), basswood (Tilia Americana), black ash (Fraxinus nigra), white ash (Fraxinus americana), poplar (Populus tremuloides), and red maple (Acer rubrum). Black ash, or wosqoq, basketry is a vital part of Mi'kmaw culture and art. Baskets were functional, used in agriculture, and also decorative. Mi'kmaw basket makers were renowned for their intricate patterns woven in bright colours. In New England, traditional baskets are woven from Swamp Ash. The wood is peeled off a felled log in strips, following the growth rings of the tree. In Maine and the Great Lakes regions, traditional baskets are woven from black ash splints. Pack baskets from the Adirondack region have traditionally been woven from black ash or willow. Baskets are also woven from sweet grass, as is traditionally done by Canadian indigenous peoples. Birchbark is used throughout the Subarctic, by a wide range of peoples from the Dene to Ojibwa to Mi'kmaq. Birchbark baskets are often embellished with dyed porcupine quills. Some of the more notable styles are Nantucket Baskets and Williamsburg Baskets. Nantucket Baskets are large and bulky, while Williamsburg Baskets can be any size, so long as the two sides of the basket bow out slightly and get larger as it is woven up. Kelly Church (Grand Traverse Band of Ottawa and Chippewa Indians) Edith Bondie (Chippewa Indians) Southeastern Southeastern peoples, such as the Atakapa, Cherokee, Choctaw, and Chitimacha, traditionally use split river cane for basketry. 
A particularly difficult technique for which these peoples are known is double-weave or double-wall basketry, in which each basket is formed by an interior and an exterior wall seamlessly woven together. Doubleweave, although rare, is still practiced today, for instance by Mike Dart (Cherokee Nation). Rowena Bradley (Cherokee Nation) Mike Dart (Cherokee Nation) Northwestern Northwestern peoples use spruce root, cedar bark, and swampgrass. Ceremonial basketry hats are particularly valued by Northwest peoples and are worn today at potlatches. Traditionally, women wove basketry hats, and men painted designs on them. Delores Churchill is a Haida from Alaska who began weaving at a time when Haida basketry was in decline, but she and others have ensured it will continue by teaching the next generation. Delores Churchill (Haida) Joe Feddersen (Colville) Boeda Strand (Snohomish) Californian and Great Basin Indigenous peoples of California and the Great Basin are known for their basketry skills. Coiled baskets are particularly common, woven from sumac, yucca, willow, and basket rush. Many works by Californian basket makers are held in museums. Elsie Allen (Pomo people) Mary Knight Benson (Pomo people) William Ralganal Benson (Pomo people) Carrie Bethel (Mono Lake Paiute) Loren Bommelyn (Tolowa) Nellie Charlie (Mono Lake Paiute/Kucadikadi) Louisa Keyser "Dat So La Lee" (Washoe people) is arguably the most famous Native American weaver. Lena Frank Dick (1889–1965) (Washoe people) followed Keyser by one generation, and her baskets were frequently mistaken for Keyser's. L. Frank (Tongva-Acagchemem) Sarah Jim Mayo (Washoe) Mabel McKay (Pomo people) Essie Pinola Parrish (Kashaya-Pomo) Lucy Telles (Mono Lake Paiute - Kucadikadi) Petra Pico (Ventureño Chumash) Southwestern Annie Antone (Tohono O'odham) Damian Jim (Navajo) Terrol Dew Johnson (Tohono O'odham) Mexico In northwestern Mexico, the Seri people continue to "sew" baskets using splints of the limberbush plant, Jatropha cuneata. Other North American basketry Matt Tommey is a North American artist who weaves sculptural baskets out of kudzu. Mary Jackson is a world-famous African-American sweetgrass basket weaver. In 2008, she was named a MacArthur Fellow for her basket weaving. Elizabeth F. Kinlaw is a North American basketweaver known for her sweetgrass baskets and whose work has been displayed in the Smithsonian Institution. Lydia Kear Whaley (1840–1926), Appalachian basket weaver Europe In Greece, basket weaving is practiced by the anchorite monks of Mount Athos. Africa Senegal Wolof baskets are coiled baskets created by the Wolof tribe of Senegal. Basket making is considered a women's craft, which has been passed down across generations. The Wolof baskets were traditionally made using thin cuts of palm frond and a thick grass called njodax; however, contemporary Wolof baskets often incorporate plastic as a replacement for the palm fronds and/or re-use discarded prayer mat materials. These baskets are strong and used for laundry hampers, planters, bowls, rugs, and more. South Africa Zulu baskets are a traditional craft in the KwaZulu-Natal province of South Africa and were used for utilitarian purposes including holding water, beer, or food; the baskets can take many months to weave. Starting in the late 1960s, Zulu basketry was a dying art form due to the introduction of tin and plastic water containers.
Kjell Lofroth, a Swedish minister living in South Africa, noticed a decline in the local crafts and, after a drought in the KwaZulu-Natal province, formed the Vukani Arts Association (English: wake up and get going) to financially support single women and their families. At that time, in the late 1960s, only three elderly women knew the craft of Zulu basket weaving, but through the Vukani Arts Association they taught others and revived the art. Beauty Ngxongo is the most renowned living Zulu basket weaver. Zulu telephone wire baskets are a contemporary craft. These are often brightly colored baskets made with telephone wire (sometimes from a recycled source), which substitutes for native grasses.
Technology
Techniques_2
null
3134585
https://en.wikipedia.org/wiki/Charge%20density
Charge density
In electromagnetism, charge density is the amount of electric charge per unit length, surface area, or volume. Volume charge density (symbolized by the Greek letter ρ) is the quantity of charge per unit volume, measured in the SI system in coulombs per cubic meter (C⋅m⁻³), at any point in a volume. Surface charge density (σ) is the quantity of charge per unit area, measured in coulombs per square meter (C⋅m⁻²), at any point on a surface charge distribution on a two-dimensional surface. Linear charge density (λ) is the quantity of charge per unit length, measured in coulombs per meter (C⋅m⁻¹), at any point on a line charge distribution. Charge density can be either positive or negative, since electric charge can be either positive or negative. Like mass density, charge density can vary with position. In classical electromagnetic theory charge density is idealized as a continuous scalar function of position r, like a fluid, and λ, σ, and ρ are usually regarded as continuous charge distributions, even though all real charge distributions are made up of discrete charged particles. Due to the conservation of electric charge, the charge density in any volume can only change if an electric current flows into or out of the volume. This is expressed by a continuity equation which links the rate of change of charge density and the current density J. Since all charge is carried by subatomic particles, which can be idealized as points, the concept of a continuous charge distribution is an approximation, which becomes inaccurate at small length scales. A charge distribution is ultimately composed of individual charged particles separated by regions containing no charge. For example, the charge in an electrically charged metal object is made up of conduction electrons moving randomly in the metal's crystal lattice. Static electricity is caused by surface charges consisting of electrons and ions near the surface of objects, and the space charge in a vacuum tube is composed of a cloud of free electrons moving randomly in space. The charge carrier density in a conductor is equal to the number of mobile charge carriers (electrons, ions, etc.) per unit volume. The charge density at any point is equal to the charge carrier density multiplied by the elementary charge on the particles. However, because the elementary charge on an electron is so small (1.6⋅10⁻¹⁹ C) and there are so many of them in a macroscopic volume (there are about 10²² conduction electrons in a cubic centimeter of copper) the continuous approximation is very accurate when applied to macroscopic volumes, and even to microscopic volumes above the nanometer level. At even smaller scales, of atoms and molecules, due to the uncertainty principle of quantum mechanics, a charged particle does not have a precise position but is represented by a probability distribution, so the charge of an individual particle is not concentrated at a point but is 'smeared out' in space and acts like a true continuous charge distribution. This is the meaning of 'charge distribution' and 'charge density' used in chemistry and chemical bonding. An electron is represented by a wavefunction whose square is proportional to the probability of finding the electron at any point in space, and so the square of the wavefunction is proportional to the charge density of the electron at any point.
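In symbols, these three densities and the total charge they determine can be summarized as follows (a standard restatement of the definitions described in words in the section below, using the same notation λq, σq, ρq; the formulas themselves are not written out in the text):
\[
\lambda_q = \frac{dQ}{d\ell}, \qquad \sigma_q = \frac{dQ}{dS}, \qquad \rho_q = \frac{dQ}{dV},
\]
\[
Q = \int_C \lambda_q(\mathbf{r})\, d\ell, \qquad Q = \int_S \sigma_q(\mathbf{r})\, dS, \qquad Q = \int_V \rho_q(\mathbf{r})\, dV,
\]
for charge distributed along a curve C, over a surface S, or throughout a volume V, respectively.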
Definitions Continuous charges Following are the definitions for continuous charge distributions. The linear charge density is the ratio of an infinitesimal electric charge dQ (SI unit: C) to an infinitesimal line element, similarly the surface charge density uses a surface area element dS and the volume charge density uses a volume element dV Integrating the definitions gives the total charge Q of a region according to line integral of the linear charge density λq(r) over a line or 1d curve C, similarly a surface integral of the surface charge density σq(r) over a surface S, and a volume integral of the volume charge density ρq(r) over a volume V, where the subscript q is to clarify that the density is for electric charge, not other densities like mass density, number density, probability density, and prevent conflict with the many other uses of λ, σ, ρ in electromagnetism for wavelength, electrical resistivity and conductivity. Within the context of electromagnetism, the subscripts are usually dropped for simplicity: λ, σ, ρ. Other notations may include: ρℓ, ρs, ρv, ρL, ρS, ρV etc. The total charge divided by the length, surface area, or volume will be the average charge densities: Free, bound and total charge In dielectric materials, the total charge of an object can be separated into "free" and "bound" charges. Bound charges set up electric dipoles in response to an applied electric field E, and polarize other nearby dipoles tending to line them up, the net accumulation of charge from the orientation of the dipoles is the bound charge. They are called bound because they cannot be removed: in the dielectric material the charges are the electrons bound to the nuclei. Free charges are the excess charges which can move into electrostatic equilibrium, i.e. when the charges are not moving and the resultant electric field is independent of time, or constitute electric currents. Total charge densities In terms of volume charge densities, the total charge density is: as for surface charge densities: where subscripts "f" and "b" denote "free" and "bound" respectively. Bound charge The bound surface charge is the charge piled up at the surface of the dielectric, given by the dipole moment perpendicular to the surface: where s is the separation between the point charges constituting the dipole, is the electric dipole moment, is the unit normal vector to the surface. Taking infinitesimals: and dividing by the differential surface element dS gives the bound surface charge density: where P is the polarization density, i.e. density of electric dipole moments within the material, and dV is the differential volume element. Using the divergence theorem, the bound volume charge density within the material is hence: The negative sign arises due to the opposite signs on the charges in the dipoles, one end is within the volume of the object, the other at the surface. A more rigorous derivation is given below. Free charge density The free charge density serves as a useful simplification in Gauss's law for electricity; the volume integral of it is the free charge enclosed in a charged object - equal to the net flux of the electric displacement field D emerging from the object: See Maxwell's equations and constitutive relation for more details. Homogeneous charge density For the special case of a homogeneous charge density ρ0, independent of position i.e. 
constant throughout the region of the material, the equation simplifies to: Proof Start with the definition of a continuous volume charge density: Then, by definition of homogeneity, ρq(r) is a constant denoted by ρq, 0 (to differ between the constant and non-constant densities), and so by the properties of an integral can be pulled outside of the integral resulting in: so, The equivalent proofs for linear charge density and surface charge density follow the same arguments as above. Discrete charges For a single point charge q at position r0 inside a region of 3d space R, like an electron, the volume charge density can be expressed by the Dirac delta function: where r is the position to calculate the charge. As always, the integral of the charge density over a region of space is the charge contained in that region. The delta function has the shifting property for any function f: so the delta function ensures that when the charge density is integrated over R, the total charge in R is q: This can be extended to N discrete point-like charge carriers. The charge density of the system at a point r is a sum of the charge densities for each charge qi at position ri, where : The delta function for each charge qi in the sum, δ(r − ri), ensures the integral of charge density over R returns the total charge in R: If all charge carriers have the same charge q (for electrons q = −e, the electron charge) the charge density can be expressed through the number of charge carriers per unit volume, n(r), by Similar equations are used for the linear and surface charge densities. Charge density in special relativity In special relativity, the length of a segment of wire depends on velocity of observer because of length contraction, so charge density will also depend on velocity. Anthony French has described how the magnetic field force of a current-bearing wire arises from this relative charge density. He used (p 260) a Minkowski diagram to show "how a neutral current-bearing wire appears to carry a net charge density as observed in a moving frame." When a charge density is measured in a moving frame of reference it is called proper charge density. It turns out the charge density ρ and current density J transform together as a four-current vector under Lorentz transformations. Charge density in quantum mechanics In quantum mechanics, charge density ρq is related to wavefunction ψ(r) by the equationwhere q is the charge of the particle and is the probability density function i.e. probability per unit volume of a particle located at r. When the wavefunction is normalized - the average charge in the region r ∈ R iswhere d3r is the integration measure over 3d position space. For system of identical fermions, the number density is given as sum of probability density of each particle in : Using symmetrization condition:where is considered as the charge density. The potential energy of a system is written as:The electron-electron repulsion energy is thus derived under these conditions to be:Note that this is excluding the exchange energy of the system, which is a purely quantum mechanical phenomenon, has to be calculated separately. Then, the energy is given using Hartree-Fock method as: Where I is the kinetic and potential energy of electrons due to positive charges, J is the electron electron interaction energy and K is the exchange energy of electrons. Application The charge density appears in the continuity equation for electric current, and also in Maxwell's Equations. 
It is the principal source term of the electromagnetic field; when the charge distribution moves, this corresponds to a current density. The charge density of molecules impacts chemical and separation processes. For example, charge density influences metal-metal bonding and hydrogen bonding. For separation processes such as nanofiltration, the charge density of ions influences their rejection by the membrane.
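For reference, the continuity equation and the field equations mentioned above take the following standard forms (added here for clarity; they do not appear explicitly in the text):
\[
\frac{\partial \rho}{\partial t} + \nabla \cdot \mathbf{J} = 0, \qquad \nabla \cdot \mathbf{E} = \frac{\rho}{\varepsilon_0}, \qquad \nabla \cdot \mathbf{D} = \rho_f,
\]
where J is the current density, E the electric field, D the electric displacement field, ρ the total charge density and ρf the free charge density.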
Physical sciences
Electrostatics
Physics
3136140
https://en.wikipedia.org/wiki/Mass%20flow%20%28life%20sciences%29
Mass flow (life sciences)
In the life sciences, mass flow, also known as mass transfer and bulk flow, is the movement of fluids down a pressure or temperature gradient. As such, mass flow is a subject of study in both fluid dynamics and biology. Examples of mass flow include blood circulation and transport of water in vascular plant tissues. Mass flow is not to be confused with diffusion, which depends on concentration gradients within a medium rather than pressure gradients of the medium itself. Plant biology In general, bulk flow in plant biology typically refers to the movement of water from the soil up through the plant to the leaf tissue through xylem, but can also be applied to the transport of larger solutes (e.g. sucrose) through the phloem. Xylem According to cohesion-tension theory, water transport in xylem relies upon the cohesion of water molecules to each other and adhesion to the vessel's wall via hydrogen bonding combined with the high water pressure of the plant's substrate and low pressure of the extreme tissues (usually leaves). As in blood circulation in animals, (gas) embolisms may form within one or more xylem vessels of a plant. If an air bubble forms, the upward flow of xylem water will stop because the pressure difference in the vessel cannot be transmitted. Once these embolisms are nucleated, the remaining water in the capillaries begins to turn to water vapor. When these bubbles form rapidly by cavitation, the "snapping" sound can be used to measure the rate of cavitation within the plant. Plants do, however, have physiological mechanisms to reestablish the capillary action within their cells. Phloem Solute flow is driven by a difference in hydraulic pressure created from the unloading of solutes in the sink tissues. That is, as solutes are off-loaded into sink cells (by active or passive transport), the density of the phloem liquid decreases locally, creating a pressure gradient.
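As a quantitative illustration (an idealization that is not part of the article above), pressure-driven bulk flow through a single cylindrical conduit, such as an idealized xylem vessel, is often modelled with the Hagen–Poiseuille relation:
\[
Q = \frac{\pi r^4 \, \Delta P}{8 \mu L},
\]
where Q is the volumetric flow rate, r the conduit radius, ΔP the pressure difference driving the flow, μ the dynamic viscosity of the fluid, and L the conduit length. The strong dependence on r⁴ is one reason wider conduits carry disproportionately more water, and why an embolism that blocks a vessel effectively removes its contribution to bulk flow; real xylem conduits deviate from this idealization because of pit membranes and irregular geometry.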
Physical sciences
Fluid mechanics
Physics
3136448
https://en.wikipedia.org/wiki/Rough-skinned%20newt
Rough-skinned newt
The rough-skinned newt or roughskin newt (Taricha granulosa) is a North American newt known for the strong toxin exuded from its skin. Appearance A stocky newt with rounded snout, it ranges from light brown to olive or brownish-black on top, with the underside, including the head, legs, and tail, a contrasting orange to yellow. The skin is granular, but males are smooth-skinned during breeding season. They measure in snout-to-vent length, and overall. They are similar to the California newt (Taricha torosa) but differ in having smaller eyes, yellow irises, V-shaped tooth patterns, and uniformly dark eyelids. Males can be distinguished from females during breeding season by large swollen vent lobes and cornified toe pads. Distribution and subspecies Habitats of rough-skinned newts are found throughout the Pacific Northwest. Their range extends south to Santa Cruz, California, and north to Alaska. They are uncommon east of the Cascade Mountains, though occasionally are found (and considered exotic, and possibly artificially introduced) as far as Montana. One isolated population lives in several ponds just north of Moscow, Idaho, and was most likely introduced. A number of subspecies have been defined based on local variants, but only two subspecies have wider recognition: Taricha granulosa – rough-skinned newt Taricha granulosa mazamae – Crater Lake newt (Crater Lake, Oregon) It is now believed that the Taricha granulosa mazamae subspecies is no longer valid, as specimens that look similar to T.g.m. have been found in areas of Alaska as well. Toxicity Many newts produce toxins from skin glands as a defense against predation, but the toxins of the genus Taricha are particularly potent. An acrid smell radiates from the newt, which acts as a warning for animals to stay away. Toxicity is generally experienced only if the newt is ingested, although some individuals have been reported to experience skin irritation after dermal contact, particularly if the eyes are touched after handling the animal without washing hands. In 1979, a 29-year-old man from Oregon died after ingesting a rough-skinned newt. Tetrodotoxin binding The newt produces a neurotoxin called tetrodotoxin (TTX), which in this species was formerly called "tarichatoxin". It is the same toxin found in pufferfish and a number of other marine animals. This toxin targets voltage-gated sodium channels via binding to distinct but allosterically coupled sites. Because TTX is much larger than a sodium ion, it acts like a cork in a bottle and prevents the flow of sodium. The reverse binding to sodium channels in nerve cells blocks electrical signals necessary for conducting nerve impulses. This inhibition of firing action potentials has the effect of inducing paralysis and death by asphyxiation. Toxin resistance and predation Throughout much of the newt's range, the common garter snake (Thamnophis sirtalis) has been observed to exhibit resistance to the tetrodotoxin produced in the newt's skin. While in principle the toxin binds to a tube-shaped protein that acts as a sodium channel in the snake's nerve cells, researchers have identified a genetic disposition in several snake populations where the protein is configured in such a way as to hamper or prevent binding of the toxin. In each of these populations, the snakes exhibit resistance to the toxin and successfully prey upon the newts. 
Successful predation of the rough-skinned newt by the common garter snake is made possible by the ability of individuals in a common garter snake population to gauge whether the newt's level of toxin is too high to feed on. T. sirtalis assays the toxin level of the rough-skinned newt by partially swallowing it, and then decides whether the level is manageable, either swallowing or releasing the newt. Toxin-resistant garter snakes are the only known animals today that can eat a rough-skinned newt and survive. Arms race In evolutionary theory, the relationship between the rough-skinned newt and the common garter snake is considered an example of co-evolution. The mutations in the snake's genes that conferred resistance to the toxin have resulted in a selective pressure that favors newts which produce more potent levels of toxin. Increases in the amount of newt toxin then apply a selective pressure favoring snakes with mutations conferring even greater resistance. This cycle of a predator and prey evolving in response to one another is sometimes termed an evolutionary arms race because the two species compete in developing adaptations and counter-adaptations against each other. This has resulted in the newts producing levels of toxin far in excess of what is needed to kill any other conceivable predator. Some newts secrete enough toxin to kill several adult humans. It appears that in some areas, the common garter snake has surpassed the newt in the evolutionary arms race by developing such a strong resistance to the toxin that the newt is unable to compete with its production of the toxin. Phylogenetic evidence indicates that elevated resistance to TTX has originated independently and only in certain species of garter snakes. The resistance has evolved in at least two unrelated species in the genus Thamnophis and at least twice within T. sirtalis. Toxin effect The toxin, when injected into animals, may not kill resistant animals; however, they are normally slowed down by its toxic effects. In snakes, individuals who showed some resistance tended to move more slowly after TTX injection, while those with less resistance became paralyzed. Newts are not immune to their own toxin; they only have a heightened resistance. The toxin involves a tradeoff for the newts themselves: each time they release the toxin, they inject themselves with a few milligrams. The TTX becomes concentrated in certain parts of the tissue after passing through cell membranes. As a result of tissue exposure to the toxin, newts have evolved a protection mechanism via a single amino acid substitution to the voltage-gated sodium channel normally affected by TTX. Puffer fishes show a similar amino acid sequence that allows them to survive their own toxin exposure. Predation on newts by T. sirtalis also shows evidence that tetrodotoxin may serve as protection of eggs by the mother. While TTX is mainly located in the glands of the skin, the rough-skinned newt, as well as some other amphibians, also possesses TTX in the ovaries and eggs. The higher the skin toxin levels were in the female, the higher the toxin level found in the egg. This is evidence that high toxin levels of the skin may, in fact, be under indirect selection. Since egg toxin levels would ultimately increase the offspring's survivability against predators such as the garter snake, egg toxin levels may be under direct selection by mates, which is detectable via skin toxin levels.
Predator avoidance The rough skinned newt uses a form of chemical based avoidance behavior to avoid being eaten by predators, mainly the common garter snake. The snakes, after swallowing, digesting, and metabolizing a rough-skinned newt, release a chemical signature. This stimulus can be detected by a nearby newt and trigger an avoidant response, which allows them to minimize predation risks. In this way, newts are able to differentiate whether a snake is resistant or sensitive to the toxin in order to avoid being preyed upon. However, newts do not avoid the corpses of a recently digested newt that has been left to decompose. This behavior is unlike salamanders that have been documented in avoiding other injured salamanders. Parasites Parasites include the trematode Halipegus occidualis, the adult form of which may infest the newt's esophagus and the anterior of its stomach.
Biology and health sciences
Salamanders and newts
Animals
3137635
https://en.wikipedia.org/wiki/Sumatran%20orangutan
Sumatran orangutan
The Sumatran orangutan (Pongo abelii) is one of the three species of orangutans. Critically endangered, and found only in the north of the Indonesian island of Sumatra, it is rarer than the Bornean orangutan but more common than the recently identified Tapanuli orangutan, also found in Sumatra. Its common name is based on two separate local words, orang 'people; person' and hutan 'forest', derived from Malay, and translates as 'person of the forest'. Description Male Sumatran orangutans grow to about tall and , while females are smaller, averaging and . Compared to the Bornean species, Sumatran orangutans are thinner and have longer faces; their hair is longer and has a paler red color. Evolution Fossil orangutans in Sumatra from the Pleistocene had similar diets to present day Sumatran orangutans, consisting mainly of soft fruit as evidenced by dental microwear. Behaviour and ecology Compared with the Bornean orangutan, the Sumatran orangutan tends to be more frugivorous and especially insectivorous. Preferred fruits include figs and jackfruits. It will also eat bird eggs and small vertebrates. Sumatran orangutans spend far less time feeding on the inner bark of trees. Wild Sumatran orangutans in the Suaq Balimbing swamp have been observed using tools. An orangutan will break off a tree branch that is about a foot long, snap off the twigs and fray one end with its teeth. The orangutan will use the stick to dig in tree holes for termites. They will also use the stick to poke a bee's nest wall, move it around and catch the honey. In addition, orangutans use tools to eat fruit. When the fruit of the Neesia tree ripens, its hard, ridged husk softens until it falls open. Inside are seeds that the orangutans enjoy eating, but they are surrounded by fiberglass-like hairs that are painful if eaten. Tools are created differently for different uses. Sticks are often made longer or shorter depending on whether they will be used for insects or fruits. If a particular tool proves useful, the orangutan will often save it. Over time, they will collect entire "toolboxes". A Neesia-eating orangutan will select a five-inch stick, strip off its bark, and then carefully collect the hairs with it. Once the fruit is safe, the ape will eat the seeds using the stick or its fingers. Although similar swamps can be found in Borneo, wild Bornean orangutans have not been seen using these types of tools. NHNZ filmed the Sumatran orangutan for its show Wild Asia: In the Realm of the Red Ape; it showed one of them using a simple tool, a twig, to pry food from difficult places. There is also a sequence of an animal using a large leaf as an umbrella in a tropical rainstorm. As well as being used as tools, tree branches are a means of transportation for the Sumatran orangutan. The orangutans are the heaviest mammals to travel by tree, which makes them particularly susceptible to the changes in arboreal compliance. To deal with this, their locomotion is characterized by slow movement, long contact times, and an impressively large array of locomotors postures. Orangutans have even been shown to utilize the compliance in vertical supports to lower the cost of locomotion by swaying trees back and forth and they possess unique strategies of locomotion, moving slowly and using multiple supports to limit oscillations in compliant branches, particularly at their tips. The Sumatran orangutan is also more arboreal than its Bornean cousin; this could be because of the presence of large predators, like the Sumatran tiger. 
It moves through the trees by quadrumanous locomotion and semibrachiation. As of 2017, the Sumatran orangutan species only has approximately 13,846 remaining members in its population. The World Wide Fund for Nature is thus carrying out attempts to protect the species by allowing them to reproduce in the safe environment of captivity. However, this comes at a risk to the Sumatran orangutan's native behaviors in the wild. While in captivity, the orangutans are at risk to the "Captivity Effect": animals held in captivity for a prolonged period will no longer know how to behave naturally in the wild. Being provided with water, food, and shelter while in captivity and lacking all the challenges of living in the wild, captive behaviour becomes more exploratory in nature. A repertoire of 64 different gestures in use by orangutans has been identified, 29 of which are thought to have a specific meaning that can be interpreted by other orangutans the majority of the time. Six intentional meanings were identified: Affiliate/Play, Stop action, Look at/Take object, Share food/object, Co-locomote and Move away. Sumatran orangutans do not use sounds as part of their communication, which includes a lack of audible danger signals, but rather base their communication on gestures alone. In 2024, a wild Sumatran orangutan, called Rakus, was observed applying a paste made from chewed Fibraurea tinctoria leaves to a facial wound, a treatment which appeared to heal the wound weeks later. Life cycle The Sumatran orangutan has five stages of life that are characterized by different physical and behavioral features. The first of these stages is infancy, which lasts from birth to around 2.5 years of age. The orangutan weighs between 2 and 6 kilograms. An infant is identified by light pigmented zones around the eyes and muzzle in contrast to darker pigmentation on the rest of the face as well as long hairs that protrude outward around the face. During this time, the infant is always carried by the mother during travel, it is highly dependent on the mother for food, and also sleeps in the mother's nest. The next stage is called juvenilehood and takes place between 2.5 and 5 years of age. The orangutan weighs between 6 and 15 kilograms, and does not look dramatically different from an infant. Although it is still mainly carried by the mother, a juvenile will often play with peers and make small exploratory trips within the vision of the mother. Towards the end of this stage, the orangutan will stop sleeping in the mother's nest and will build its own nest nearby. From the ages of 5 to 8 years of age, the orangutan is in an adolescent stage of life. The orangutan weighs around 15–30 kilograms. The light patches on the face start to disappear, and eventually the face becomes completely dark. During this time, orangutans still have constant contact with their mothers, yet they develop a stronger relationship with peers while playing in groups. They are still young and act with caution around unfamiliar adults, especially males. At 8 years of age, female orangutans are considered fully developed and begin to have offspring of their own. Males, however, enter a stage called sub-adulthood. This stage lasts from 8 to around 13 or 15 years of age, and the orangutans weigh around 30 to 50 kilograms. Their faces are completely dark, and they begin to develop cheek flanges. Their beard starts to emerge, while the hair around their face shortens, and instead of pointing outwards, the face flattens along the skull. 
This stage marks sexual maturity in males, yet these orangutans are still socially undeveloped and will still avoid contact with adult males. Finally, male Sumatran orangutans reach adulthood at 13 to 15 years of age. They are extremely large animals, weighing between 50 and 90 kilograms, roughly the weight of a fully grown human. They have a fully grown beard, fully developed cheek callosities, and long hair. These orangutans have reached full sexual and social maturity and now only travel alone. Female Sumatran orangutans typically live 44–53 years in the wild, while males have a slightly longer lifespan of 47–58 years. Females are able to give birth up to 53 years of age, based on studies of menopausal cycles. Both males and females are usually considered healthy even at the end of their lifespans and can be identified as such by the regular abundance of hair growth and robust cheek pads. The Sumatran orangutan is more social than its Bornean counterpart; groups gather to feed on the mass amounts of fruit on fig trees. The Sumatran orangutan community is best described as loose, not showing social or spatial exclusivity. Groups generally consist of female clusters and a preferred male mate. However, adult males generally avoid contact with other adult males. Subadult males will try to mate with any female, although mostly unsuccessfully, since mature females are easily capable of fending them off. Mature females prefer to mate with mature males. Usually, there is a specific male in a group that mature females will exhibit preference for. Male Sumatran orangutans sometimes have a delay of many years in the development of secondary sexual characteristics, such as cheek flanges and muscle mass. Males exhibit bimaturism, whereby fully flanged adult males and the smaller unflanged males are both capable of reproducing, but employ differing mating strategies to do so. The average interbirth rates for the Sumatran orangutan is 9.3 years, the longest reported among the great apes, including the Bornean orangutan. Infant orangutans will stay close to their mothers for up to three years. Even after that, the young will still associate with their mothers. Both the Sumatran and Bornean orangutans are likely to live several decades; estimated longevity is more than 50 years. The average age of the first reproduction of male P. abelii is around 15.4 years old. There is no indication of menopause. Nonja, thought to be the world's oldest orangutan in captivity or the wild at the time of her death, died at the Miami MetroZoo at the age of 55. Puan, an orangutan at Perth Zoo, is believed to have been 62 years old at the time of her death, making her the oldest recorded orangutan. The current oldest orangutan in the world is believed to be Bella, a female orangutan at the Hagenbeck Zoo, who is 61 years of age. Diet Sumatran orangutans are primarily frugivores, favoring fruits consisting of a large seed and surrounded by a fleshy substance, such as durians, lychees, jackfruit, breadfruit, and fig fruits. Insects are also a huge part of the orangutan's diet; the most consumed types are ants, predominantly of the genus Camponotus (at least four species indet.). Their main diet can be broken up into five categories: fruits, insects, leaf material, bark and other miscellaneous food items. Studies have shown that orangutans in the Ketambe area in Indonesia ate over 92 different kinds of fruit, 13 different kinds of leaves, 22 sorts of other vegetable material such as top-sprouts, and pseudo-bulbs of orchids. 
Insects included in the diet are numbered at least 17 different types. Occasionally soil from termite mounds were ingested in small quantities. When there is low ripe fruit availability, Sumatran orangutans will eat the meat of the slow loris, a nocturnal primate. Water consumption for the orangutans was ingested from natural bowls created in the trees they lived around. They even drank water from the hair on their arms when rainfall was heavy. Meat-eating Meat-eating happens rarely in Sumatran orangutan, and orangutans do not show a male bias in meat-eating. Research in the Ketambe area reported cases of meat-eating in wild Sumatran orangutans, of which nine cases were of orangutans eating slow lorises. The research shows, in the most recent three cases of slow lorises eaten by Sumatran orangutan, a maximum mean feeding rate of the adult orangutan for an entire adult male slow loris is 160.9 g/h and, of the infant, 142.4 g/h. No cases have been reported during mast years, which suggests orangutans take meat as a fallback for the seasonal shortage of fruits; preying on slow loris occurs more often in periods of low fruit availability. Similar to most primate species, orangutans appear to only share meat between mother and infants. Genomics Orangutans have 48 chromosomes. The Sumatran orangutan genome was sequenced in , based on a captive female named Susie. Following humans and chimpanzees, the Sumatran orangutan has become the third extant hominid species to have its genome sequenced. The researchers also published less complete copies from ten wild orangutans, five from Borneo and five from Sumatra. The genetic diversity was found to be lower in Bornean orangutans (Pongo pygmaeus) than in Sumatran ones (Pongo abelii), despite the fact that Borneo is home to six or seven times as many orangutans as Sumatra. The comparison has shown these two species diverged around 400,000 years ago, more recently than was previously thought. The orangutan genome also has fewer rearrangements than the chimpanzee/human lineage. Conservation Threats Sumatrans encounter threats such as logging (both legal and illegal), wholesale conversion of forest to agricultural land and oil palm plantations, and fragmentation by roads. Oil companies use a method of deforestation to re-use land for palm oil. This land is taken from the forest in which Sumatran orangutans live. An assessment of forest loss in the 1990s concluded that forests supporting at least 1,000 orangutans were lost each year within the Leuser Ecosystem alone. As of 2017, approximately 82.5% of the Sumatran orangutan population was strictly confined to the northernmost tip of the island, in the Aceh Province. Orangutans are rarely, if ever, found south of the Simpang Kanan River on Sumatra's west side or south of the Asahan River on the east side. The Pakpak Barat population in particular is the only Sumatran population predicted to be able to sustain orangutans in the long run, given the current effects of habitat displacement and human impact. While poaching generally is not a huge problem for the Sumatrans, occasional local hunting does decrease the population size. They have been hunted in the Northern Sumatra in the past as targets for food; although deliberate attempts to hunt the Sumatrans are rare nowadays, locals such as the Batak people are known to eat almost all vertebrates in their area. Additionally, the Sumatrans are treated as pests by Sumatran farmers, becoming targets of elimination if they are seen damaging or stealing crops. 
Commercial hunting of both dead and live specimens has also been recorded throughout the 20th century, driven by demand from European and North American zoos and institutions. Sumatran orangutans have developed a highly functioning cardiovascular system; however, along with the development of greatly enlarged air sacs, air sacculitis has become more prevalent among orangutans in this species. Air sacculitis is similar to streptococcal infections such as strep throat in Homo sapiens. The bacterial infection is becoming increasingly common in captive orangutans because they are exposed to the human strain of Streptococcus in captivity. Ordinarily, both strains are treated and cured with antibiotics along with rest. In 2014, however, a Sumatran orangutan that had spent ten years in captivity became the first of its species known to die from Streptococcus anginosus. This remains the only known case, but it raises the question of why the standard human treatment for Streptococcus was ineffective in this instance. Conservation status The Sumatran orangutan is endemic to the north of Sumatra. In the wild, Sumatran orangutans only survive in the province of Nanggroe Aceh Darussalam (NAD), the northernmost tip of the island. The species was once more widespread, being found farther to the south in the 19th century, for example in Jambi and Padang. There are small populations in the North Sumatra province along the border with NAD, particularly in the Lake Toba forests. A survey in the Lake Toba region found only two inhabited areas, Bukit Lawang (defined as the animal sanctuary) and Gunung Leuser National Park. Bukit Lawang is a jungle village, northwest of Medan, situated at the eastern side of Gunung Leuser National Park. An orangutan sanctuary was set up here by a Swiss organisation in the 1970s to attempt to rehabilitate orangutans captured from the logging industry. Rangers were trained to teach the orangutans vital jungle skills so that they could reintegrate into the forest, and supplementary food was provided from a feeding platform. Within the last few years, however, supplementary feeding has ceased: the rehabilitation program has been deemed a success, the orangutans have been fully rehabilitated, and the remaining jungle is now at saturation point, so the sanctuary no longer accepts new orphaned orangutans. The species has been assessed as critically endangered on the IUCN Red List since 2000. From 2000 to 2008, it was considered one of "The World's 25 Most Endangered Primates." A survey published in March 2016 estimates a population of 14,613 Sumatran orangutans in the wild, doubling previous population estimates. A survey in 2004 estimated that around 7,300 Sumatran orangutans still lived in the wild. The same study estimated the total area occupied by Sumatran orangutans, of which only a portion harbors permanent populations. Some of them are being protected in five areas in Gunung Leuser National Park; others live in unprotected areas: northwest and northeast Aceh block, West Batang Toru river, East Sarulla and Sidiangkat. A successful breeding program has been established in Bukit Tiga Puluh National Park in Jambi and Riau provinces. Two strategies currently being considered to conserve this species are (1) rehabilitation and reintroduction of ex-captive or displaced individuals and (2) protection of their forest habitat by preventing threats such as deforestation and hunting.
The former was determined to be more cost efficient for maintaining the wild orangutan populations, but comes with a longer time scale of 10–20 years. The latter approach has better prospects for ensuring the long-term stability of populations. This type of habitat conservation approach has been pursued by the World Wide Fund for Nature, which joined forces with several other organizations to stop the clearing of the largest part of the remaining natural forest close to Bukit Tigapuluh National Park. In addition to the above extant wild populations, a new population is being established in the Bukit Tigapuluh National Park (Jambi and Riau Provinces) via the re-introduction of confiscated illegal pets. This population currently numbers around 70 individuals and is reproducing. However, it has been concluded that forest conservation costs twelve times less than reintroducing orangutans into the wild and conserves more biological diversity. Orangutans have large home ranges and low population densities, which complicates conservation efforts. Population densities depend to a large degree on the abundance of fruits with soft pulp. Sumatran orangutans commute seasonally between lowland, intermediate, and highland regions, following fruit availability. Undisturbed forests with a broader altitudinal range can thus sustain larger orangutan populations; conversely, the fragmentation and extensive clearance of forest ranges break up this seasonal movement. Sumatra currently has one of the highest deforestation rates in the world.
Biology and health sciences
Apes
Animals
13655986
https://en.wikipedia.org/wiki/Problem%20statement
Problem statement
A problem statement is a description of an issue to be addressed, or a condition to be improved upon. It identifies the gap between the current (problem) state and the desired (goal) state. The first condition of solving a problem is understanding the problem, which can be done by way of a problem statement. Problem statements are used by most businesses and organizations to execute process improvement projects. Purpose The main purpose of a problem statement is to identify and explain the problem. Another function of the problem statement is as a communication device. Before the project begins, stakeholders verify that the problem and goals are accurately described in the problem statement; once this is approved, the project can proceed. This also helps define the project scope. The problem statement is referenced throughout the project to establish focus within the project team and verify that they stay on track. At the end of the project, it is revisited to confirm that the solution indeed solves the problem. The problem statement does not define the solution or the methods of reaching the solution; it only recognizes the gap between the problem and the goal. Writing There are several basic elements that can be built into every problem statement. The problem statement should focus on the end user, and the statement should not be too broad or too narrow. Problem statements usually follow a format. While there are several options, the following is a template often used in business analysis. Ideal: The desired state of the process or product. Reality: The current state of the process or product. Consequences: The impacts on the business if the problem is not fixed or improved upon. Proposal: Potential solutions.
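To make the four-part template concrete, the following is a minimal illustrative sketch in Python; the class, field names, and the worked example wording are invented for demonstration and are not part of the source template beyond its four labels.

```python
from dataclasses import dataclass


@dataclass
class ProblemStatement:
    """Four-part problem statement following the Ideal/Reality/Consequences/Proposal template."""
    ideal: str         # desired state of the process or product
    reality: str       # current state of the process or product
    consequences: str  # business impact if the problem is not fixed or improved upon
    proposal: str      # potential solutions (not the chosen solution or method)

    def render(self) -> str:
        # Assemble the statement in the order the template prescribes.
        return (
            f"Ideal: {self.ideal}\n"
            f"Reality: {self.reality}\n"
            f"Consequences: {self.consequences}\n"
            f"Proposal: {self.proposal}"
        )


# Hypothetical example for illustration only.
print(ProblemStatement(
    ideal="Customer orders ship within 24 hours.",
    reality="Orders currently take an average of 72 hours to ship.",
    consequences="Delayed shipments increase cancellations and support costs.",
    proposal="Review the warehouse picking workflow and staffing levels.",
).render())
```

Note that, consistent with the description above, the sketch records potential solutions but does not prescribe the solution itself; the statement only captures the gap between the current and desired states.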
Technology
Basics
null
16391238
https://en.wikipedia.org/wiki/Siliceous%20ooze
Siliceous ooze
Siliceous ooze is a type of biogenic pelagic sediment located on the deep ocean floor. Siliceous oozes are the least common of the deep sea sediments, and make up approximately 15% of the ocean floor. Oozes are defined as sediments which contain at least 30% skeletal remains of pelagic microorganisms. Siliceous oozes are largely composed of the silica-based skeletons of microscopic marine organisms such as diatoms and radiolarians. Other components of siliceous oozes near continental margins may include terrestrially derived silica particles and sponge spicules. Siliceous oozes are composed of skeletons made from opal silica (SiO2·nH2O), as opposed to calcareous oozes, which are made from the calcium carbonate (CaCO3) skeletons of organisms such as coccolithophores. Silica (Si) is a bioessential element and is efficiently recycled in the marine environment through the silica cycle. Distance from land masses, water depth and ocean fertility are all factors that affect the opal silica content in seawater and the presence of siliceous oozes. Formation Biological uptake of marine silica Siliceous marine organisms, such as diatoms and radiolarians, use silica to form skeletons through a process known as biomineralization. Diatoms and radiolarians have evolved to take up silica in the form of silicic acid, Si(OH)4. Once an organism has sequestered Si(OH)4 molecules in its cytoplasm, the molecules are transported to silica deposition vesicles where they are transformed into opal silica (B-SiO2). Diatoms and radiolarians have specialized proteins called silicon transporters that prevent mineralization during the sequestration and transportation of silicic acid within the organism. The chemical reaction for biological uptake of silicic acid is: H4SiO4(aq) ⇌ SiO2·nH2O(s) + (2−n)H2O(l) Opal silica saturation state The opal silica saturation state increases with depth in the ocean due to the dissolution of sinking opal particles produced in surface ocean waters, but still remains low enough that the reaction to form biogenic opal silica remains thermodynamically unfavorable. Despite the unfavorable conditions, organisms can use dissolved silicic acid to make opal silica shells through biologically controlled biomineralization. The amount of opal silica that makes it to the seafloor is determined by the rates of sinking and dissolution, and by water column depth. Export of silica to the deep ocean The dissolution rate of sinking opal silica (B-SiO2) in the water column affects the formation of siliceous ooze on the ocean floor. The rate of dissolution of silica depends on the saturation state of opal silica in the water column and on the re-packaging of opal silica particles within larger particles from the surface ocean. Re-packaging is the formation (and sometimes re-formation) of solid organic matter (usually fecal pellets) around opal silica. The organic matter protects against the immediate dissolution of opal silica into silicic acid, which allows for increased sedimentation on the seafloor. The opal compensation depth, similar to the carbonate compensation depth, occurs at approximately 6000 meters. Below this depth, there is greater dissolution of opal silica into silicic acid than formation of opal silica from silicic acid. Only four percent of the opal silica produced in the surface ocean will, on average, be deposited on the seafloor, while the remaining 96% is recycled in the water column. Accumulation rates Siliceous oozes accumulate over long timescales.
In the open ocean, siliceous ooze accumulates at a rate of approximately 0.01 mol Si m−2 yr−1. The fastest accumulation rates of siliceous ooze occur in the deep waters of the Southern Ocean (0.1 mol Si m−2 yr−1) where biogenic silica production and export is greatest.  The diatom and radiolarian skeletons that make up Southern Ocean oozes can take 20 to 50 years to sink to the sea floor. Siliceous particles may sink faster if they are encased in the fecal pellets of larger organisms.  Once deposited, silica continues to dissolve and cycle, delaying long term burial of particles until a depth of 10–20 cm in the sediment layer is reached. Marine chert formation When opal silica accumulates faster than it dissolves, it is buried and can provide a diagenetic environment for marine chert formation.  The processes leading to chert formation have been observed in the Southern Ocean, where siliceous ooze accumulation is the fastest.  Chert formation however can take tens of millions of years. Skeleton fragments from siliceous organisms are subject to recrystallization and cementation. Chert is the main fate of buried siliceous ooze and permanently removes silica from the oceanic silica cycle. Geographic locations Siliceous oozes form in upwelling areas that provide valuable nutrients for the growth of siliceous organisms living in oceanic surface waters. A notable example is in the Southern ocean, where the consistent upwelling of Indian, Pacific, and Antarctic circumpolar deep water has resulted in a contiguous siliceous ooze that stretches around the globe. There is a band of siliceous ooze that is the result of enhanced equatorial upwelling in Pacific Ocean sediments below the North Equatorial Current. In the subpolar North Pacific, upwelling occurs along the eastern and western sides of the basin from the Alaska current and the Oyashio Current. Siliceous ooze is present along the seafloor in these subpolar regions. Ocean basin boundary currents, such as the Humboldt Current and the Somali Current, are examples of other upwelling currents that favor the formation of siliceous ooze. Siliceous ooze is usually categorized based upon its composition. Diatomaceous oozes are predominantly formed of diatom skeletons and are typically found along continental margins in higher latitudes. Diatomaceous oozes are present in the Southern Ocean and the North Pacific Ocean. Radiolarian oozes are made mostly of radiolarian skeletons and are located mainly in tropical equatorial and subtropical regions. Examples of radiolarian ooze are the oozes of the equatorial region, subtropical Pacific region, and the subtropical basin of the Indian Ocean. A small surface area of deep sea sediment is covered by radiolarian ooze in the equatorial East Atlantic basin. Role in the oceanic silica cycle Deep seafloor deposition in the form of ooze is the largest long-term sink of the oceanic silica cycle (6.3 ± 3.6 Tmol Si year−1). As noted above, this ooze is diagenetically transformed into lithospheric marine chert. This sink is roughly balanced by silicate weathering and river inputs of silicic acid into the ocean. Biogenic silica production in the photic zone is estimated to be 240 ± 40 Tmol si year −1.  Rapid dissolution in the surface removes roughly 135 Tmol opal Si year−1, converting it back to soluble silicic acid that can be used again for biomineralization. The remaining opal silica is exported to the deep ocean in sinking particles. 
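To put these fluxes and rates in more concrete terms, the back-of-the-envelope sketch below is illustrative only: the flux figures are those quoted above, while the opal grain density, sediment porosity, and the use of anhydrous SiO2 for the molar mass are assumed values chosen for demonstration rather than taken from the source.

```python
# Back-of-the-envelope illustration of the silica figures quoted above.
MOLAR_MASS_SIO2 = 60.08   # g/mol, anhydrous SiO2 (ignores the nH2O water content of opal)
OPAL_DENSITY = 2.1        # g/cm^3, assumed grain density of biogenic opal
POROSITY = 0.8            # assumed porosity of fresh siliceous ooze

# Silica budget: gross production minus surface dissolution gives the export flux.
production_tmol = 240.0            # Tmol Si per year produced in the photic zone
surface_dissolution_tmol = 135.0   # Tmol Si per year redissolved in surface waters
export_tmol = production_tmol - surface_dissolution_tmol
print(f"Opal exported to the deep ocean: ~{export_tmol:.0f} Tmol Si per year")

# Open-ocean accumulation: convert 0.01 mol Si m^-2 yr^-1 into an approximate thickness.
accumulation_mol = 0.01                              # mol Si per m^2 per year
mass_g = accumulation_mol * MOLAR_MASS_SIO2          # g of opal per m^2 per year
solid_volume_cm3 = mass_g / OPAL_DENSITY             # cm^3 of solid opal per m^2 per year
bulk_volume_cm3 = solid_volume_cm3 / (1 - POROSITY)  # include pore space
thickness_cm_per_yr = bulk_volume_cm3 / 1.0e4        # spread over 1 m^2 = 1e4 cm^2
thickness_mm_per_kyr = thickness_cm_per_yr * 10 * 1000
print(f"Implied accumulation: ~{thickness_mm_per_kyr:.1f} mm of ooze per thousand years")
```

On these assumptions, the quoted open-ocean rate corresponds to only on the order of a millimeter of ooze per thousand years, which is why siliceous oozes are described as accumulating over long timescales.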
In the deep ocean, another 26.2 Tmol Si year−1 is dissolved before being deposited to the sediments as opal silica. At the sediment–water interface, over 90% of the silica is recycled and upwelled for use again in the photic zone. The residence time on a biological timescale is estimated to be about 400 years, with each molecule of silica recycled 25 times before sediment burial. Siliceous oozes and carbon sequestration Diatoms are primary producers that convert carbon dioxide into organic carbon via photosynthesis and export organic carbon from the surface ocean to the deep sea via the biological pump. Diatoms can therefore be a significant sink for carbon dioxide in surface waters. Due to the relatively large size of diatoms (when compared to other phytoplankton), they are able to take up more total carbon dioxide. Additionally, diatoms do not release carbon dioxide into the environment during formation of their opal silicate shells. Phytoplankton that build calcium-carbonate shells (i.e. coccolithophores) release carbon dioxide as a byproduct during shell formation, making them a less efficient sink for carbon dioxide. The opal silicate skeletons enhance the sinking velocity of diatomaceous particles (i.e. carbon) from the surface ocean to the seafloor. Iron fertilization experiments Atmospheric carbon dioxide levels have been increasing exponentially since the Industrial Revolution, and researchers are exploring ways to mitigate atmospheric carbon dioxide levels by increasing the uptake of carbon dioxide in the surface ocean via photosynthesis. An increase in the uptake of carbon dioxide in the surface waters may lead to more carbon sequestration in the deep sea through the biological pump. The bloom dynamics of diatoms, their ballasting by opal silica, and their various nutrient requirements have made diatoms a focus for carbon sequestration experiments. Iron fertilization projects such as the SERIES iron-enrichment experiments have introduced iron into ocean basins to test whether this increases the rate of carbon dioxide uptake by diatoms and ultimately transfers that carbon to the deep ocean. Iron is a limiting nutrient for diatom photosynthesis in high-nutrient, low-chlorophyll areas of the ocean, so increasing the amount of available iron can lead to a subsequent increase in photosynthesis, sometimes resulting in a diatom bloom. This increase removes more carbon dioxide from the atmosphere. Although more carbon dioxide is taken up, the carbon sequestration rate in deep sea sediments is generally low. Most of the carbon dioxide taken up during the process of photosynthesis is recycled within the surface layer several times before making it to the deep ocean to be sequestered. Paleo-oozes Before siliceous organisms During the Precambrian, oceanic silica concentrations were an order of magnitude higher than in modern oceans. The evolution of biosilicification is thought to have emerged during this time period. Siliceous oozes formed once silica-sequestering organisms such as radiolarians and diatoms began to flourish in the surface waters. Evolution of siliceous organisms Radiolaria Fossil evidence suggests that radiolarians first emerged during the late Cambrian as free-floating shallow water organisms. They did not become prominent in the fossil record until the Ordovician. Radiolarians evolved in upwelling regions in areas of high primary productivity and are the oldest known organisms capable of shell secretion.
The remains of radiolarians are preserved in chert, a byproduct of siliceous ooze transformation. Major speciation events of radiolarians occurred during the Mesozoic. Many of those species are now extinct in the modern ocean. Scientists hypothesize that competition with diatoms for dissolved silica during the Cenozoic is the likely cause of the mass extinction of most radiolarian species. Diatoms The oldest well-preserved diatom fossils have been dated to the beginning of the Jurassic period. However, the molecular record suggests diatoms evolved at least 250 million years ago, during the Triassic. As new species of diatoms evolved and spread, oceanic silica levels began to decrease. Today, there are an estimated 100,000 species of diatoms, most of which are microscopic (2–200 μm). Some early diatoms were larger, and could be between 0.2 and 22 mm in diameter. The earliest diatoms were radial centrics and lived in shallow water close to shore. These early diatoms were adapted to live on the benthos, as their outer shells were heavy and prevented them from free-floating. Free-floating diatoms, known as bipolar and multipolar centrics, began evolving approximately 100 million years ago during the Cretaceous. Fossil diatoms are preserved in diatomite (also known as diatomaceous earth), which is one of the by-products of the transformation from ooze to rock. As diatomaceous particles began to sink to the ocean floor, carbon and silica were sequestered along continental margins. The carbon sequestered along continental margins has become the major petroleum reserves of today. Diatom evolution marks a time in Earth's geologic history of significant removal of carbon dioxide from the atmosphere and a simultaneous increase in atmospheric oxygen levels. How scientists use paleo-ooze Paleoceanographers study prehistoric oozes to learn about changes in the oceans over time. The sediment distribution and deposition patterns of oozes inform scientists about prehistoric areas of the oceans that exhibited prime conditions for the growth of siliceous organisms. Scientists examine paleo-ooze by taking cores of deep sea sediments. Sediment layers in these cores reveal the deposition patterns of the ocean over time. Scientists use paleo-oozes as tools to better infer the conditions of the paleo-oceans. Paleo-ooze accretion rates can be used to determine deep sea circulation, tectonic activity, and climate at a specific point in time. Oozes are also useful in determining the historical abundances of siliceous organisms. Burubaital Formation The Burubaital Formation, located in the West Balkhash region of Kazakhstan, is the oldest known abyssal biogenic deposit. The Burubaital Formation is primarily composed of chert which was formed over a period of 15 million years (late Cambrian–middle Ordovician). It is likely that these deposits were formed in an upwelling region at subequatorial latitudes. The Burubaital Formation is largely composed of radiolarites, as diatoms had yet to evolve at the time of its formation. The Burubaital deposits have led researchers to believe that radiolaria played a significant role in the late Cambrian silica cycle. The late Cambrian (497–485.4 mya) marks a time of transition for marine biodiversity and is the beginning of ooze accumulation on the seafloor. Distribution shifts during the Miocene A shift in the geographical distribution of siliceous oozes occurred during the Miocene.
Sixteen million years ago there was a gradual decline in siliceous ooze deposits in the North Atlantic and a concurrent rise in siliceous ooze deposits in the North Pacific. Scientists speculate that this regime shift may have been caused by the introduction of Nordic Sea Overflow Water, which contributed to the formation of North Atlantic Deep Water (NADW). The formation of Antarctic Bottom Water (AABW) occurred at approximately the same time as the formation of NADW. The formation of NADW and AABW dramatically transformed the ocean, and resulted in a spatial population shift of siliceous organisms. Paleocene plankton blooms The Cretaceous-Tertiary boundary was a time of global mass extinction, commonly referred to as the K-T mass extinction. While most organisms were disappearing, marine siliceous organisms were thriving in the early Paleocene seas. One such example occurred in the waters near Marlborough, New Zealand. Paleo-ooze deposits indicate that there was a rapid growth of both diatoms and radiolarians at this time. Scientists believe that this period of high biosiliceous productivity is linked to global climatic changes. This boom in siliceous plankton was greatest during the first one million years of the Tertiary period and is thought to have been fueled by enhanced upwelling in response to a cooling climate and increased nutrient cycling due to a change in sea level.
Physical sciences
Sedimentology
Earth science
16392927
https://en.wikipedia.org/wiki/Bubonic%20plague
Bubonic plague
Bubonic plague is one of three types of plague caused by the bacterium Yersinia pestis. One to seven days after exposure to the bacteria, flu-like symptoms develop. These symptoms include fever, headaches, and vomiting, as well as swollen and painful lymph nodes occurring in the area closest to where the bacteria entered the skin. Acral necrosis, the dark discoloration of skin, is another symptom. Occasionally, swollen lymph nodes, known as "buboes", may break open. The three types of plague are the result of the route of infection: bubonic plague, septicemic plague, and pneumonic plague. Bubonic plague is mainly spread by infected fleas from small animals. It may also result from exposure to the body fluids from a dead plague-infected animal. Mammals such as rabbits, hares, and some cat species are susceptible to bubonic plague, and typically die upon contraction. In the bubonic form of plague, the bacteria enter through the skin through a flea bite and travel via the lymphatic vessels to a lymph node, causing it to swell. Diagnosis is made by finding the bacteria in the blood, sputum, or fluid from lymph nodes. Prevention is through public health measures such as not handling dead animals in areas where plague is common. While vaccines against the plague have been developed, the World Health Organization recommends that only high-risk groups, such as certain laboratory personnel and health care workers, get inoculated. Several antibiotics are effective for treatment, including streptomycin, gentamicin, and doxycycline. Without treatment, plague results in the death of 30% to 90% of those infected. Death, if it occurs, is typically within 10 days. With treatment, the risk of death is around 10%. Globally, between 2010 and 2015, there were 3,248 documented cases, which resulted in 584 deaths. The countries with the greatest number of cases are the Democratic Republic of the Congo, Madagascar, and Peru. The plague is considered the likely cause of the Black Death that swept through Asia, Europe, and Africa in the 14th century and killed an estimated 50 million people, including about 25% to 60% of the European population. Because the plague killed so many of the working population, wages rose due to the demand for labor. Some historians see this as a turning point in European economic development. The disease is also considered to have been responsible for the Plague of Justinian, originating in the Eastern Roman Empire in the 6th century CE, as well as the third epidemic, affecting China, Mongolia, and India, originating in the Yunnan Province in 1855. The term bubonic is derived from the Greek word βουβών, meaning "groin". Cause The bubonic plague is an infection of the lymphatic system, usually resulting from the bite of an infected flea, Xenopsylla cheopis (the Oriental rat flea). Several flea species carried the bubonic plague, such as Pulex irritans (the human flea), Xenopsylla cheopis, and Ceratophyllus fasciatus. Xenopsylla cheopis was the most effective flea species for transmission. The flea is parasitic on house and field rats and seeks out other prey when its rodent host dies. Rats were an amplifying factor for bubonic plague due to their common association with humans as well as the nature of their blood, which allows the rat to withstand high concentrations of the plague bacterium. The bacteria form aggregates in the gut of infected fleas, and this results in the flea regurgitating ingested blood, which is now infected, into the bite site of a rodent or human host.
Once established, the bacteria rapidly spread to the lymph nodes of the host and multiply. The fleas that transmit the disease only directly infect humans when the rat population in the area is wiped out by a mass infection. Furthermore, in areas with a large population of rats, the animals can harbor low levels of the plague infection without causing human outbreaks. With no new rat inputs being added to the population from other areas, the infection spreads to humans only in very rare cases of overcrowding. Signs and symptoms After being transmitted via the bite of an infected flea, the Y. pestis bacteria become localized in an inflamed lymph node, where they begin to colonize and reproduce. Infected lymph nodes develop hemorrhages, which result in the death of tissue. Y. pestis bacilli can resist phagocytosis and even reproduce inside phagocytes and kill them. As the disease progresses, the lymph nodes can hemorrhage and become swollen and necrotic. Bubonic plague can progress to lethal septicemic plague in some cases. The plague is also known to spread to the lungs and become the disease known as the pneumonic plague. Symptoms appear two to seven days after the bite and include: chills; general ill feeling (malaise); high fever; muscle cramps; seizures; a smooth, painful lymph gland swelling called a bubo, commonly found in the groin but also occurring in the armpits or neck, most often near the site of the initial infection (bite or scratch), with pain sometimes occurring in the area before the swelling appears; and gangrene of the extremities such as the toes, fingers, lips, and tip of the nose. The best-known symptom of bubonic plague is one or more infected, enlarged, and painful lymph nodes, known as buboes. Buboes associated with the bubonic plague are commonly found in the armpits, upper femoral area, groin, and neck region. These buboes will grow and become more painful over time, often to the point of bursting. Symptoms include heavy breathing, continuous vomiting of blood (hematemesis), aching limbs, coughing, and extreme pain caused by the decay or decomposition of the skin while the person is still alive. Additional symptoms include extreme fatigue, gastrointestinal problems, spleen inflammation, lenticulae (black dots scattered throughout the body), delirium, coma, organ failure, and death. Organ failure is a result of the bacteria infecting organs through the bloodstream. Other forms of the disease include septicemic plague and pneumonic plague, in which the bacterium reproduces in the person's blood and lungs respectively. Diagnosis Laboratory testing is required in order to diagnose and confirm plague. Ideally, confirmation is through the identification of Y. pestis in culture from a patient sample. Confirmation of infection can be done by examining serum taken during the early and late stages of infection. To quickly screen for the Y. pestis antigen in patients, rapid dipstick tests have been developed for field use. Samples taken for testing include: fluid from buboes (a sample can be drawn with a needle from the swollen lymph nodes characteristic of bubonic plague); blood (blood cultures test blood samples for bacteria to find the source of infection); and the lungs (spirometry tests are used to screen the lungs for diseases that affect the airways, and chest X-rays of the lungs are also an effective method of diagnosis). Prevention Bubonic plague outbreaks are controlled by pest control and modern sanitation techniques.
This disease uses fleas commonly found on rats as a vector to jump from animals to humans. The mortality rate is highest in the summer and early fall. The successful control of rat populations in dense urban areas is essential to outbreak prevention. One example is the use of a machine called the Sulfurozador, which delivered sulphur dioxide to eradicate the pests that spread the bubonic plague in Buenos Aires, Argentina, during the early 20th century. Targeted chemoprophylaxis, sanitation, and vector control also played a role in controlling the 2003 Oran outbreak of the bubonic plague. Another means of prevention in large European cities was a city-wide quarantine, intended not only to limit interaction with people who were infected, but also to limit interaction with infected rats. Treatment Several classes of antibiotic are effective in treating bubonic plague. These include aminoglycosides such as streptomycin and gentamicin, tetracyclines (especially doxycycline), and the fluoroquinolone ciprofloxacin. Mortality associated with treated cases of bubonic plague is about 1–15%, compared to a mortality of 40–60% in untreated cases. People potentially infected with the plague need immediate treatment and should be given antibiotics within 24 hours of the first symptoms to prevent death. Other treatments include oxygen, intravenous fluids, and respiratory support. People who have had contact with anyone infected by pneumonic plague are given prophylactic antibiotics. Use of the broad-based antibiotic streptomycin has proven to be dramatically successful against the bubonic plague when given within 12 hours of infection. Epidemiology Globally, between 2010 and 2015, there were 3,248 documented cases, which resulted in 584 deaths. The countries with the greatest number of cases are the Democratic Republic of the Congo, Madagascar, and Peru. For over a decade since 2001, Zambia, India, Malawi, Algeria, China, Peru, and the Democratic Republic of the Congo had the most plague cases, with over 1,100 cases in the Democratic Republic of the Congo alone. From 1,000 to 2,000 cases are conservatively reported per year to the WHO. From 2012 to 2017, reflecting political unrest and poor hygienic conditions, Madagascar began to host regular epidemics. Between 1900 and 2015, the United States had 1,036 human plague cases, with an average of 9 cases per year. In 2015, 16 people in the western United States developed plague, including 2 cases in Yosemite National Park. These US cases usually occur in rural northern New Mexico, northern Arizona, southern Colorado, California, southern Oregon, and far western Nevada. In November 2017, the Madagascar Ministry of Health reported an outbreak to the WHO (World Health Organization) with more cases and deaths than any recent outbreak in the country. Unusually, most of the cases were pneumonic rather than bubonic. In June 2018, a child was confirmed to be the first person in Idaho to be infected by bubonic plague in nearly 30 years. A couple died in May 2019, in Mongolia, while hunting marmots. Another two people, in the province of Inner Mongolia, China, were treated in November 2019 for the disease. In July 2020, in Bayannur, Inner Mongolia of China, a human case of bubonic plague was reported. Officials responded by activating a city-wide plague-prevention system for the remainder of the year. Also in July 2020, in Mongolia, a teenager died from bubonic plague after consuming infected marmot meat.
History Yersinia pestis has been discovered in archaeological finds from the Late Bronze Age (~3800 BP). The bacterium has been identified by ancient DNA in human teeth from Asia and Europe dating from 2,800 to 5,000 years ago. Some authors have suggested that the plague was responsible for the Neolithic decline. First pandemic The first recorded epidemic affected the Sasanian Empire and their arch-rivals, the Eastern Roman Empire (Byzantine Empire), and was named the Plague of Justinian (541–549 AD) after Emperor Justinian I, who was infected but survived through extensive treatment. The pandemic resulted in the deaths of an estimated 25 million (6th century outbreak) to 50 million people (two centuries of recurrence). The historian Procopius wrote, in Volume II of History of the Wars, of his personal encounter with the plague and the effect it had on the rising empire. In the spring of 542, the plague arrived in Constantinople, working its way from port city to port city and spreading around the Mediterranean Sea, later migrating inland eastward into Asia Minor and west into Greece and Italy. The Plague of Justinian is said to have been "completed" in the middle of the 8th century. Because the infectious disease spread inland through the transfer of merchandise, driven by Justinian's efforts to acquire the luxurious goods of the time and to export supplies, his capital became the leading exporter of the bubonic plague. Procopius, in his work Secret History, declared that Justinian was a demon of an emperor who either created the plague himself or was being punished for his sinfulness. Second pandemic Medieval society's increasing population was brought to a deadly halt when, in the Late Middle Ages, Europe experienced the deadliest disease outbreak in history. They called it the Great Dying or the Great Pestilence; it was later named the Black Death. Lasting in potency for roughly six years, 1346–1352, the Black Death claimed one-third of the European human population, with mortality rates as high as 70%–80%. Some historians believe that society subsequently became more violent, as the mass mortality rate cheapened life and thus increased warfare, crime, popular revolt, waves of flagellants, and persecution. The Black Death originated in Central Asia and spread from Italy and then throughout other European countries. The Arab historians Ibn al-Wardi and al-Maqrizi believed the Black Death originated in Mongolia. Chinese records also show a huge outbreak in Mongolia in the early 1330s. In 2022, researchers presented evidence that the plague originated near Lake Issyk-Kul in Kyrgyzstan. The Mongols had cut the trade route (the Silk Road) between China and Europe, which halted the spread of the Black Death from eastern Russia to Western Europe. The European epidemic may have begun with the siege of Caffa, an attack that the Mongols launched on the Italian merchants' last trading station in the region, Caffa, in the Crimea. In late 1346, plague broke out among the besiegers and from them penetrated the town. The Mongol forces catapulted plague-infested corpses into Caffa as a form of attack, one of the first known instances of biological warfare. When spring arrived, the Italian merchants fled on their ships, unknowingly carrying the Black Death. Carried by the fleas on rats, the plague initially spread to humans near the Black Sea and then outwards to the rest of Europe as a result of people fleeing from one area to another. Rats migrated with humans, traveling among grain bags, clothing, ships, wagons, and grain husks.
Continued research indicates that black rats, which primarily transmitted the disease, prefer grain as a primary meal. Because of this, the major bulk grain fleets that transported major cities' food shipments from Africa and Alexandria to heavily populated areas, and were then unloaded by hand, played a role in increasing the transmission effectiveness of the plague. Third pandemic The plague resurfaced for a third time in the mid-19th century; this is also known as "the modern pandemic". Like the two previous outbreaks, this one also originated in Eastern Asia, most likely in Yunnan, a province of China, where there are several natural plague foci. The initial outbreaks occurred in the second half of the 18th century. The disease remained localized in Southwest China for several years before spreading. In the city of Canton, beginning in January 1894, the disease had killed 80,000 people by June. Daily water traffic with the nearby city of Hong Kong rapidly spread the plague there, killing over 2,400 within two months during the 1894 Hong Kong plague. The third pandemic spread the disease to port cities throughout the world in the second half of the 19th century and the early 20th century via shipping routes. The plague infected people in Chinatown in San Francisco from 1900 to 1904, and in the nearby locales of Oakland and the East Bay again from 1907 to 1909. During the former outbreak, in 1902, authorities made permanent the Chinese Exclusion Act, a law originally signed into existence by President Chester A. Arthur in 1882. The Act was supposed to last for 10 years, but was renewed in 1892 with the Geary Act, then followed by the 1902 decision. The last major outbreak in the United States occurred in Los Angeles in 1924, though the disease is still present in wild rodents and can be passed to humans who come in contact with them. According to the World Health Organization, the pandemic was considered active until 1959, when worldwide casualties dropped to 200 per year. In 1994, a plague outbreak in five Indian states caused an estimated 700 infections (including 52 deaths) and triggered a large migration of Indians within India as they tried to avoid the disease. It was during the 1894 Hong Kong plague outbreak that Alexandre Yersin isolated the bacterium responsible (Yersinia pestis), a few days after Japanese bacteriologist Kitasato Shibasaburō had isolated it. However, the latter's description was imprecise and also expressed doubts about its relation to the disease, and thus the bacterium is today named only after Yersin. Society and culture The scale of death and social upheaval associated with plague outbreaks has made the topic prominent in many historical and fictional accounts since the disease was first recognized. The Black Death in particular is described and referenced in numerous contemporary sources, some of which, including works by Chaucer, Boccaccio, and Petrarch, are considered part of the Western canon. The Decameron, by Boccaccio, is notable for its use of a frame story involving individuals who have fled Florence for a secluded villa to escape the Black Death. First-person, sometimes sensationalized or fictionalized, accounts of living through plague years have also been popular across centuries and cultures. For example, Samuel Pepys's diary makes several references to his first-hand experiences of the Great Plague of London in 1665–66.
Later works, such as Albert Camus's novel The Plague or Ingmar Bergman's film The Seventh Seal, have used bubonic plague in settings, such as quarantined cities in either medieval or modern times, as a backdrop to explore themes including the breakdown of society, institutions, and individuals during the plague; the cultural and psychological existential confrontation with mortality; and the plague as an allegory raising contemporary moral or spiritual questions. Biological warfare Some of the earliest instances of biological warfare were said to have been products of the plague, as armies of the 14th century were recorded catapulting diseased corpses over the walls of towns and villages to spread the pestilence. This was done by Jani Beg when he attacked the city of Kaffa in 1343. Later, plague was used during the Second Sino-Japanese War as a bacteriological weapon by the Imperial Japanese Army. These weapons were provided by Shirō Ishii's units and used in experiments on humans before being used in the field. For example, in 1940, the Imperial Japanese Army Air Service bombed Ningbo with fleas carrying the bubonic plague. During the Khabarovsk War Crime Trials, the accused, such as Major General Kiyoshi Kawashima, testified that, in 1941, 40 members of Unit 731 air-dropped plague-contaminated fleas on Changde. These operations caused epidemic plague outbreaks. Continued research Substantial research has been done regarding the origin of the plague and how it traveled through the continent. Mitochondrial DNA of modern rats in Western Europe indicated that these rats came from two different areas, one being Africa and the other unclear. The research regarding this pandemic has greatly increased with technology. Through archaeo-molecular investigation, researchers have discovered the DNA of the plague bacillus in the dental cores of those who fell ill with the plague. Analysis of the teeth of the deceased allows researchers to further understand both the demographics and mortuary patterns of the disease. For example, in 2013 in England, archeologists uncovered a burial mound to reveal 17 bodies, mainly children, who had died of the bubonic plague. They analyzed these burial remains using radiocarbon dating to determine they were from the 1530s, and dental core analysis revealed the presence of Yersinia pestis. Other evidence for rats that is currently still being researched consists of gnaw marks on bones, predator pellets, and rat remains that were preserved in situ. This research allows individuals to trace early rat remains to track the path traveled and, in turn, connect the impact of the bubonic plague to specific breeds of rats. Burial sites, known as plague pits, offer archaeologists an opportunity to study the remains of people who died from the plague. Another research study indicates that these separate pandemics were all interconnected. A current computer model indicates that the disease did not go away in between these pandemics; rather, it lurked within the rat population for years without causing human epidemics.
Biology and health sciences
Infectious disease
null
16398712
https://en.wikipedia.org/wiki/Sedimentary%20structures
Sedimentary structures
Sedimentary structures include all kinds of features in sediments and sedimentary rocks, formed at the time of deposition. Sediments and sedimentary rocks are characterized by bedding, which occurs when layers of sediment with different particle sizes are deposited on top of each other. These beds range from millimeters to centimeters thick, and some reach meters or multiple meters in thickness. Sedimentary structures such as cross-bedding, graded bedding, and ripple marks are utilized in stratigraphic studies to indicate the original position of strata in geologically complex terrains and to understand the depositional environment of the sediment. Flow structures There are two kinds of flow structures: bidirectional (multiple directions, back-and-forth) and unidirectional. The structures produced by single-direction (typically fluvial) flow at varying speeds are called bedforms. In the lower flow regime, the natural progression is from a flat bed, to some sediment movement (saltation etc.), to ripples, to slightly larger dunes. Dunes have a vortex on the lee side of the dune. As the upper flow regime forms, the dunes become flattened out and then produce antidunes. At still higher velocity, the antidunes are flattened and most sedimentation stops, as erosion takes over as the dominant process. Bedforms vs. flow Typical unidirectional bedforms represent a specific flow velocity, assuming typical sediments (sands and silts) and water depths, so a chart of bedform against flow velocity can be used for interpreting depositional environments (a short illustrative Froude-number calculation is sketched at the end of this article). Ripple marks Ripple marks usually form in conditions with flowing water, in the lower part of the Lower Flow Regime. There are two types of ripple marks: Symmetrical ripple marks Often found on beaches, they are created by a two-way current, for example the waves on a beach (swash and backwash). This creates ripple marks with pointed crests and rounded troughs, which are not inclined more toward one direction than the other. Three common sedimentary structures that are created by these processes are herringbone cross-stratification, flaser bedding, and interference ripples. Asymmetrical ripple marks These are created by a one-way current, for example in a river, or by the wind in a desert. This creates ripple marks with still pointed crests and rounded troughs, but which are inclined more strongly in the direction of the current. For this reason, they can be used as palaeocurrent indicators. Antidunes Antidunes are the sediment bedforms created by fast, shallow flows of water with a Froude number greater than 1. Antidunes form beneath standing waves of water that periodically steepen, migrate, and then break upstream. The antidune bedform is characterized by shallow foresets that dip upstream at an angle of about ten degrees and can be up to five meters in length. They can be identified by their low-angle foresets. For the most part, antidune bedforms are destroyed during decreased flow, and therefore cross bedding formed by antidunes will not be preserved. Biological structures A number of biologically created sedimentary structures exist, called trace fossils. Examples include burrows and various expressions of bioturbation. Ichnofacies are groups of trace fossils that together help give information on the depositional environment. In general, as deeper (into the sediment) burrows become more common, the shallower the water. As (intricate) surface traces become more common, the water becomes deeper.
Microbes may also interact with sediment to form microbially induced sedimentary structures. Soft sediment deformation structures Soft-sediment deformation structures, or SSD, are a consequence of the loading of wet sediment as burial continues after deposition. The heavier sediment "squeezes" the water out of the underlying sediment due to its own weight. There are three common variants of SSD: load structures or load casts (also a type of sole marking) are blobs that form when a denser, wet sediment slumps down on and into a less dense sediment below. pseudonodules or ball-and-pillow structures are pinched-off load structures; these may also be formed by earthquake energy and referred to as seismites. flame structures are "fingers" of mud that protrude into overlying sediments. clastic dikes are seams of sedimentary material that cut across sedimentary strata. Bedding plane structures Bedding plane structures are commonly used as paleocurrent indicators. They are formed when sediment has been deposited and then reworked and reshaped. They include: Sole markings form when an object gouges the surface of a sedimentary layer; this groove is later preserved as a cast when filled in by the layer above. They include: Flute casts are scours dug into soft, fine sediment which typically get filled by an overlying bed. Measuring the long axis of the flute cast gives the direction of flow, with the scoop-shaped end pointing in the upcurrent direction and the tapered end pointing downcurrent (paleoflow direction). The convexity of the flute cast also points stratigraphically down. Tool marks are a type of sole marking formed by grooves left in a bed by objects dragged along by a current. The average direction of these can be assumed to be the axis of flow direction. Mudcracks form when mud is dewatered, shrinks, and leaves a crack. This indicates that the mud was saturated with water and then exposed to air. Mudcracks curl upwards, so they can be used as geopetal structures. Syneresis cracks form in a similar way, with the exception that they are never exposed to air, instead being caused by changes in the salinity of the surrounding water. Raindrop impressions form on exposed sediment by raindrop impacts. Parting lineations are subtly aligned minerals that form in the lower part of the Upper Flow Regime within plane beds. Bomb sag or bedding-plane sag is downwards deformation of tuff beds or other deposits where a volcanic bomb or volcanic block has fallen. Within bedding structures These structures are within sedimentary bedding and can help with the interpretation of depositional environment and paleocurrent directions. They are formed when the sediment is deposited. Cross-bedding Cross-bedding is the layering of beds deposited by wind or water inclined at an angle as much as 35° from the horizontal. Cross-beds form when sediment particles are deposited on the steeper slopes of sand dunes on land or of sandbars in rivers and on the seafloor. Cross-bedding in wind-deposited dunes can be complex as a result of fast-changing wind directions. Hummocky cross-stratification This stratification is made up of undulating sets of cross-laminae that are concave-up (swales) and convex-up (hummocks). These cross-beds gently cut into each other with curved erosional surfaces. They form in shallow-water, storm-dominated environments. Strong storm-wave action erodes the seabed into low hummocks and swales that lack a specific orientation.
Imbrication This structure is formed by the stacking of larger clasts in the direction of flow. Normal graded bedding This structure occurs when current velocity changes and grains are progressively dropped out of the current. The most common place to find this is in a turbidite deposit. This can also be inverted, called reversed graded bedding, and is common in debris flows. Bioturbation In many sedimentary rocks, the bedding is broken by cylindrical tubes a few centimeters in diameter that extend vertically through multiple beds. These sedimentary structures are remnants of burrows and tunnels excavated by marine organisms that live on the ocean floor. These organisms churn and burrow through mud and sand, a process called bioturbation. They ingest the sediment, digest the organic matter, and leave behind the remnants, which fill the burrow. Tidal bundle Variation in bedding thickness in a tidal environment caused by the alternation of spring and neap tides. Secondary sedimentary structures Secondary sedimentary structures form after primary deposition occurs or, in some cases, during the diagenesis of a sedimentary rock. Common secondary structures include any form of bioturbation, soft-sediment deformation, teepee structures, root-traces, and soil mottling. Liesegang rings, cone-in-cone structures, raindrop impressions, and vegetation-induced sedimentary structures would also be considered secondary structures. Secondary structures include fluid escape structures, formed when fluids escape from a sedimentary bed after deposition. Examples of fluid escape structures include dish structures, pillar structures, and vertical sheet structures.
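As a rough illustration of the flow-regime threshold mentioned under Antidunes above, the sketch below computes the Froude number Fr = v / sqrt(g·d) for a couple of assumed velocity and depth values (the example numbers are invented for demonstration and are not from the source) and reports whether the flow falls in the lower (subcritical, Fr < 1) or upper (supercritical, Fr > 1) flow regime.

```python
import math

G = 9.81  # gravitational acceleration, m/s^2


def froude_number(velocity_m_s: float, depth_m: float) -> float:
    """Froude number for open-channel flow: Fr = v / sqrt(g * d)."""
    return velocity_m_s / math.sqrt(G * depth_m)


def flow_regime(fr: float) -> str:
    # Fr > 1 corresponds to the upper flow regime, where antidunes can form
    # beneath standing waves; Fr < 1 corresponds to the lower flow regime
    # (plane bed, ripples, dunes).
    return "upper flow regime (antidunes possible)" if fr > 1 else "lower flow regime (ripples/dunes)"


# Assumed example values for demonstration only.
for v, d in [(0.5, 1.0), (2.5, 0.3)]:
    fr = froude_number(v, d)
    print(f"v = {v} m/s, depth = {d} m -> Fr = {fr:.2f}: {flow_regime(fr)}")
```

Consistent with the description above, the supercritical case requires fast, shallow flow: for a given velocity, decreasing the water depth raises the Froude number.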
Physical sciences
Sedimentology
Earth science
16399074
https://en.wikipedia.org/wiki/Virgocentric%20flow
Virgocentric flow
The Virgocentric flow (VCF) is the preferred movement of Local Group galaxies towards the Virgo Cluster caused by its overwhelming gravity, which separates bound objects from the Hubble flow of cosmic expansion. The VCF can also be described with respect to the Virgo Supercluster, whose center is considered synonymous with the Virgo Cluster but is more tedious to ascertain due to the supercluster's much larger volume. The excess velocity of Local Group galaxies towards, and with respect to, the Virgo Cluster is 100 to 400 km/s. This excess velocity is referred to as each galaxy's peculiar velocity.
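As a rough numerical illustration of separating such an excess (peculiar) velocity from the Hubble flow, the sketch below subtracts the recession velocity expected from uniform expansion from an observed velocity; the Hubble constant, distance, and observed velocity used here are assumed round values for demonstration only, not figures from the source.

```python
# Illustrative peculiar-velocity calculation; all numbers are assumed for demonstration.
H0 = 70.0                   # Hubble constant, km/s per Mpc (assumed round value)
distance_mpc = 16.5         # assumed distance to the Virgo Cluster, Mpc
observed_velocity = 1050.0  # assumed observed recession velocity, km/s

hubble_velocity = H0 * distance_mpc                      # velocity expected from expansion alone
peculiar_velocity = observed_velocity - hubble_velocity  # line-of-sight excess velocity

print(f"Expected Hubble-flow velocity: {hubble_velocity:.0f} km/s")
print(f"Line-of-sight peculiar velocity: {peculiar_velocity:.0f} km/s")
# A negative value means the observed recession is slower than pure expansion,
# i.e. there is net motion toward the cluster (infall) along the line of sight.
```

On these assumed numbers, the roughly 100 km/s departure from pure expansion is of the same order as the 100 to 400 km/s excess velocities quoted above.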
Physical sciences
Notable galaxies
Astronomy
2275696
https://en.wikipedia.org/wiki/Saiga%20antelope
Saiga antelope
The saiga antelope (Saiga tatarica), or saiga, is a species of antelope which during antiquity inhabited a vast area of the Eurasian steppe, spanning the foothills of the Carpathian Mountains in the northwest and the Caucasus in the southwest into Mongolia in the northeast and Dzungaria in the southeast. During the Pleistocene, it ranged across the mammoth steppe from the British Isles to Beringia. Today, the dominant subspecies (S. t. tatarica) only occurs in Kalmykia and Astrakhan Oblast of Russia and in the Ural, Ustyurt and Betpak-Dala regions of Kazakhstan. A portion of the Ustyurt population migrates south to Uzbekistan and occasionally to Turkmenistan in winter. It is regionally extinct in Romania, Ukraine, Moldova, China and southwestern Mongolia. The Mongolian subspecies (S. t. mongolica) occurs only in western Mongolia. Taxonomy and phylogeny The scientific name Capra tatarica was coined by Carl Linnaeus in 1766 in the 12th edition of Systema Naturae. It was reclassified as Saiga tatarica and is the sole living member of the genus Saiga. Two subspecies are recognised: S. t. tatarica (Linnaeus, 1766): also known as the Russian saiga, it is only to be found today in central Asia. S. t. mongolica Bannikov, 1946: also known as the Mongolian saiga, it is sometimes treated as an independent species, or as a subspecies of the Pleistocene Saiga borealis; it is confined to Mongolia. In 1945, American paleontologist George Gaylord Simpson classified both the saiga and the Tibetan antelope in the tribe Saigini under the same subfamily, Caprinae. Subsequent authors were not certain about the relationship between the two, until phylogenetic studies in the 1990s revealed that though morphologically similar, the Tibetan antelope is closer to the Caprinae while the saiga is closer to the Antilopinae. In a revision of the phylogeny of the tribe Antilopini on the basis of nuclear and mitochondrial data in 2013, Eva Verena Bärmann (of the University of Cambridge) and colleagues showed that the saiga is sister to the clade formed by the springbok (Antidorcas marsupialis) and the gerenuk (Litocranius walleri). The study noted that the saiga and the springbok could be considerably different from the rest of the antilopines; a 2007 phylogenetic study suggested that the two form a clade sister to the gerenuk. Evolution Fossils of saiga, concentrated mainly in central and northern Eurasia, date to as early as the late Pleistocene (nearly 0.1 Mya). Several species of extinct Saiga from the Pleistocene of Eurasia and Alaska have been named, including S. borealis, S. prisca, S. binagadensis and S. ricei, although more recent studies suggest that these prehistoric representatives were merely geographical variants of the extant species, which was formerly much more widespread. Fossils excavated from the Buran Kaya III site (Crimea) date back to the transition from the Pleistocene to the Holocene. The morphology of the saiga does not seem to have changed significantly since prehistoric times. Before the Holocene, the saiga ranged across the mammoth steppe from as far west as modern-day England and France to as far east as northern Siberia, Alaska, and probably Canada. The antelope gradually entered the Urals, though it did not colonise southern Europe. A 2010 study revealed that a steep decline has occurred in the genetic variability of the saiga since the late Pleistocene–Holocene, probably due to a population bottleneck. Characteristics The saiga stands at the shoulder, and weighs .
The head-and-body length is typically between . A prominent feature of the saiga is the pair of closely spaced, bloated nostrils directed downward. Other facial features include the dark markings on the cheeks and the nose, and the long ears. The coat shows seasonal changes. In summer, the coat appears yellow to red, fading toward the flanks. The Mongolian saiga can develop a sandy colour. The coat develops a pale, grayish-brown colour in winter, with a hint of brown on the belly and the neck. The ventral parts are generally white. The hairs, which measure long in summer, can grow as long as in winter. This forms a long mane on the neck. Two distinct moults can be observed in a year, one in spring from April to May and another in autumn from late September or early October to late November or early December. The tail measures . Only males possess horns. These horns, thick and slightly translucent, are wax-coloured and show 12 to 20 pronounced rings. With a base diameter of , the horns of the Russian saiga measure in length; the horns of the Mongolian saiga, however, reach a maximum length of . Ecology and behaviour Saigas form very large herds that graze in semideserts, steppes, grasslands, and possibly open woodlands, eating several species of plants, including some that are poisonous to other animals. They can cover long distances and swim across rivers, but they avoid steep or rugged areas. The mating season starts in November, when stags fight for the acceptance of females. The winner leads a herd of five to ten females (occasionally up to 50). In springtime, mothers come together en masse to give birth. Two-thirds of births are twins; the remaining third of births are single calves. Saigas, like the Mongolian gazelles, are known for their extensive migrations across the steppes that allow them to escape natural calamities. Saigas are highly vulnerable to wolves. Juveniles are targeted by foxes, steppe eagles, golden eagles, and ravens. Distribution and habitat In the mid-2010s, the populations declined enormously – as much as 95% in 15 years. This led the saiga to be classified as critically endangered on the IUCN Red List. In more recent years, the saiga has experienced a substantial recovery. As of 2022, an estimated 1.38 million saiga were surviving in Kazakhstan, per an April aerial count. As of December 2023, the global saiga antelope population is estimated to number 922,600–988,500 mature individuals. In May 2010, an estimated 12,000 of the 26,000 saiga population in the Ural region of Kazakhstan were found dead. Although the deaths are currently being ascribed to pasteurellosis, an infectious disease that strikes the lungs and intestines, the underlying trigger remains to be identified. In May 2015, what may be the same disease broke out in three northern regions of the country. As of 28 May 2015, more than 120,000 saigas had been confirmed dead in the Betpak-Dala population in central Kazakhstan, representing more than a third of the global population. By April 2016, the saigas appeared to be making a comeback, with the population increasing from 31,000 to 36,000 in the Betpak-Dala area. In April 2021, a survey in Kazakhstan found that the saiga population had risen from an estimated 334,000 to 842,000. The population increase was partially attributed to the government crackdown on poaching and the establishment of conservation areas.
UK charity RSPB reported in 2022 that, partly due to their conservation efforts, as well as the designation of the Bokey Orda-Ashiozek protected area by the Kazakhstan government, the population had now risen to a peak of 1.32 million. Former range The saiga was not present in Europe during the Eemian. During the last glacial period, it ranged from the British Isles through Central Asia and the Bering Strait into Alaska and Canada's Yukon and Northwest Territories. By the classical age, they were apparently considered a characteristic animal of Scythia, judging from the historian Strabo's description of an animal called the kolos that was "between the deer and ram in size" and was wrongly believed to drink through its nose. Considerable evidence shows the importance of the antelope to Andronovo culture settlements. Illustrations of saiga antelopes can be found among the cave paintings that were dated back to seventh to fifth century BC. Moreover, saiga bones were found among the remains of other wild animals near the human settlements. The fragmented information shows an abundance of saigas on the territory of modern Kazakhstan in the 14th-16th centuries. The migratory routes ranged throughout the country's area, especially the region between the Volga and Ural Rivers was heavily populated. The population's size remained high until the second half of the 19th century, when excessive horn export began. The high price and demand for horns drove radical hunting. The number of animals decreased in all regions and the migratory routes shifted southward. Populations in Ukraine were driven to extirpation in the 18th century. After a rapid decline, they were nearly completely exterminated in the 1920s, but they were able to recover. By 1950, two million of them were found in the steppes of the USSR. Their population fell drastically following the collapse of the USSR due to uncontrolled hunting and demand for horns in Chinese medicine. At one point, some conservation groups, such as the World Wildlife Fund, encouraged the hunting of this species, as its horn was presented as an alternative to that of a rhinoceros. Mongolian saiga The Mongolian saiga (S. t. mongolica) is found in a small area in western Mongolia around the Sharga and Mankhan Nature Reserves. Threats The horn of the saiga antelope is used in traditional Chinese medicine and can sell for as much as US$150. Demand for the horns drives poaching and smuggling, which has wiped out the population in China, where the saiga antelope is a class I protected species. In June 2014, Chinese customs at the Kazakh border uncovered 66 cases containing 2,351 saiga antelope horns, estimated to be worth over Y70.5 million (US$11 million). In June 2015, E. J. Milner-Gulland (chair of Saiga Conservation Alliance) said: "Antipoaching needs to be a top priority for the Russian and Kazakh governments." Hunting Saigas have been a target of hunting since prehistoric ages, when hunting was an essential means to acquire food. Saigas' horns, meat, and skin have commercial value and are exported from Kazakhstan. Saiga horn, known as , is one of the main ingredients in traditional Chinese medicine that is used as an extract or powder additive to the elixirs, ointments, and drinks. Saiga horn's value is equal to rhinoceros horn, whose trade was banned in 1993. is thought to be a cheaper substitute of rare rhino horn in most TCM recipes. 
In the period from 1955 to 1989, over 87 thousand tonnes of meat were collected in Kazakhstan by killing more than five million saiga. In 2011, Kazakhstan reaffirmed a ban on hunting saiga and extended this ban until 2021. Saiga meat is compared to lamb, considered to be nutritious and delicious. Numerous recipes for cooking the antelope's meat can be found. Both meat and byproducts are sold in the country and outside of it. About 45–80 dm2 of skin can be harvested from one individual depending on its age and sex. Physical barriers Agricultural advancement and human settlements have been shrinking habitat areas of the saigas since the 20th century. Occupants limited saiga's passage to water resources and the winter and summer habitats. The ever-changing face of steppe requires saigas to search for new routes to their habitual lands. Currently, saiga populations' migratory routes pass five countries and different human-made constructions, such as railways, trenches, mining sites, and pipelines. These physical barriers limit movement of the antelopes. Cases of saiga herds being trapped within fenced areas and starving to death have been reported. Climatic variability Saigas are dependent on weather and affected by climate fluctuations to a great extent due to their migratory nature. Harsh winters with strong winds or high snow coverage prevent them from feeding on the underlying grass. Population size usually dramatically decreases after severe cold months. Recent trends in climate change have increased the aridity of the steppe region, leading an estimated 14% or more of available pastureland to be considered degraded and useless. Concurrently, small steppe rivers dry faster, limiting water resources to large lakes and rivers, which are usually populated by human settlements; high temperatures in the steppe region lead to springtime floods, in which saiga calves can drown. Mass epizootic mortality 1980 to 2015 events For ungulates, mass mortalities are not uncommon. In the 1980s, several saiga die-offs occurred, and between 2010 and 2014, one occurred every year. The deaths could be linked to calving aggregation, which is when they are most vulnerable. More recent research involving a mass die-off in 2015 indicates warmer weather and attendant humidity led bacteria common in saiga antelopes to move into the bloodstream and cause hemorrhagic septicemia. 2015–2016 epizootic In May 2015, uncommonly large numbers of saigas began to die from a mysterious epizootic illness suspected to be pasteurellosis. Herd fatality is 100% once infected, with an estimated 40% of the species' total population already dead. More than 120,000 carcasses had been found by late May 2015, while the estimated total population was only 250,000. Biologist Murat Nurushev suggested that the cause might be acute ruminal tympany, whose symptoms (bloating, mouth foaming, and diarrhea) had been observed in dead saiga antelopes. According to Nurushev, this disease occurred as a result of foraging on a large amount of easily fermenting plants (alfalfa, clover, sainfoins, and mixed wet, green grass). In May 2015, the United Nations agency which is involved in saiga conservation efforts issued a statement that the mass die-off had ended. By June 2015, no definitive cause for the epizootic had been found. At a scientific meeting in November 2015 in Tashkent, Uzbekistan, Dr. Richard A. Kock (of the Royal Veterinary College in London) reported that his colleagues and he had narrowed down the possible culprits. 
Climate change and stormy spring weather, they said, may have transformed harmless bacteria, carried by the saigas, into lethal pathogens. Pasteurella multocida, a bacterium, was determined to be the cause of death. The bacterium occurs in the antelopes and is normally harmless; the reason for the change in its behavior is unknown. Scientists and researchers now believe that the unusually warm and humid conditions caused the bacterium to enter the bloodstream and become septic. Hemorrhagic septicemia is the likely cause of the most recent deaths. The change in the bacteria may be attributed to "the response of opportunistic microbes to changing environmental conditions". The Betpak-Dala saiga population in central Kazakhstan, which saw the most deaths, increased from 31,000 after the epidemic to 36,000 by April 2016. In late 2016, a large die-off occurred in Mongolia; the etiology was confirmed to be goat plague in early 2017. Conservation Under the auspices of the Convention on the Conservation of Migratory Species of Wild Animals, the Saiga Antelope Memorandum of Understanding was concluded and came into effect on 24 September 2006. In captivity Currently, only the Almaty Zoo and Askania-Nova keep saigas.
Biology and health sciences
Bovidae
Animals
357757
https://en.wikipedia.org/wiki/Urticaceae
Urticaceae
The Urticaceae are a family of flowering plants known as the nettle family. The family name comes from the genus Urtica. The Urticaceae include a number of well-known and useful plants, including nettles in the genus Urtica, ramie (Boehmeria nivea), māmaki (Pipturus albidus), and ajlai (Debregeasia saeneb). The family includes about 2,625 species, grouped into 53 genera according to the database of the Royal Botanic Gardens, Kew, and to Christenhusz and Byng (2016). The largest genera are Pilea (500 to 715 species), Elatostema (300 species), Urtica (80 species), and Cecropia (75 species). Cecropia contains many myrmecophytes. Urticaceae species can be found worldwide, apart from the polar regions. Description Urticaceae species can be shrubs (e.g. Pilea), lianas, herbs (e.g. Urtica, Parietaria), or, rarely, trees (Dendrocnide, Cecropia). Their leaves are usually entire and bear stipules. Urticating (stinging) hairs are often present. The flowers are usually unisexual, and species can be either monoecious or dioecious. They are wind-pollinated. Most disperse their pollen when the stamens are mature and their filaments straighten explosively, a peculiar and conspicuously specialised mechanism. While the stings delivered by Urticaceae species are often unpleasant, they seldom pose any direct threat to health, and deaths directly attributed to stinging are exceedingly rare; species known to cause human fatalities include Dendrocnide cordata and Urtica ferox. Taxonomy The APG II system puts the Urticaceae in the order Rosales, while older systems consider them part of the Urticales, along with Ulmaceae, Moraceae, and Cannabaceae. APG still considers "old" Urticales a monophyletic group, but does not recognise it as an order on its own. Fossil record The fossil record of Urticaceae is scattered and mostly based on dispersed fruits. Twelve species based on fossil achenes are known from the Late Cretaceous of Central Europe. Most were assigned to the extant genera Boehmeria (three species), Debregeasia (one species) and Pouzolzia (three species), while three species were assigned to the extinct genus Urticoidea. A Colombian fossil flora of the Maastrichtian stage has yielded leaves that resemble leaves of the tribe Cecropieae. From the Cenozoic, fossil leaves from the Ypresian Allenby Formation preserve distinct trichomes and have been attributed to the tribe Urticeae. The leaves had originally been identified as Rubus by earlier workers on the Eocene Okanagan Highlands, but Devore et al. (2020) interpreted the preserved hairs along the stem and major veins as stinging trichomes, rather than simple hairs or thorns. Phylogeny Modern molecular phylogenetic studies suggest the following relationships: Tribes and genera Boehmerieae Gaudich. 1830 Archiboehmeria C.J. Chen 1980 (1 sp.) Astrothalamus C.B. Rob. 1911 (1 sp.) Boehmeria Jacq. 1760 (80 spp.) Chamabainia Wight 1853 (1–2 spp.) Cypholophus Wedd. 1854 (15 spp.) Debregeasia Gaudich. 1844 (4 spp.) Gibbsia Rendle 1917 (2 spp.) Gonostegia Turcz. 1846 (5 spp.) Hemistylus Benth. 1843 (4 spp.) Neodistemon Babu & A. N. Henry 1970 (1 sp.) Neraudia Gaudich. 1830 (5 spp.) Nothocnide Blume 1856 (4 spp.) Oreocnide Miq. 1851 (15 spp.) Phenax Wedd. 1854 (12 spp.) Pipturus Wedd. 1854 (30 spp.) Pouzolzia Gaudich. 1826 [1830] (70 spp.) Rousselia Gaudich. 1826 [1830] (3 spp.) Sarcochlamys Gaudich. 1844 (1 sp.) Cecropieae Gaudich. 1830 Cecropia Loefl. 1758 (70–80 spp.) Coussapoa Aubl. 1775 (>50 spp.) Leucosyke Zoll. & Moritzi 1845 (35 spp.) Maoutia Wedd.
1854 (15 spp.) Musanga R. Br. in Tuckey 1818 (2 spp.) Myrianthus P. Beauv. 1804 [1805] (7 spp.) Pourouma Aubl. 1775 (>50 spp.) Elatostemateae Gaudich. 1830 Aboriella Bennet (1 sp.) (synonym of Achudemia Achudemia Blume 1856 Elatostema J.R. Forst. & G. Forst. 1775 (300 spp.) Gyrotaenia Griseb. 1861 (4 spp.) Lecanthus Wedd. 1854 (4 sp.) (syn. Meniscogyne Gagnep. 1928) Myriocarpa Benth. 1844 [1846] (18 spp.) Pellionia Gaudich. 1826 (60 spp.) Petelotiella Gagnep. in Lecomte 1929 (1 spp.) Pilea Lindl. 1821 (606 spp.) (syn. Sarcopilea Urb. 1912) Procris Comm. ex Juss. 1789 (24 spp.) Forsskaoleeae Gaudich. 1830 Australina Gaudich. 1830 (2 spp.) Didymodoxa E. Mey. ex Wedd. 1857 (2 spp.) Droguetia Gaudich. 1830 (7 spp.) Forsskaolea L. 1764 (6 spp.) Parietarieae Gaudich. 1830 Gesnouinia Gaudich. 1830 (2 spp.) Parietaria L. 1753 (20 spp.) Soleirolia Gaudich. 1830 (1 sp.) Urticeae Lamarck & DC. 1806 Dendrocnide Miq. 1851 (27 spp.) Discocnide Chew 1965 (1 sp.) Girardinia Gaudich. 1830 (2 spp.) Hesperocnide Torr. 1857 (2 spp.) Laportea Gaudich. 1826 [1830] (21 spp.) Nanocnide Blume 1856 (2 spp.) Obetia Gaudich. 1844 (7 spp.) Poikilospermum Zipp. ex Miq. 1864 (20 spp.) Touchardia Gaudich. 1847 (1–2 spp.) Urera Gaudich. 1826 [1830] (35 spp.) Urtica L. 1753—nettle (80 spp.) Zhengyia T.Deng, D.G.Zhang & H.Sun 2013 (1 sp.) Incertae sedis Capsulea (1 sp.) Elatostematoides (25 sp.) Metapilea (1 sp.) Metatrophis F.Br. 1935 (1 sp.) Parsana Parsa & Maleki 1952 (1 sp.) Scepocarpus (14 sp.) Diseases The Urticaceae are subject to many bacterial, viral, fungal, and nematode parasitic diseases. Among them are: Bacterial leaf spot, caused by Xanthomonas campestris which affects Pellionia, Pilea, and other genera Anthracnose, a fungal disease caused by Colletotrichum capsici which affects Pilea Myrothecium leaf spot, a fungal disease caused by Myrothecium roridum which affects plants throughout the Urticaceae, as well as other angiosperms Phytophthora blight, a water mold disease caused by Phytophthora nicotianae which affects Pilea Southern blight, a fungal disease caused by Athelia rolfsii which affects both Pellionia and Pilea Image gallery
Biology and health sciences
Rosales
Plants
357978
https://en.wikipedia.org/wiki/Harpy%20eagle
Harpy eagle
The harpy eagle (Harpia harpyja) is a large neotropical species of eagle. It is also called the American harpy eagle to distinguish it from the Papuan eagle, which is sometimes known as the New Guinea harpy eagle or Papuan harpy eagle. It is the largest bird of prey throughout its range, and among the largest extant species of eagles in the world. It usually inhabits tropical lowland rainforests in the upper (emergent) canopy layer. Destruction of its natural habitat has caused it to vanish from many parts of its former range, and it is nearly extirpated from much of Central America. The genus Harpia, together with Harpyopsis, Macheiramphus and Morphnus, form the subfamily Harpiinae. Taxonomy The harpy eagle was first described by Carl Linnaeus in his landmark 1758 10th edition of Systema Naturae as Vultur harpyja, after the mythological beast harpy. It is now the only species placed in the genus Harpia that was introduced in 1816 by the French ornithologist Louis Pierre Vieillot. The harpy eagle is most closely related to the crested eagle (Morphnus guianensis), the Papuan eagle (Harpyopsis novaeguineae) and the bat hawk (Macheiramphus alcinus), the four composing the subfamily Harpiinae within the large family Accipitridae. Previously thought to be closely related, the Philippine eagle has been shown by DNA analysis to belong elsewhere in the raptor family, as it is related to the Circaetinae. The specific name harpyja and the word "harpy" in the common name both come from Ancient Greek harpyia (). They refer to the harpies of Ancient Greek mythology. These were wind spirits who flew the dead to Hades or Tartarus, purported to have the lower body and talons of a raptor and the head of a woman, standing anywhere from the height of a tall child to as high as a grown man; some depictions have the creatures possessing an eagle-like body with the exposed breasts of an elderly female human, a giant wingspan and the head of a grotesque, sharp-toothed, mutant eagle—something more akin to a goblin with wings. Description The upperside of the harpy eagle is covered with slate-black feathers, and the underside is mostly white, except for the feathered tarsi, which are striped black. A broad black band across the upper breast separates the gray head from the white belly. The head is pale grey, and is crowned with a double crest. The upperside of the tail is black with three gray bands, while the underside of it is black with three white bands. The irises are gray or brown or red, the cere and bill are black or blackish and the tarsi and toes are yellow. The plumage of males and females is identical. The tarsus is up to long. Female harpy eagles typically weigh . One source states that adult females can weigh up to . An exceptionally large captive female, "Jezebel", weighed . Being captive, however, this large female may not be representative of the weight possible in wild harpy eagles due to differences in the food availability. The male, in comparison, is much smaller and may range in weight from . The average weight of adult males has been reported as against an average of for adult females, a 35% or higher difference in mean body mass. Harpy eagles may measure from in total length and have a wingspan of . Among the standard measurements, the wing chord measures , the tail measures , the tarsus is long, and the exposed culmen from the cere (the beak) is . Mean talon size is in males, and in females. 
It is sometimes cited as the largest eagle alongside the Philippine eagle, which is somewhat longer on average (between sexes averaging ) but weighs slightly less, and the Steller's sea eagle, which is perhaps slightly heavier on average (mean of three unsexed birds was ). The harpy eagle may be the largest bird species to reside in Central America, though large water birds such as American white pelicans (Pelecanus erythrorhynchos) and jabirus (Jabiru mycteria) have scarcely lower mean body masses. The wingspan of the harpy eagle is relatively small, though the wings are quite broad, an adaptation that increases maneuverability in forested habitats and is shared by other raptors in similar habitats. The wingspan of the harpy eagle is surpassed by several large eagles that live in more open habitats, such as those in the Haliaeetus and Aquila genera. The extinct Haast's eagle was significantly larger than all extant eagles, including the harpy. This species is largely silent away from the nest. There, the adults give a penetrating, weak, melancholy scream, with the incubating males' call described as "whispy screaming or wailing". The females' calls while incubating are similar, but are lower-pitched. While approaching the nest with food, the male calls out "rapid chirps, goose-like calls, and occasional sharp screams". Vocalization in both parents decreases as the nestlings age, while the nestlings become more vocal. The nestlings call chi-chi-chi...chi-chi-chi-chi, seemingly in alarm in response to rain or direct sunlight. When humans approach the nest, the nestlings have been described as uttering croaks, quacks, and whistles. Distribution and habitat Relatively rare and elusive throughout its range, the harpy eagle is found from southern Mexico (incl. Chiapas, Oaxaca and the Yucatán states) and south through Central America, into South America to as far south as Argentina. They can still be seen by tourists and locals in Costa Rica and Panama. As their preferred habitat is rainforest, they nest and hunt predominantly in the emergent layer. The eagle is most common in Brazil, where it is found across the entire country. With the exception of some areas of the aforementioned Panama and Costa Rica, the species is nearly extinct in Central America, likely due to the logging industry’s decimation of much of the Meso-American rainforests. Their habitat is expected to decline further due to climate change. The harpy eagle prefers tropical, lowland rainforests and may also choose to nest within such areas from the canopy to the emergent vegetation. They typically occur below an elevation of , but have been recorded at elevations up to . Within the forests, they hunt in the canopy or, rarely, on the ground, and perch on emergent trees to scout for prey. They do not generally occur in disturbed areas, avoiding humans whenever possible, but regularly visit semi-open forest and pasture mosaic, in hunting forays. Harpies, however, can be found flying over forest borders in a variety of habitats, such as cerrados, caatingas, buriti palm stands, cultivated fields, and cities. They have recently been found in areas where high-grade forestry is practiced. Behavior Feeding Full grown harpy eagles are at the top of a food chain. They possess the largest talons of any living eagle and have been recorded as carrying prey weighing up to roughly half of their own body weight. This allows them to snatch from tree branches a live sloth and other large prey items. 
Most commonly, harpy eagles use perch hunting, in which they scan for prey activity while briefly perched between short flights from tree to tree. Upon spotting prey, the eagle quickly dives and grabs it. Sometimes, harpy eagles are "sit-and-wait" predators (common in forest-dwelling raptors), perching for long periods on a high point near an opening, a river, or a salt lick, where many mammals go to attain nutrients. On occasion, they may also hunt by flying within or above the canopy. They have also been observed tail-chasing: pursuing another bird in flight, rapidly dodging among trees and branches, a predation style common to hawks (genus Accipiter) that hunt birds. A recent literature review and research using camera traps list a total of 116 prey species. Its main prey are tree-dwelling mammals, and a majority of the diet has been shown to focus on sloths. Research conducted by Aguiar-Silva between 2003 and 2005 in a nesting site in Parintins, Amazonas, Brazil, collected remains from prey offered to the nestling by its parents. The researchers found that 79% of the harpy's prey was accounted for by sloths from two species: 39% brown-throated sloth (Bradypus variegatus), and 40% Linnaeus's two-toed sloth (Choloepus didactylus). Similar research in Panama, where two captive-bred subadults were released, found that 52% of the male's captures and 54% of the female's were of two sloth species (brown-throated sloth and Hoffmann's two-toed sloth (Choloepus hoffmanni). Harpy eagles are capable of hunting all size of sloths, including full-grown adult two-toed sloths weighing up to . Another major prey of harpy eagles is monkeys. At several nests in Guyana, monkeys made up about 37% of the prey remains found at the nests. Similarly, cebid monkeys made up 35% of the remains found at 10 nests in Amazonian Ecuador. Monkeys regularly taken include capuchin monkeys, saki monkeys, howler monkeys, titi monkeys, squirrel monkeys, and spider monkeys. Smaller monkeys, such as tamarins and marmosets, are, however, seemingly ignored as prey by this species. Small monkeys typically weighing between , such as Wedge-capped capuchin (Cebus olivaceus), tufted capuchin (Sapajus apella), and white-faced saki (Pithecia pithecia) are the most frequently taken. Larger howler monkeys are also taken, mainly Colombian red howler (Alouatta seniculus), but also Guyanan red howler (Alouatta macconnelli) and mantled howler (Alouatta palliata). These monkeys typically weigh between and female harpy eagles can prey on all ages and sexes, while male harpy eagles tend to focus on juveniles. In one study, breeding harpy eagles hunted Yucatán black howler (Alouatta pigra), the largest howler monkey which can weigh between , although the ages of the monkeys taken by these eagles are unknown. Nevertheless, adults of other large monkeys can be taken by female harpy eagles, including woolly monkey (Lagothrix cana) and Peruvian spider monkey (Ateles chamek), and red-faced spider monkey (Ateles paniscus) which can weigh around and possibly exceeding in large males. Other partially arboreal and even land mammals are also preyed on given the opportunity. In the Pantanal, a pair of nesting eagles preyed largely on the porcupine (Coendou prehensilis) and the agouti (Dasyprocta azarae). Both species of tamanduas (Tamandua mexicana & T. tetradactyla) are taken and armadillos, especially nine-banded armadillo (Dasypus novemcinctus) are also taken, as well as carnivores such as kinkajous (Potos flavus), coatis (Nasua nasua & N. 
narica), tayras (Eira barbara), and occasionally margays (Leopardus wiedii) and crab-eating foxes (Cerdocyon thous). In one instance, an adult greater grison (Galictis vittata) was killed and partly consumed by a subadult female harpy eagle. Those carnivoran prey species usually weigh around , but harpy eagles have reportedly preyed on larger carnivores such as the ocelot (Leopardus pardalis) and the adult crab-eating raccoon. Other mammals, such as young peccaries, deer fawns, squirrels and opossums, are additionally taken. The eagle may also attack bird species such as macaws: at the Parintins research site, the red-and-green macaw (Ara chloropterus) made up 0.4% of the prey base, with other birds amounting to 4.6%. Other parrots have also been preyed on, as well as cracids such as curassows and other birds like seriemas. On one occasion, a dependent juvenile male eagle quickly learned how to hunt black vultures (Coragyps atratus) and accounted for nine of ten recorded instances of harpy predation on vultures. Additional prey items reported include reptiles such as iguanas, tegus, snakes, and amphisbaenids. In Suriname, green iguanas (Iguana iguana) can be an important prey source, and predation on the yellow-footed tortoise (Chelonoidis denticulata) has been recorded twice. The eagle has been recorded as taking domestic livestock, including chickens, lambs, goats, and young pigs, but this is extremely rare under normal circumstances. Harpy eagles control the population of mesopredators such as capuchin monkeys, which prey extensively on birds' eggs and which (if not naturally controlled) may cause local extinctions of sensitive species. Males usually take relatively smaller prey, with a typical range of or about half their own weight. The larger females take larger prey, with a minimum recorded prey weight of around . Adult female harpies regularly grab large male howler or spider monkeys or mature sloths weighing in flight and fly off without landing, an enormous feat of strength. Prey items taken to the nest by the parents are normally medium-sized, having been recorded from . The prey brought to the nest by males averaged , while the prey brought to the nest by females averaged . In another study, floaters (i.e. birds not engaging in breeding at that time) were found to take larger prey, averaging , than those that were nesting, for which prey averaged , with prey species estimated to weigh a mean of (for the common opossum) to (for the adult crab-eating raccoon). Overall, harpy eagle prey weigh between , with the mean prey size equalling . Breeding In ideal habitats, nests would be fairly close together. In some parts of Panama and Guyana, active nests were located away from one another, while they are within of each other in Venezuela. In Peru, the average distance between nests was and the average area occupied by each breeding pair was estimated at . In less ideal areas with fragmented forest, breeding territories were estimated at . The female harpy eagle lays two white eggs in a large stick nest, which commonly measures deep and across and may be used over several years. Nests are located high up in a tree, usually in the main fork, at , depending on the stature of the local trees. The harpy often builds its nest in the crown of the kapok tree, one of the tallest trees in South America. In many South American cultures, cutting down the kapok tree is considered bad luck, which may help safeguard the habitat of this stately eagle.
The bird also uses other huge trees on which to build its nest, such as the Brazil nut tree. A nesting site found in the Brazilian Pantanal was built on a cambará tree (Vochysia divergens). No display is known between pairs of eagles, and they are believed to mate for life. A pair of harpy eagles usually only raises one chick every 2–3 years. After the first chick hatches, the second egg is ignored and normally fails to hatch unless the first egg perishes. The egg is incubated around 56 days. When the chick is 36 days old, it can stand and walk awkwardly. The chick fledges at the age of 6 months, but the parents continue to feed it for another 6 to 10 months. The male captures much of the food for the incubating female and later the eaglet, but also takes an incubating shift while the female forages and also brings prey back to the nest. Breeding maturity is not reached until birds are 4 to 6 years of age. Adults can be aggressive toward humans who disturb the nesting site or appear to be a threat to their young. Status and conservation Although the harpy eagle still occurs over a considerable range, its distribution and populations have dwindled considerably. It is threatened primarily by habitat loss due to the expansion of logging, cattle ranching, agriculture, and prospecting. Secondarily, it is threatened by being hunted as an actual threat to livestock and/or a supposed one to human life, due to its great size. Although not actually known to prey on humans and only rarely on domestic stock, the species' large size and nearly fearless behaviour around humans reportedly make it an "irresistible target" for hunters. Such threats apply throughout its range, in large parts of which the bird has become a transient sight only; in Brazil, it was all but wiped out from the Atlantic rainforest and is only found in appreciable numbers in the most remote parts of the Amazon basin; a Brazilian journalistic account of the mid-1990s already complained that at the time it was only found in significant numbers in Brazilian territory on the northern side of the Equator. Scientific 1990s records, however, suggest that the harpy Atlantic Forest population may be migratory. Subsequent research in Brazil has established that, as of 2009, the harpy eagle, outside the Brazilian Amazon, is critically endangered in Espírito Santo, São Paulo and Paraná, endangered in Rio de Janeiro, and probably extirpated in Rio Grande do Sul (where a recent (March 2015) record was set for the Parque Estadual do Turvo) and Minas Gerais – the actual size of their total population in Brazil is unknown. Globally, the harpy eagle is considered vulnerable by IUCN and threatened with extinction by CITES (appendix I). The Peregrine Fund until recently considered it a "conservation-dependent species", meaning it depends on a dedicated effort for captive breeding and release to the wild, as well as habitat protection, to prevent it from reaching endangered status, but now has accepted the near threatened status. The harpy eagle is considered critically endangered in Mexico and Central America, where it has been extirpated in most of its former range; in Mexico, it used to be found as far north as Veracruz, but today probably occurs only in Chiapas in the Selva Zoque. It is considered as near threatened or vulnerable in most of the South American portion of its range; at the southern extreme of its range, in Argentina, it is found only in the Parana Valley forests at the province of Misiones. 
It has disappeared from El Salvador, and almost so from Costa Rica. National initiatives Various initiatives for restoration of the species are in place in various countries. Since 2002, the Peregrine Fund initiated a conservation and research program for the harpy eagle in the Darién Province. A similar—and grander, given the dimensions of the countries involved—research project is occurring in Brazil, at the National Institute of Amazonian Research, through which 45 known nesting locations (updated to 62, only three outside the Amazonian basin and all three inactive) are being monitored by researchers and volunteers from local communities. A harpy eagle chick has been fitted with a radio transmitter that allows it to be tracked for more than three years via a satellite signal sent to the Brazilian National Institute for Space Research. Also, a photographic recording of a nest site in the Carajás National Forest was made for the Brazilian edition of National Geographic Magazine. In Panama, the Peregrine Fund carried out a captive-breeding and release project that released a total of 49 birds in Panama and Belize. The Peregrine Fund has also carried out a research and conservation project on this species since the year 2000, making it the longest-running study on harpy eagles. In Belize, the Belize Harpy Eagle Restoration Project began in 2003 with the collaboration of Sharon Matola, founder and director of the Belize Zoo and the Peregrine Fund. The goal of this project was the re-establishment of the harpy eagle within Belize. The population of the eagle declined as a result of forest fragmentation, shooting, and nest destruction, resulting in near extirpation of the species. Captive-bred harpy eagles were released in the Rio Bravo Conservation and Management Area in Belize, chosen for its quality forest habitat and linkages with Guatemala and Mexico. Habitat linkage with Guatemala and Mexico were important for conservation of quality habitat and the harpy eagle on a regional level. As of November 2009, 14 harpy eagles have been released and are monitored by the Peregrine Fund, through satellite telemetry. In January 2009, a chick from the all-but-extirpated population in the Brazilian state of Paraná was hatched in captivity at the preserve kept in the vicinity of the Itaipu Dam by the Brazilian/Paraguayan state-owned company Itaipu Binacional. In September 2009, an adult female, after being kept captive for 12 years in a private reservation, was fitted with a radio transmitter before being restored to the wild in the vicinity of the Pau Brasil National Park (formerly Monte Pascoal NP), in the state of Bahia. In December 2009, a 15th harpy eagle was released into the Rio Bravo Conservation and Management Area in Belize. The release was set to tie in with the United Nations Climate Change Conference 2009, in Copenhagen. The 15th eagle, nicknamed "Hope" by the Peregrine officials in Panama, was the "poster child" for forest conservation in Belize, a developing country, and the importance of these activities in relation to climate change. The event received coverage from Belize's major media entities, and was supported and attended by the U.S. Ambassador to Belize, Vinai Thummalapally, and British High Commissioner to Belize, Pat Ashworth. 
In Colombia, as of 2007, an adult male and a subadult female confiscated from wildlife trafficking were restored to the wild and monitored in Paramillo National Park in Córdoba, and another couple was being kept in captivity at a research center for breeding and eventual release. A monitoring effort with the help of volunteers from local Native American communities is also being made in Ecuador, including the joint sponsorship of various Spanish universities—this effort being similar to another one going on since 1996 in Peru, centred around a native community in the Tambopata Province, Madre de Dios Region. Another monitoring project, begun in 1992, was operating as of 2005 in the state of Bolívar, Venezuela. In human culture The harpy eagle is the national bird of Panama and is depicted on the coat of arms of Panama. The 15th harpy eagle released in Belize, named "Hope", was dubbed "Ambassador for Climate Change", in light of the United Nations Climate Change Conference 2009. The bird appeared on the reverse side of the Venezuelan Bs.F 2,000 note. The harpy eagle was the inspiration behind the design of Fawkes the Phoenix in the Harry Potter film series. A live harpy eagle was used to portray the now-extinct Haast's eagle in BBC's Monsters We Met. Indigenous cultures In Aztec religion the harpy eagle was sacred to Quetzalcoatl.
Biology and health sciences
Accipitrimorphae
Animals
358277
https://en.wikipedia.org/wiki/Cayley%20graph
Cayley graph
In mathematics, a Cayley graph, also known as a Cayley color graph, Cayley diagram, group diagram, or color group, is a graph that encodes the abstract structure of a group. Its definition is suggested by Cayley's theorem (named after Arthur Cayley), and it uses a specified set of generators for the group. It is a central tool in combinatorial and geometric group theory. The structure and symmetry of Cayley graphs make them particularly good candidates for constructing expander graphs. Definition Let G be a group and S be a generating set of G. The Cayley graph Γ = Γ(G, S) is an edge-colored directed graph constructed as follows: Each element g of G is assigned a vertex: the vertex set of Γ is identified with G. Each element s of S is assigned a color cₛ. For every g ∈ G and s ∈ S, there is a directed edge of color cₛ from the vertex corresponding to g to the one corresponding to gs. Not every convention requires that S generate the group. If S is not a generating set for G, then Γ is disconnected and each connected component represents a coset of the subgroup generated by S. If an element s of S is its own inverse, then it is typically represented by an undirected edge. The set S is often assumed to be finite, especially in geometric group theory, which corresponds to Γ being locally finite and G being finitely generated. The set S is sometimes assumed to be symmetric (S = S⁻¹) and not containing the group identity element. In this case, the uncolored Cayley graph can be represented as a simple undirected graph. Examples Suppose that G = Z is the infinite cyclic group and the set S consists of the standard generator 1 and its inverse (−1 in the additive notation); then the Cayley graph is an infinite path. Similarly, if G = Zₙ is the finite cyclic group of order n and the set S consists of two elements, the standard generator of G and its inverse, then the Cayley graph is the cycle Cₙ. More generally, the Cayley graphs of finite cyclic groups are exactly the circulant graphs. The Cayley graph of the direct product of groups (with the Cartesian product of generating sets as a generating set) is the Cartesian product of the corresponding Cayley graphs. Thus the Cayley graph of the abelian group Z² with the set of generators consisting of the four elements (±1, 0) and (0, ±1) is the infinite grid on the plane, while for the direct product Zₙ × Zₘ with similar generators the Cayley graph is the finite n × m grid on a torus. A Cayley graph of the dihedral group D₄ on two generators a and b is depicted to the left. Red arrows represent composition with a. Since b is self-inverse, the blue lines, which represent composition with b, are undirected. Therefore the graph is mixed: it has eight vertices, eight arrows, and four edges. The Cayley table of the group can be derived from the group presentation ⟨a, b | a⁴ = b² = e, ab = ba⁻¹⟩. A different Cayley graph of D₄ is shown on the right. Here b is still the horizontal reflection and is represented by blue lines, while c is a diagonal reflection, represented by pink lines. As both reflections are self-inverse, the Cayley graph on the right is completely undirected. This graph corresponds to the presentation ⟨b, c | b² = c² = e, (bc)⁴ = e⟩. The Cayley graph of the free group on two generators a and b corresponding to the set S = {a, b, a⁻¹, b⁻¹} is depicted at the top of the article, with e being the identity. Travelling along an edge to the right represents right multiplication by a, while travelling along an edge upward corresponds to multiplication by b. Since the free group has no relations, its Cayley graph has no cycles: it is the 4-regular infinite tree. It is a key ingredient in the proof of the Banach–Tarski paradox.
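The definition above can be made concrete directly. The following is a minimal sketch in plain Python (the function name cayley_edges and the choice of the cyclic group Z₆ are illustrative, not taken from the article): it lists one directed edge of "color" s from g to gs for every group element g and generator s, reproducing the cycle C₆ from the examples.

def cayley_edges(elements, op, generators):
    # One directed edge of color s from g to op(g, s), for every g and s.
    return {s: [(g, op(g, s)) for g in elements] for s in generators}

# Example: the cyclic group Z_6 with the symmetric generating set {+1, -1} (mod 6).
n = 6
edges = cayley_edges(range(n), lambda g, s: (g + s) % n, generators=(1, n - 1))
for s, colored_edges in edges.items():
    print("color", s, ":", colored_edges)
# Ignoring direction and color, the result is the 6-cycle C_6, as described above.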
More generally, the Bethe lattice or Cayley tree is the Cayley graph of the free group on generators. A presentation of a group by generators corresponds to a surjective homomorphism from the free group on generators to the group defining a map from the Cayley tree to the Cayley graph of . Interpreting graphs topologically as one-dimensional simplicial complexes, the simply connected infinite tree is the universal cover of the Cayley graph; and the kernel of the mapping is the fundamental group of the Cayley graph. A Cayley graph of the discrete Heisenberg group is depicted to the right. The generators used in the picture are the three matrices given by the three permutations of 1, 0, 0 for the entries . They satisfy the relations , which can also be understood from the picture. This is a non-commutative infinite group, and despite being embedded in a three-dimensional space, the Cayley graph has four-dimensional volume growth. Characterization The group acts on itself by left multiplication (see Cayley's theorem). This may be viewed as the action of on its Cayley graph. Explicitly, an element maps a vertex to the vertex The set of edges of the Cayley graph and their color is preserved by this action: the edge is mapped to the edge , both having color . In fact, all automorphisms of the colored directed graph are of this form, so that is isomorphic to the symmetry group of . The left multiplication action of a group on itself is simply transitive, in particular, Cayley graphs are vertex-transitive. The following is a kind of converse to this: To recover the group and the generating set from the unlabeled directed graph , select a vertex and label it by the identity element of the group. Then label each vertex of by the unique element of that maps to The set of generators of that yields as the Cayley graph is the set of labels of out-neighbors of . Since is uncolored, it might have more directed graph automorphisms than the left multiplication maps, for example group automorphisms of which permute . Elementary properties The Cayley graph depends in an essential way on the choice of the set of generators. For example, if the generating set has elements then each vertex of the Cayley graph has incoming and outgoing directed edges. In the case of a symmetric generating set with elements, the Cayley graph is a regular directed graph of degree Cycles (or closed walks) in the Cayley graph indicate relations among the elements of In the more elaborate construction of the Cayley complex of a group, closed paths corresponding to relations are "filled in" by polygons. This means that the problem of constructing the Cayley graph of a given presentation is equivalent to solving the Word Problem for . If is a surjective group homomorphism and the images of the elements of the generating set for are distinct, then it induces a covering of graphs where In particular, if a group has generators, all of order different from 2, and the set consists of these generators together with their inverses, then the Cayley graph is covered by the infinite regular tree of degree corresponding to the free group on the same set of generators. For any finite Cayley graph, considered as undirected, the vertex connectivity is at least equal to 2/3 of the degree of the graph. If the generating set is minimal (removal of any element and, if present, its inverse from the generating set leaves a set which is not generating), the vertex connectivity is equal to the degree. 
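The characterization above, that left multiplication acts as a color-preserving symmetry, can be checked mechanically on the small cyclic example built earlier. A sketch under the same assumptions (plain Python, illustrative names), verifying that every group element induces a color-preserving automorphism and hence that the Cayley graph is vertex-transitive:

def acts_as_colored_automorphism(h, op, edges):
    # Left multiplication by h sends the edge (g, g*s) to (h*g, h*g*s); verify
    # that, color by color, this lands back inside the same colored edge set.
    for colored_edges in edges.values():
        edge_set = set(colored_edges)
        if any((op(h, g), op(h, t)) not in edge_set for g, t in colored_edges):
            return False
    return True

n = 6
op = lambda g, s: (g + s) % n
edges = {s: [(g, op(g, s)) for g in range(n)] for s in (1, n - 1)}
assert all(acts_as_colored_automorphism(h, op, edges) for h in range(n))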
The edge connectivity is in all cases equal to the degree. If is the left-regular representation with matrix form denoted , the adjacency matrix of is . Every group character of the group induces an eigenvector of the adjacency matrix of . The associated eigenvalue is which, when is Abelian, takes the form for integers In particular, the associated eigenvalue of the trivial character (the one sending every element to 1) is the degree of , that is, the order of . If is an Abelian group, there are exactly characters, determining all eigenvalues. The corresponding orthonormal basis of eigenvectors is given by It is interesting to note that this eigenbasis is independent of the generating set . More generally for symmetric generating sets, take a complete set of irreducible representations of and let with eigenvalue set . Then the set of eigenvalues of is exactly where eigenvalue appears with multiplicity for each occurrence of as an eigenvalue of Schreier coset graph If one instead takes the vertices to be right cosets of a fixed subgroup one obtains a related construction, the Schreier coset graph, which is at the basis of coset enumeration or the Todd–Coxeter process. Connection to group theory Knowledge about the structure of the group can be obtained by studying the adjacency matrix of the graph and in particular applying the theorems of spectral graph theory. Conversely, for symmetric generating sets, the spectral and representation theory of are directly tied together: take a complete set of irreducible representations of and let with eigenvalues . Then the set of eigenvalues of is exactly where eigenvalue appears with multiplicity for each occurrence of as an eigenvalue of The genus of a group is the minimum genus for any Cayley graph of that group. Geometric group theory For infinite groups, the coarse geometry of the Cayley graph is fundamental to geometric group theory. For a finitely generated group, this is independent of choice of finite set of generators, hence an intrinsic property of the group. This is only interesting for infinite groups: every finite group is coarsely equivalent to a point (or the trivial group), since one can choose as finite set of generators the entire group. Formally, for a given choice of generators, one has the word metric (the natural distance on the Cayley graph), which determines a metric space. The coarse equivalence class of this space is an invariant of the group. Expansion properties When , the Cayley graph is -regular, so spectral techniques may be used to analyze the expansion properties of the graph. In particular for abelian groups, the eigenvalues of the Cayley graph are more easily computable and given by with top eigenvalue equal to , so we may use Cheeger's inequality to bound the edge expansion ratio using the spectral gap. Representation theory can be used to construct such expanding Cayley graphs, in the form of Kazhdan property (T). The following statement holds: For example the group has property (T) and is generated by elementary matrices and this gives relatively explicit examples of expander graphs. Integral classification An integral graph is one whose eigenvalues are all integers. While the complete classification of integral graphs remains an open problem, the Cayley graphs of certain groups are always integral. Using previous characterizations of the spectrum of Cayley graphs, note that is integral iff the eigenvalues of are integral for every representation of . 
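For abelian groups the character description of the spectrum is easy to test numerically. The sketch below assumes NumPy and uses the specific group Z₈ with an illustrative generating set (neither is from the article): it builds the adjacency matrix of a circulant Cayley graph, compares its eigenvalues with the character sums over the generating set, and then applies the same spectrum computation to the integrality question raised above.

import numpy as np

def cayley_adjacency_Zn(n, S):
    A = np.zeros((n, n))
    for g in range(n):
        for s in S:
            A[g, (g + s) % n] = 1          # edge g -> g + s (mod n)
    return A

n, S = 8, (1, 7, 2, 6)                      # Z_8 with the symmetric set {±1, ±2}
A = cayley_adjacency_Zn(n, S)
spectral = np.sort(np.linalg.eigvalsh(A))
# Character chi_j(g) = exp(2*pi*i*j*g/n); the eigenvalue for chi_j is its sum over S.
character = np.sort([sum(np.exp(2j * np.pi * j * s / n) for s in S).real for j in range(n)])
assert np.allclose(spectral, character)

# Integrality check: Z_4 with S = {1, 2, 3} (complement of the trivial subgroup)
# gives the complete graph K_4, whose spectrum {3, -1, -1, -1} is integral,
# whereas the 5-cycle (Z_5 with S = {1, 4}) has irrational eigenvalues.
def is_integral(A, tol=1e-8):
    eig = np.linalg.eigvalsh(A)
    return bool(np.all(np.abs(eig - np.round(eig)) < tol))

print(is_integral(cayley_adjacency_Zn(4, (1, 2, 3))))   # True
print(is_integral(cayley_adjacency_Zn(5, (1, 4))))      # False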
Cayley integral simple group A group is Cayley integral simple (CIS) if the connected Cayley graph is integral exactly when the symmetric generating set is the complement of a subgroup of . A result of Ahmady, Bell, and Mohar shows that all CIS groups are isomorphic to , or for primes . It is important that actually generates the entire group in order for the Cayley graph to be connected. (If does not generate , the Cayley graph may still be integral, but the complement of is not necessarily a subgroup.) In the example of , the symmetric generating sets (up to graph isomorphism) are : is a -cycle with eigenvalues : is with eigenvalues The only subgroups of are the whole group and the trivial group, and the only symmetric generating set that produces an integral graph is the complement of the trivial group. Therefore must be a CIS group. The proof of the complete CIS classification uses the fact that every subgroup and homomorphic image of a CIS group is also a CIS group. Cayley integral group A slightly different notion is that of a Cayley integral group , in which every symmetric subset produces an integral graph . Note that no longer has to generate the entire group. The complete list of Cayley integral groups is given by , and the dicyclic group of order , where and is the quaternion group. The proof relies on two important properties of Cayley integral groups: Subgroups and homomorphic images of Cayley integral groups are also Cayley integral groups. A group is Cayley integral iff every connected Cayley graph of the group is also integral. Normal and Eulerian generating sets Given a general group , a subset is normal if is closed under conjugation by elements of (generalizing the notion of a normal subgroup), and is Eulerian if for every , the set of elements generating the cyclic group is also contained in . A 2019 result by Guo, Lytkina, Mazurov, and Revin proves that the Cayley graph is integral for any Eulerian normal subset , using purely representation theoretic techniques. The proof of this result is relatively short: given an Eulerian normal subset, select pairwise nonconjugate so that is the union of the conjugacy classes . Then using the characterization of the spectrum of a Cayley graph, one can show the eigenvalues of are given by taken over irreducible characters of . Each eigenvalue in this set must be an element of for a primitive root of unity (where must be divisible by the orders of each ). Because the eigenvalues are algebraic integers, to show they are integral it suffices to show that they are rational, and it suffices to show is fixed under any automorphism of . There must be some relatively prime to such that for all , and because is both Eulerian and normal, for some . Sending bijects conjugacy classes, so and have the same size and merely permutes terms in the sum for . Therefore is fixed for all automorphisms of , so is rational and thus integral. Consequently, if is the alternating group and is a set of permutations given by , then the Cayley graph is integral. (This solved a previously open problem from the Kourovka Notebook.) In addition when is the symmetric group and is either the set of all transpositions or the set of transpositions involving a particular element, the Cayley graph is also integral. History Cayley graphs were first considered for finite groups by Arthur Cayley in 1878. 
Max Dehn in his unpublished lectures on group theory from 1909–10 reintroduced Cayley graphs under the name Gruppenbild (group diagram), which led to the geometric group theory of today. His most important application was the solution of the word problem for the fundamental group of surfaces with genus ≥ 2, which is equivalent to the topological problem of deciding which closed curves on the surface contract to a point.
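The word metric underlying the coarse-geometry and word-problem discussions above is simply graph distance in the Cayley graph, so it can be computed by breadth-first search. A short illustrative sketch in plain Python (the encoding of the dihedral group of order 8 as pairs (rotation, flip) is an assumption of this example, not something from the article):

from collections import deque

def word_lengths(identity, generators, op):
    # Breadth-first search from the identity: distance in the Cayley graph
    # equals the length of a shortest word in the generators.
    dist = {identity: 0}
    queue = deque([identity])
    while queue:
        g = queue.popleft()
        for s in generators:
            h = op(g, s)
            if h not in dist:
                dist[h] = dist[g] + 1
                queue.append(h)
    return dist

# Dihedral group of order 8, elements encoded as (rotation mod 4, flip in {0, 1})
# with (r1, f1)*(r2, f2) = ((r1 + r2) mod 4 if f1 == 0 else (r1 - r2) mod 4, f1 xor f2).
def op(a, b):
    r1, f1 = a
    r2, f2 = b
    return ((r1 + r2) % 4 if f1 == 0 else (r1 - r2) % 4, f1 ^ f2)

gens = [(1, 0), (3, 0), (0, 1)]            # a rotation, its inverse, and one reflection
print(word_lengths((0, 0), gens, op))      # each of the 8 elements is within distance 3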
Mathematics
Graph theory
null
358490
https://en.wikipedia.org/wiki/Avoidant%20personality%20disorder
Avoidant personality disorder
Avoidant personality disorder (AvPD), or anxious personality disorder, is a cluster C personality disorder characterized by excessive social anxiety and inhibition, fear of intimacy (despite an intense desire for it), severe feelings of inadequacy and inferiority, and an overreliance on avoidance of feared stimuli (e.g., self-imposed social isolation) as a maladaptive coping method. Those affected typically display a pattern of extreme sensitivity to negative evaluation and rejection, a belief that one is socially inept or personally unappealing to others, and avoidance of social interaction despite a strong desire for it. It appears to affect an approximately equal number of men and women. People with AvPD often avoid social interaction for fear of being ridiculed, humiliated, rejected, or disliked. They typically avoid becoming involved with others unless they are certain they will not be rejected, and may also pre-emptively abandon relationships due to fear of a real or imagined risk of being rejected by the other party. Childhood emotional neglect (in particular, the rejection of a child by one or both parents) and peer group rejection are associated with an increased risk for its development; however, it is possible for AvPD to occur without any notable history of abuse or neglect. Signs and symptoms Avoidant individuals are preoccupied with their own shortcomings and form relationships with others only if they believe they will not be rejected. They often view themselves with contempt, while showing a decreased ability to identify traits within themselves that are generally considered as positive within their societies. Loss and social rejection are so painful that these individuals will choose to be alone rather than risk trying to connect with others. Some with this disorder fantasize about idealized, accepting, and affectionate relationships because of their desire to belong. They often feel themselves unworthy of the relationships they desire, and shame themselves from ever attempting to begin them. If they do manage to form relationships, it is also common for them to pre-emptively abandon them out of fear of the relationship failing. Individuals with the disorder tend to describe themselves as uneasy, anxious, lonely, unwanted and isolated from others. They often choose jobs of isolation in which they do not have to interact with others regularly. Avoidant individuals also avoid performing activities in public spaces for fear of embarrassing themselves in front of others. Symptoms include: Extreme shyness or anxiety in social situations Heightened attachment-related anxiety, which may include a fear of abandonment Substance use disorders Comorbidity AvPD is reported to be especially prevalent in people with anxiety disorders, although estimates of comorbidity vary widely due to differences in (among others) diagnostic instruments. Research suggests that approximately 10–50% of people who have panic disorder with agoraphobia have avoidant personality disorder, as well as about 20–40% of people who have social anxiety disorder. In addition to this, AvPD is more prevalent in people who have comorbid social anxiety disorder and generalised anxiety disorder than in those who have only one of the aforementioned conditions. Some studies report prevalence rates of up to 45% among people with generalized anxiety disorder and up to 56% of those with obsessive–compulsive disorder. Post-traumatic stress disorder is also commonly comorbid with avoidant personality disorder. 
Avoidants are prone to self-loathing and, in certain cases, self-harm. Substance use disorders are also common in individuals with AvPD—particularly in regard to alcohol, benzodiazepines, and opioids—and may significantly affect a patient's prognosis. Earlier theorists proposed a personality disorder with a combination of features from borderline personality disorder (BPD) and avoidant personality disorder, called "avoidant-borderline mixed personality" (AvPD/BPD). Causes Causes of AvPD are not clearly defined, but appear to be influenced by a combination of social, genetic and psychological factors. The disorder may be related to temperamental factors that are inherited. Specifically, various anxiety disorders in childhood and adolescence have been associated with a temperament characterized by behavioral inhibition, including features of being shy, fearful and withdrawn in new situations. These inherited characteristics may give an individual a genetic predisposition towards AvPD. Childhood emotional neglect and peer group rejection are both associated with an increased risk for the development of AvPD. Some researchers believe a combination of high-sensory-processing sensitivity coupled with adverse childhood experiences may heighten the risk of an individual developing AvPD. Subtypes Millon's subtypes Psychologist Theodore Millon notes that because most patients present a mixed picture of symptoms, their personality disorder tends to be a blend of a major personality disorder type with one or more secondary personality disorder types. He identified four adult subtypes of avoidant personality disorder. Others In 1993, Lynn E. Alden and Martha J. Capreol proposed two other subtypes of avoidant personality disorder: Diagnosis ICD The World Health Organization's ICD-10 lists avoidant personality disorder as anxious (avoidant) personality disorder (). It is characterized by the presence of at least four of the following: persistent and pervasive feelings of tension and apprehension; belief that one is socially inept, personally unappealing, or inferior to others; excessive preoccupation with being criticized or rejected in social situations; unwillingness to become involved with people unless certain of being liked; restrictions in lifestyle because of need to have physical security; avoidance of social or occupational activities that involve significant interpersonal contact because of fear of criticism, disapproval, or rejection. Associated features may include hypersensitivity to rejection and criticism. It is a requirement of ICD-10 that all personality disorder diagnoses also satisfy a set of general personality disorder criteria. DSM The Diagnostic and Statistical Manual of Mental Disorders (DSM) of the American Psychiatric Association also has an avoidant personality disorder diagnosis (301.82). It refers to a widespread pattern of inhibition around people, feeling inadequate and being very sensitive to negative evaluation. Symptoms begin by early adulthood and occur in a range of situations. 
Four of the following seven specific symptoms should be present: Avoids occupational activities that involve significant interpersonal contact, because of fears of criticism, disapproval, or rejection is unwilling to get involved with people unless certain of being liked shows restraint within intimate relationships because of the fear of being shamed or ridiculed is preoccupied with being criticized or rejected in social situations is inhibited in new interpersonal situations because of feelings of inadequacy views self as socially inept, personally unappealing, or inferior to others is unusually reluctant to take personal risk or to engage in any new activities because they may prove embarrassing Differential diagnosis In contrast to social anxiety disorder, a diagnosis of avoidant personality disorder (AvPD) also requires that the general criteria for a personality disorder be met. According to the DSM-5, avoidant personality disorder must be differentiated from similar personality disorders such as dependent, paranoid, schizoid, and schizotypal. But these can also occur together; this is particularly likely for AvPD and dependent personality disorder. Thus, if criteria for more than one personality disorder are met, all can be diagnosed. There is also an overlap between avoidant and schizoid personality traits and AvPD may have a relationship to the schizophrenia spectrum. Avoidant personality disorder must also be differentiated from autism spectrum disorder. Treatment Treatment of avoidant personality disorder can employ various techniques, such as social skills training, psychotherapy, cognitive therapy, and exposure treatment to gradually increase social contacts, group therapy for practicing social skills, and sometimes drug therapy. A key issue in treatment is gaining and keeping the patient's trust since people with an avoidant personality disorder will often start to avoid treatment sessions if they distrust the therapist or fear rejection. The primary purpose of both individual therapy and social skills group training is for individuals with an avoidant personality disorder to begin challenging their exaggerated negative beliefs about themselves. Significant improvement in the symptoms of personality disorders is possible, with the help of treatment and individual effort. Prognosis Being a personality disorder, which is usually chronic and has long-lasting mental conditions, an avoidant personality disorder may not improve with time without treatment. Given that it is a poorly studied personality disorder and in light of prevalence rates, societal costs, and the current state of research, AvPD qualifies as a neglected disorder. Controversy There is debate as to whether avoidant personality disorder (AvPD) is distinct from social anxiety disorder. Both have similar diagnostic criteria and may share a similar causation, subjective experience, course, treatment and identical underlying personality features, such as shyness. It is contended by some that they are merely different conceptualizations of the same disorder, where avoidant personality disorder may represent the more severe form. In particular, those with AvPD experience not only more severe social phobia symptoms, but are also more depressed and more functionally impaired than patients with generalized social phobia alone. But they show no differences in social skills or performance on an impromptu speech. 
Another difference is that social phobia is the fear of social circumstances whereas AvPD is better described as an aversion to intimacy in relationships. Epidemiology Data from the 2001–02 National Epidemiologic Survey on Alcohol and Related Conditions indicates a prevalence of 2.36% in the American general population. It appears to occur with equal frequency in males and females. In one study, it was seen in 14.7% of psychiatric outpatients. History The avoidant personality has been described in several sources as far back as the early 1900s, although it was not so named for some time. Swiss psychiatrist Eugen Bleuler described patients who exhibited signs of avoidant personality disorder in his 1911 work Dementia Praecox: Or the Group of Schizophrenias. Avoidant and schizoid patterns were frequently confused or referred to synonymously until Kretschmer (1921), in providing the first relatively complete description, developed a distinction.
Biology and health sciences
Mental disorders
Health
358970
https://en.wikipedia.org/wiki/BeiDou
BeiDou
The BeiDou Navigation Satellite System (BDS; ) is a satellite-based radio navigation system owned and operated by the China National Space Administration. It provides geolocation and time information to a BDS receiver anywhere on or near the Earth where there is an unobstructed line of sight to four or more BDS satellites. It does not require the user to transmit any data and operates independently of any telephonic or Internet reception, though these technologies can enhance the usefulness of the BDS positioning information; however, concerns have been raised about embedded malware leaking information in this way. The current service, BeiDou-3 (third-generation BeiDou), provides full global coverage for timing and navigation, along with Russia's GLONASS, the European Galileo, and the US's GPS. It consists of satellites in three different orbits, including 24 satellites in medium-circle orbits (covering the world), 3 satellites in inclined geosynchronous orbits (covering the Asia-Pacific region), and 3 satellites in geostationary orbits (covering China). The BeiDou-3 system was fully operational in July 2020. In 2016, BeiDou-3 reached millimeter-level accuracy with post-processing. Predecessors included BeiDou-1 (first-generation BeiDou), consisting of three satellites in a regional satellite navigation system. Since 2000, the system has mainly provided navigation services within China. In December 2012, as the design life of BeiDou-1 expired, it stopped operating. The BeiDou-2 (second-generation BeiDou) system was also a regional satellite navigation system containing 16 satellites, including 6 geostationary satellites, 6 inclined geosynchronous orbit satellites, and 4 medium earth orbit satellites. In November 2012, BeiDou-2 began to provide users with regional positioning services in the Asia-Pacific region. Within the region, BeiDou is more accurate than GPS. In 2015, fifteen years after the satellite system was launched, it was generating a turnover of $31.5 billion per annum for major companies such as China Aerospace Science and Industry Corporation, AutoNavi, and Norinco. The industry has grown an average of over 20% in value annually to reach $64 billion in 2020. Nomenclature The official English name of the system is BeiDou Navigation Satellite System. It is named after the Big Dipper asterism, which is known in Chinese as (). The name literally means "Northern Dipper", the name given by ancient Chinese astronomers to the seven brightest stars of the Ursa Major constellation. Historically, this set of stars was used in navigation to locate the North Star. As such, the name BeiDou also serves as a metaphor for the purpose of the satellite navigation system. History Conception and initial development The original idea of a Chinese satellite navigation system was conceived by Chen Fangyun and his colleagues in the 1980s. The Gulf War in 1991 showcased how the GPS gave the US complete advantage on the battlefield and how satellite navigation systems can be used to conduct "space warfare". In 1993, China realised the risk of denied access to GPS during the Yinhe incident and including an alleged case in 1996 during the Third Taiwan Strait Crisis, gave impetus to the creation of its own indigenous satellite navigation system which officially began in 1994. 
According to the China National Space Administration, in 2010, the development of the system would be carried out in three steps: 2000–2003: experimental BeiDou navigation system consisting of three satellites By 2012: regional BeiDou navigation system covering China and neighboring regions By 2020: global BeiDou navigation system The first satellite, BeiDou-1A, was launched on 30 October 2000, followed by BeiDou-1B on 20 December 2000. The third satellite, BeiDou-1C (a backup satellite), was put into orbit on 25 May 2003. The successful launch of BeiDou-1C also meant the establishment of the BeiDou-1 navigation system. On 2 November 2006, China announced that from 2008 BeiDou would offer an open service with an accuracy of 10 metres, timing of 0.2 microseconds, and speed of 0.2 metres/second. In February 2007, the fourth and last satellite of the BeiDou-1 system, BeiDou-1D (sometimes called BeiDou-2A, serving as a backup satellite), was launched. It was reported that the satellite had suffered from a control system malfunction but was then fully restored. In April 2007, the first satellite of BeiDou-2, namely Compass-M1 (to validate frequencies for the BeiDou-2 constellation) was successfully put into its working orbit. The second BeiDou-2 constellation satellite Compass-G2 was launched on 15 April 2009. On 15 January 2010, the official website of the BeiDou Navigation Satellite System went online, and the system's third satellite (Compass-G1) was carried into its orbit by a Long March 3C rocket on 17 January 2010. On 2 June 2010, the fourth satellite was launched successfully into orbit. The fifth orbiter was launched into space from Xichang Satellite Launch Center by an LM-3I carrier rocket on 1 August 2010. Three months later, on 1 November 2010, the sixth satellite was sent into orbit by LM-3C. Another satellite, the BeiDou-2/Compass IGSO-5 (fifth inclined geosynchronous orbit) satellite, was launched from the Xichang Satellite Launch Center by a Long March 3A on 1 December 2011 (UTC). Chinese involvement in Galileo system In September 2003, China intended to join the European Galileo positioning system project and was to invest €230 million (US$296 million, £160 million) in Galileo over the next few years. At the time, it was believed that China's "BeiDou" navigation system would then only be used by its armed forces. In October 2004, China officially joined the Galileo project by signing the Agreement on the Cooperation in the Galileo Program between the "Galileo Joint Undertaking" (GJU) and the "National Remote Sensing Centre of China" (NRSCC). Based on the Sino-European Cooperation Agreement on Galileo program, China Galileo Industries (CGI), the prime contractor of China's involvement in Galileo programs, was founded in December 2004. By April 2006, eleven cooperation projects within the Galileo framework had been signed between China and the EU. Phase III In November 2014, BeiDou became part of the World-Wide Radionavigation System (WWRNS) at the 94th meeting of the International Maritime Organization (IMO) Maritime Safety Committee, which approved the "Navigation Safety Circular" of the BeiDou Navigation Satellite System (BDS). At Beijing time 21:52, 30 March 2015, the first new-generation BeiDou Navigation satellite (and the 17th overall) was successfully set to orbit by a Long March 3C rocket. On 20 April 2019, a BeiDou satellite was successfully launched. 
Launch occurred at 22:41 Beijing time, and the Long March 3B delivered the BeiDou navigation payload into an elliptical transfer orbit ranging between 220 kilometres and 35,787 kilometres, with an inclination of 28.5° to the equator, according to U.S. military tracking data. On 23 June 2020, the final BeiDou satellite was successfully launched, the launch of the 55th satellite in the BeiDou family. The third iteration of the BeiDou Navigation Satellite System provides global coverage for timing and navigation, offering an alternative to Russia's GLONASS and the European Galileo positioning system, as well as the US's GPS. Use outside China In 2018, the Pakistan Armed Forces received access to BeiDou for military purposes. In 2019, the Saudi Ministry of Defense signed an agreement for military use of BeiDou. In 2020, Argentina entered into a cooperation agreement with China regarding the use of BeiDou. In 2021, the first China-Africa BeiDou System Cooperation Forum was held in Beijing. In 2022, Vladimir Putin signed an agreement for the interoperability of BeiDou and GLONASS. GPS vs. BeiDou Capabilities The National Space-Based Positioning, Navigation, and Timing (PNT) Advisory Board, which offers independent guidance to the U.S. government on GPS policy, issued a summary report from its 27th meeting held on November 16–17, 2022. During the meeting, it was highlighted that "GPS capabilities are now significantly surpassed by China's BeiDou system." BeiDou-3 The third phase of the BeiDou system (BDS-3) includes three GEO satellites, three IGSO satellites, and twenty-four MEO satellites which introduce new signal frequencies B1C/B1I/B1A (1575.42MHz), B2a/B2b (1191.79MHz), B3I/B3Q/B3A (1268.52MHz), and Bs test frequency (2492.02MHz). Interface control documents on the new open signals were published in 2017–2018. On 23 June 2020, the BDS-3 constellation deployment was fully completed after the last satellite was successfully launched at the Xichang Satellite Launch Center. BDS-3 satellites also include SBAS (B1C, B2a, B1A - GEO sats), Precise Point Positioning (B2b - GEO sats), and search and rescue transponder (6 MEOSAR) capabilities. Characteristics of the "I" signals on E2 and E5B are generally similar to the civilian codes of GPS (L1-CA and L2C), but Compass signals have somewhat greater power. The notation of Compass signals used in this page follows the naming of the frequency bands and agrees with the notation used in the American literature on the subject, but the notation used by the Chinese seems to be different. There has also been an experimental S band broadcast called "Bs" at 2492.028 MHz, following similar experiments on BeiDou-1. Predecessors BeiDou-1 BeiDou-1 was an experimental regional navigation system, which consisted of four satellites (three working satellites and one backup satellite). The satellites themselves were based on the Chinese DFH-3 geostationary communications satellite and had a launch weight of 1,000 kg each. Unlike the American GPS, Russian GLONASS, and European Galileo systems, which use medium Earth orbit satellites, BeiDou-1 used satellites in geostationary orbit. This means that the system does not require a large constellation of satellites, but it also limits the coverage to areas on Earth where the satellites are visible. The area that can be serviced is from longitude 70° E to 140° E and from latitude 5° N to 55° N. The frequency of the system is 2,491.75 MHz. Completion The first satellite, BeiDou-1A, was launched on 31 October 2000. 
The second satellite, BeiDou-1B, was successfully launched on 21 December 2000. The last operational satellite of the constellation, BeiDou-1C, was launched on 25 May 2003. Position calculation In 2007, the official Xinhua News Agency reported that the resolution of the BeiDou system was as high as 0.5 metre. With the existing user terminals it appears that the calibrated accuracy is 20 m (100 m, uncalibrated). Terminals In 2008, a BeiDou-1 ground terminal cost around (), almost 10 times the price of a contemporary GPS terminal. The price of the terminals was explained as being due to the cost of imported microchips. At the China High-Tech Fair ELEXCON of November 2009 in Shenzhen, a BeiDou terminal priced at was presented. Applications Over 1000 BeiDou-1 terminals were used after the 2008 Sichuan earthquake, providing information from the disaster area. As of October 2009, all Chinese border guards in Yunnan were equipped with BeiDou-1 devices. Sun Jiadong, the chief designer of the navigation system, said in 2010 that "Many organizations have been using our system for a while, and they like it very much". Decommissioning BeiDou-1 was decommissioned at the end of 2012, after the BeiDou-2 system became operational. BeiDou-2 BeiDou-2 (formerly known as COMPASS) is not an extension to the older BeiDou-1, but rather supersedes it outright. The new system is a constellation of 35 satellites, which include 5 geostationary orbit satellites for backward compatibility with BeiDou-1, and 30 non-geostationary satellites (27 in medium Earth orbit and 3 in inclined geosynchronous orbit), that offer complete coverage of the globe. The ranging signals are based on the CDMA principle and have complex structure typical of Galileo or modernized GPS. Similar to the other global navigation satellite systems (GNSSs), there are two levels of positioning service: open (public) and restricted (military). The public service is available globally to general users. When all the currently planned GNSSs are deployed, users of multi-constellation receivers will benefit from a total over 100 satellites, which will significantly improve all aspects of positioning, especially availability of the signals in so-called urban canyons. The general designer of the COMPASS navigation system is Sun Jiadong, who is also the general designer of its predecessor, the original BeiDou navigation system. All BeiDou satellites are equipped with laser retroreflector arrays for satellite laser ranging and the verification of the orbit quality. Accuracy There are two levels of service provided – a free service to civilians and licensed service to the Chinese government and military. The free civilian service has a 10-metre location-tracking accuracy, synchronizes clocks with an accuracy of 10 nanoseconds, and measures speeds to within 0.2 m/s. The restricted military service has a location accuracy of 10 cm, can be used for communication, and will supply information about the system status to the user. In 2019, the International GNSS Service started providing precise orbits of BeiDou satellites in experimental products. To date, the military service has been granted only to the People's Liberation Army and to the Pakistan Armed Forces. Frequencies Frequencies for COMPASS are allocated in four bands: E1, E2, E5B, and E6; they overlap with Galileo. 
The fact of overlapping could be convenient from the point of view of the receiver design, but on the other hand raises the issues of system interference, especially within E1 and E2 bands, which are allocated for Galileo's publicly regulated service. However, under International Telecommunication Union (ITU) policies, the first nation to start broadcasting in a specific frequency will have priority to that frequency, and any subsequent users will be required to obtain permission prior to using that frequency, and otherwise ensure that their broadcasts do not interfere with the original nation's broadcasts. As of 2009, it appeared that Chinese COMPASS satellites would start transmitting in the E1, E2, E5B, and E6 bands before Europe's Galileo satellites and thus have primary rights to these frequency ranges. Compass-M1 Compass-M1 is an experimental satellite launched for signal testing and validation and for the frequency filing on 14 April 2007. The role of Compass-M1 for Compass is similar to the role of the GIOVE satellites for the Galileo system. The orbit of Compass-M1 is nearly circular, has an altitude of 21,150 km and an inclination of 55.5°. The investigation of the transmitted signals started immediately after the launch of Compass-M1 on 14 April 2007. Soon after in June 2007, engineers at CNES reported the spectrum and structure of the signals. A month later, researchers from Stanford University reported the complete decoding of the "I" signals components. The knowledge of the codes allowed a group of engineers at Septentrio to build the COMPASS receiver and report tracking and multipath characteristics of the "I" signals on E2 and E5B. Operation In December 2011, the system went into operation on a trial basis. It started providing navigation, positioning and timing data to China and the neighbouring area for free from 27 December 2011. During this trial run, Compass offered positioning accuracy to within 25 metres and the precision improved as more satellites were launched. Upon the system's official launch, it pledged to offer general users positioning information accurate to the nearest 10 m, measure speeds within 0.2 metres per second, and provide signals for clock synchronisation accurate to 0.02 microseconds. The BeiDou-2 system began offering services for the Asia-Pacific region in December 2012. At this time, the system could provide positioning data between longitude 55° E to 180° E and from latitude 55° S to 55° N. The new-generation BeiDou satellites support short message service. Completion In December 2011, Xinhua stated that "[t]he basic structure of the BeiDou system has now been established, and engineers are now conducting comprehensive system test and evaluation. The system will provide test-run services of positioning, navigation and time for China and the neighboring areas before the end of this year, according to the authorities". The system became operational in the China region that same month. The global navigation system should be finished by 2020. As of December 2012, 16 satellites for BeiDou-2 had been launched, with 14 in service. As of December 2017, 150 million Chinese smartphones (20% of the market) were equipped to utilize BeiDou. Constellations The regional BeiDou-1 system was decommissioned at the end of 2012. The first satellite of the second-generation system, Compass-M1 was launched in 2007. It was followed by further nine satellites during 2009–2011, achieving functional regional coverage. 
A total of 16 satellites were launched during this phase. In 2015, the system began its transition towards global coverage with the first launch of a new-generation of satellites, and the 17th one within the new system. On 25 July 2015, the 18th and 19th satellites were successfully launched from the Xichang Satellite Launch Center, marking the first time for China to launch two satellites at once on top of a Long March 3B/Expedition 1 carrier rocket. The Expedition-1 is an independent upper stage capable of delivering one or more spacecraft into different orbits. On 29 September 2015, the 20th satellite was launched, carrying a hydrogen maser for the first time within the system. In 2016, the 21st, 22nd and 23rd satellites were launched from Xichang Satellite Launch Center, the last two of which entered into service on 5 August and 30 November, respectively. Orbital period: 12 hours and 53 minutes (every 13 revolutions, done in 7 sidereal days, a satellite passes over the same location). Prohibitions In 2018, Taiwan's National Communications Commission announced that it would be illegal to use BeiDou products in Taiwan without its approval.
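The repeat cycle stated above for the orbital period can be verified with simple arithmetic; the short Python check below uses only the figures quoted in this section (12 h 53 min per revolution, 13 revolutions, 7 sidereal days) and is an illustrative back-of-the-envelope calculation, not anything drawn from official BeiDou documentation.

# Back-of-the-envelope check of the ground-track repeat cycle quoted above.
period_min = 12 * 60 + 53                  # one revolution: 12 h 53 min, in minutes
sidereal_day_min = 23 * 60 + 56 + 4 / 60   # one sidereal day: about 23 h 56 min 4 s

thirteen_revolutions_h = 13 * period_min / 60       # ~167.5 hours
seven_sidereal_days_h = 7 * sidereal_day_min / 60   # ~167.5 hours

print(thirteen_revolutions_h, seven_sidereal_days_h)
# The two totals nearly coincide, so after 13 revolutions (about one week)
# a satellite passes over the same ground location again.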
Technology
Navigation
null
359135
https://en.wikipedia.org/wiki/Chemical%20kinetics
Chemical kinetics
Chemical kinetics, also known as reaction kinetics, is the branch of physical chemistry that is concerned with understanding the rates of chemical reactions. It is different from chemical thermodynamics, which deals with the direction in which a reaction occurs but in itself tells nothing about its rate. Chemical kinetics includes investigations of how experimental conditions influence the speed of a chemical reaction and yield information about the reaction's mechanism and transition states, as well as the construction of mathematical models that also can describe the characteristics of a chemical reaction. History The pioneering work of chemical kinetics was done by German chemist Ludwig Wilhelmy in 1850. He experimentally studied the rate of inversion of sucrose and he used integrated rate law for the determination of the reaction kinetics of this reaction. His work was noticed 34 years later by Wilhelm Ostwald. In 1864, Peter Waage and Cato Guldberg published the law of mass action, which states that the speed of a chemical reaction is proportional to the quantity of the reacting substances. Van 't Hoff studied chemical dynamics and in 1884 published his famous "Études de dynamique chimique". In 1901 he was awarded the first Nobel Prize in Chemistry "in recognition of the extraordinary services he has rendered by the discovery of the laws of chemical dynamics and osmotic pressure in solutions". After van 't Hoff, chemical kinetics dealt with the experimental determination of reaction rates from which rate laws and rate constants are derived. Relatively simple rate laws exist for zero order reactions (for which reaction rates are independent of concentration), first order reactions, and second order reactions, and can be derived for others. Elementary reactions follow the law of mass action, but the rate law of stepwise reactions has to be derived by combining the rate laws of the various elementary steps, and can become rather complex. In consecutive reactions, the rate-determining step often determines the kinetics. In consecutive first order reactions, a steady state approximation can simplify the rate law. The activation energy for a reaction is experimentally determined through the Arrhenius equation and the Eyring equation. The main factors that influence the reaction rate include: the physical state of the reactants, the concentrations of the reactants, the temperature at which the reaction occurs, and whether or not any catalysts are present in the reaction. Gorban and Yablonsky have suggested that the history of chemical dynamics can be divided into three eras. The first is the van 't Hoff wave searching for the general laws of chemical reactions and relating kinetics to thermodynamics. The second may be called the Semenov-Hinshelwood wave with emphasis on reaction mechanisms, especially for chain reactions. The third is associated with Aris and the detailed mathematical description of chemical reaction networks. Factors affecting reaction rate Nature of the reactants The reaction rate varies depending upon what substances are reacting. Acid/base reactions, the formation of salts, and ion exchange are usually fast reactions. When covalent bond formation takes place between the molecules and when large molecules are formed, the reactions tend to be slower. The nature and strength of bonds in reactant molecules greatly influence the rate of their transformation into products. 
Physical state The physical state (solid, liquid, or gas) of a reactant is also an important factor in the rate of change. When reactants are in the same phase, as in aqueous solution, thermal motion brings them into contact. However, when they are in separate phases, the reaction is limited to the interface between the reactants. Reaction can occur only at their area of contact; in the case of a liquid and a gas, at the surface of the liquid. Vigorous shaking and stirring may be needed to bring the reaction to completion. This means that the more finely divided a solid or liquid reactant the greater its surface area per unit volume and the more contact it has with the other reactant, thus the faster the reaction. To make an analogy, for example, when one starts a fire, one uses wood chips and small branches — one does not start with large logs right away. In organic chemistry, on water reactions are the exception to the rule that homogeneous reactions take place faster than heterogeneous reactions (those in which solute and solvent are not mixed properly). Surface area of solid state In a solid, only those particles that are at the surface can be involved in a reaction. Crushing a solid into smaller parts means that more particles are present at the surface, and the frequency of collisions between these and reactant particles increases, and so reaction occurs more rapidly. For example, Sherbet (powder) is a mixture of very fine powder of malic acid (a weak organic acid) and sodium hydrogen carbonate. On contact with the saliva in the mouth, these chemicals quickly dissolve and react, releasing carbon dioxide and providing for the fizzy sensation. Also, fireworks manufacturers modify the surface area of solid reactants to control the rate at which the fuels in fireworks are oxidised, using this to create diverse effects. For example, finely divided aluminium confined in a shell explodes violently. If larger pieces of aluminium are used, the reaction is slower and sparks are seen as pieces of burning metal are ejected. Concentration The reactions are due to collisions of reactant species. The frequency with which the molecules or ions collide depends upon their concentrations. The more crowded the molecules are, the more likely they are to collide and react with one another. Thus, an increase in the concentrations of the reactants will usually result in the corresponding increase in the reaction rate, while a decrease in the concentrations will usually have a reverse effect. For example, combustion will occur more rapidly in pure oxygen than in air (21% oxygen). The rate equation shows the detailed dependence of the reaction rate on the concentrations of reactants and other species present. The mathematical forms depend on the reaction mechanism. The actual rate equation for a given reaction is determined experimentally and provides information about the reaction mechanism. The mathematical expression of the rate equation is often given by r = k[A1]^m1 [A2]^m2 ··· [An]^mn. Here k is the reaction rate constant, [Ai] is the molar concentration of reactant i and mi is the partial order of reaction for this reactant. The partial order for a reactant can only be determined experimentally and is often not indicated by its stoichiometric coefficient. Temperature Temperature usually has a major effect on the rate of a chemical reaction. Molecules at a higher temperature have more thermal energy. Although collision frequency is greater at higher temperatures, this alone contributes only a very small proportion to the increase in rate of reaction.
Much more important is the fact that the proportion of reactant molecules with sufficient energy to react (energy greater than activation energy: E > Ea) is significantly higher and is explained in detail by the Maxwell–Boltzmann distribution of molecular energies. The effect of temperature on the reaction rate constant usually obeys the Arrhenius equation k = A exp(−Ea/(RT)), where A is the pre-exponential factor or A-factor, Ea is the activation energy, R is the molar gas constant and T is the absolute temperature. At a given temperature, the chemical rate of a reaction depends on the value of the A-factor, the magnitude of the activation energy, and the concentrations of the reactants. Usually, rapid reactions require relatively small activation energies. The 'rule of thumb' that the rate of chemical reactions doubles for every 10 °C temperature rise is a common misconception. This may have been generalized from the special case of biological systems, where the α (temperature coefficient) is often between 1.5 and 2.5. The kinetics of rapid reactions can be studied with the temperature jump method. This involves using a sharp rise in temperature and observing the relaxation time of the return to equilibrium. A particularly useful form of temperature jump apparatus is a shock tube, which can rapidly increase a gas's temperature by more than 1000 degrees. Catalysts A catalyst is a substance that alters the rate of a chemical reaction but remains chemically unchanged afterwards. The catalyst increases the rate of the reaction by providing an alternative reaction mechanism with a lower activation energy. In autocatalysis a reaction product is itself a catalyst for that reaction, leading to positive feedback. Proteins that act as catalysts in biochemical reactions are called enzymes. Michaelis–Menten kinetics describe the rate of enzyme-mediated reactions. A catalyst does not affect the position of the equilibrium, as the catalyst speeds up the backward and forward reactions equally. In certain organic molecules, specific substituents can have an influence on reaction rate in neighbouring group participation. Pressure Increasing the pressure in a gaseous reaction will increase the number of collisions between reactants, increasing the rate of reaction. This is because the activity of a gas is directly proportional to the partial pressure of the gas. This is similar to the effect of increasing the concentration of a solution. In addition to this straightforward mass-action effect, the rate coefficients themselves can change due to pressure. The rate coefficients and products of many high-temperature gas-phase reactions change if an inert gas is added to the mixture; variations on this effect are called fall-off and chemical activation. These phenomena are due to exothermic or endothermic reactions occurring faster than heat transfer, causing the reacting molecules to have non-thermal energy distributions (non-Boltzmann distribution). Increasing the pressure increases the heat transfer rate between the reacting molecules and the rest of the system, reducing this effect. Condensed-phase rate coefficients can also be affected by pressure, although rather high pressures are required for a measurable effect because ions and molecules are not very compressible. This effect is often studied using diamond anvils. A reaction's kinetics can also be studied with a pressure jump approach. This involves making fast changes in pressure and observing the relaxation time of the return to equilibrium.
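To make the Arrhenius dependence discussed under Temperature above concrete, the sketch below evaluates k = A exp(−Ea/(RT)) at two temperatures in Python; the pre-exponential factor and activation energy are arbitrary illustrative values, not data for any particular reaction.

import math

# Illustrative Arrhenius evaluation, k = A * exp(-Ea / (R * T)).
# A and Ea below are made-up example values, not measured data.
A = 1.0e13        # pre-exponential factor, 1/s
Ea = 75_000.0     # activation energy, J/mol
R = 8.314         # molar gas constant, J/(mol K)

def rate_constant(T):
    """Rate constant at absolute temperature T (in kelvin)."""
    return A * math.exp(-Ea / (R * T))

print(rate_constant(298.15))                          # k at 25 degC
print(rate_constant(308.15) / rate_constant(298.15))  # ~2.7 for this Ea and temperature range

The ratio for a 10-degree rise depends on both the activation energy and the temperature itself, which is why the doubling rule mentioned above holds only as a rough special case.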
Absorption of light The activation energy for a chemical reaction can be provided when one reactant molecule absorbs light of suitable wavelength and is promoted to an excited state. The study of reactions initiated by light is photochemistry, one prominent example being photosynthesis. Experimental methods The experimental determination of reaction rates involves measuring how the concentrations of reactants or products change over time. For example, the concentration of a reactant can be measured by spectrophotometry at a wavelength where no other reactant or product in the system absorbs light. For reactions which take at least several minutes, it is possible to start the observations after the reactants have been mixed at the temperature of interest. Fast reactions For faster reactions, the time required to mix the reactants and bring them to a specified temperature may be comparable or longer than the half-life of the reaction. Special methods to start fast reactions without slow mixing step include Stopped flow methods, which can reduce the mixing time to the order of a millisecond The stopped flow methods have limitation, for example, we need to consider the time it takes to mix gases or solutions and are not suitable if the half-life is less than about a hundredth of a second. Chemical relaxation methods such as temperature jump and pressure jump, in which a pre-mixed system initially at equilibrium is perturbed by rapid heating or depressurization so that it is no longer at equilibrium, and the relaxation back to equilibrium is observed. For example, this method has been used to study the neutralization H3O+ + OH− with a half-life of 1 μs or less under ordinary conditions. Flash photolysis, in which a laser pulse produces highly excited species such as free radicals, whose reactions are then studied. Equilibrium While chemical kinetics is concerned with the rate of a chemical reaction, thermodynamics determines the extent to which reactions occur. In a reversible reaction, chemical equilibrium is reached when the rates of the forward and reverse reactions are equal (the principle of dynamic equilibrium) and the concentrations of the reactants and products no longer change. This is demonstrated by, for example, the Haber–Bosch process for combining nitrogen and hydrogen to produce ammonia. Chemical clock reactions such as the Belousov–Zhabotinsky reaction demonstrate that component concentrations can oscillate for a long time before finally attaining the equilibrium. Free energy In general terms, the free energy change (ΔG) of a reaction determines whether a chemical change will take place, but kinetics describes how fast the reaction is. A reaction can be very exothermic and have a very positive entropy change but will not happen in practice if the reaction is too slow. If a reactant can produce two products, the thermodynamically most stable one will form in general, except in special circumstances when the reaction is said to be under kinetic reaction control. The Curtin–Hammett principle applies when determining the product ratio for two reactants interconverting rapidly, each going to a distinct product. It is possible to make predictions about reaction rate constants for a reaction from free-energy relationships. The kinetic isotope effect is the difference in the rate of a chemical reaction when an atom in one of the reactants is replaced by one of its isotopes. 
Chemical kinetics provides information on residence time and heat transfer in a chemical reactor in chemical engineering and the molar mass distribution in polymer chemistry. It also provides information in corrosion engineering. Applications and models The mathematical models that describe chemical reaction kinetics provide chemists and chemical engineers with tools to better understand and describe chemical processes such as food decomposition, microorganism growth, stratospheric ozone decomposition, and the chemistry of biological systems. These models can also be used in the design or modification of chemical reactors to optimize product yield, more efficiently separate products, and eliminate environmentally harmful by-products. When performing catalytic cracking of heavy hydrocarbons into gasoline and light gas, for example, kinetic models can be used to find the temperature and pressure at which the highest yield of heavy hydrocarbons into gasoline will occur. Chemical kinetics is frequently validated and explored through modeling in specialized packages by means of ordinary differential equation solving (ODE solving) and curve-fitting. Numerical methods In some cases, equations are unsolvable analytically, but can be solved using numerical methods if data values are given. There are two different ways to do this, by either using software programmes or mathematical methods such as the Euler method. Examples of software for chemical kinetics are i) Tenua, a Java app which simulates chemical reactions numerically and allows comparison of the simulation to real data, ii) Python coding for calculations and estimates and iii) the Kintecus software compiler to model, regress, fit and optimize reactions. Numerical integration: for a 1st-order reaction A → B, the differential equation of the reactant A is d[A]/dt = −k[A]. It can also be expressed as d[A]/[A] = −k dt, which is the same as ln([A]/[A]0) = −kt, i.e. [A] = [A]0 exp(−kt). To solve the differential equations with Euler and Runge-Kutta methods we need to have the initial values.
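As a minimal illustration of the numerical integration just described, the following Python sketch (Python being one of the options listed above) applies the Euler method to the first-order reaction A → B with assumed values for the rate constant and initial concentration, and compares the result with the exact exponential solution.

import math

# Euler integration of d[A]/dt = -k[A] for the first-order reaction A -> B.
# k, A0, dt and t_end are assumed example values, not data for a real reaction.
k = 0.5       # rate constant, 1/s
A0 = 1.0      # initial concentration [A]0, mol/L
dt = 0.01     # time step, s
t_end = 5.0   # total simulated time, s

A = A0
t = 0.0
while t < t_end:
    A += dt * (-k * A)   # Euler step: [A](t+dt) ~ [A](t) + dt * d[A]/dt
    t += dt

exact = A0 * math.exp(-k * t_end)   # analytic solution [A] = [A]0 exp(-kt)
print(A, exact)                     # the two values agree closely for a small dt

Replacing the Euler step with a Runge–Kutta update, or passing the same differential equation to a general ODE solver, improves accuracy at larger time steps.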
Physical sciences
Chemical reactions
null
359238
https://en.wikipedia.org/wiki/Prescription%20drug
Prescription drug
A prescription drug (also prescription medication, prescription medicine or prescription-only medication) is a pharmaceutical drug that is permitted to be dispensed only to those with a medical prescription. In contrast, over-the-counter drugs can be obtained without a prescription. The reason for this difference in substance control is the potential scope of misuse, from drug abuse to practicing medicine without a license and without sufficient education. Different jurisdictions have different definitions of what constitutes a prescription drug. In North America, , usually printed as "Rx", is used as an abbreviation of the word "prescription". It is a contraction of the Latin word "recipe" (an imperative form of "recipere") meaning "take". Prescription drugs are often dispensed together with a monograph (in Europe, a Patient Information Leaflet or PIL) that gives detailed information about the drug. The use of prescription drugs has been increasing since the 1960s. Regulation Australia In Australia, the Standard for the Uniform Scheduling of Medicines and Poisons (SUSMP) governs the manufacture and supply of drugs with several categories: Schedule 1 – Defunct Drug. Schedule 2 – Pharmacy Medicine Schedule 3 – Pharmacist-Only Medicine Schedule 4 – Prescription-Only Medicine/Prescription Animal Remedy Schedule 5 – Caution/Poison. Schedule 6 – Poison Schedule 7 – Dangerous Poison Schedule 8 – Controlled Drug (Possession without authority illegal) Schedule 9 – Prohibited Substance (Possession illegal without a license legal only for research purposes) Schedule 10 – Controlled Poison. Unscheduled Substances. As in other developed countries, the person requiring a prescription drug attends the clinic of a qualified health practitioner, such as a physician, who may write the prescription for the required drug. Many prescriptions issued by health practitioners in Australia are covered by the Pharmaceutical Benefits Scheme, a scheme that provides subsidised prescription drugs to residents of Australia to ensure that all Australians have affordable and reliable access to a wide range of necessary medicines. When purchasing a drug under the PBS, the consumer pays no more than the patient co-payment contribution, which, as of January 1, 2022, is A$42.50 for general patients. Those covered by government entitlements (low-income earners, welfare recipients, Health Care Card holders, etc.) and or under the Repatriation Pharmaceutical Benefits Scheme (RPBS) have a reduced co-payment, which is A$6.80 in 2022. The co-payments are compulsory and can be discounted by pharmacies up to a maximum of A$1.00 at cost to the pharmacy. United Kingdom In the United Kingdom, the Medicines Act 1968 and the Prescription Only Medicines (Human Use) Order 1997 contain regulations that cover the supply of sale, use, prescribing and production of medicines. There are three categories of medicine: Prescription-only medicines (POM), which may be dispensed (sold in the case of a private prescription) by a pharmacist only to those to whom they have been prescribed Pharmacy medicines (P), which may be sold by a pharmacist without a prescription General sales list (GSL) medicines, which may be sold without a prescription in any shop The simple possession of a prescription-only medicine without a prescription is legal unless it is covered by the Misuse of Drugs Act 1971. A patient visits a medical practitioner or dentist, who may prescribe drugs and certain other medical items, such as blood glucose-testing equipment for diabetics. 
Also, qualified and experienced nurses, paramedics and pharmacists may be independent prescribers. Both may prescribe all POMs (including controlled drugs), but may not prescribe Schedule 1 controlled drugs, and 3 listed controlled drugs for the treatment of addiction; which is similar to doctors, who require a special licence from the Home Office to prescribe schedule 1 drugs. Schedule 1 drugs have little or no medical benefit, hence their limitations on prescribing. District nurses and health visitors have had limited prescribing rights since the mid-1990s; until then, prescriptions for dressings and simple medicines had to be signed by a doctor. Once issued, a prescription is taken by the patient to a pharmacy, which dispenses the medicine. Most prescriptions are NHS prescriptions, subject to a standard charge that is unrelated to what is dispensed. The NHS prescription fee was increased to £9.90 for each item in England in May 2024; prescriptions are free of charge if prescribed and dispensed in Scotland, Wales and Northern Ireland, and for some patients in England, such as inpatients, children, those over 60s or with certain medical conditions, and claimants of certain benefits. The pharmacy charges the NHS the actual cost of the medicine, which may vary from a few pence to hundreds of pounds. A patient can consolidate prescription charges by using a prescription payment certificate (informally a "season ticket"), effectively capping costs at £31.25 a quarter or £111.60 for a year. Outside the NHS, private prescriptions are issued by private medical practitioner and sometimes under the NHS for medicines that are not covered by the NHS. A patient pays the pharmacy the normal price for medicine prescribed outside the NHS. Survey results published by Ipsos MORI in 2008 found that around 800,000 people in England were not collecting prescriptions or getting them dispensed because of the cost, the same as in 2001. United States In the United States, the Federal Food, Drug, and Cosmetic Act defines what substances, known as legend drugs, require a prescription for them to be dispensed by a pharmacy. The federal government authorizes physicians (of any specialty), physician assistants, nurse practitioners and other advanced practice nurses, veterinarians, dentists, and optometrists to prescribe any controlled substance. They are issued unique DEA numbers. Many other mental and physical health technicians, including basic-level registered nurses, medical assistants, emergency medical technicians, most psychologists, and social workers, are not authorized to prescribe legend drugs. The federal Controlled Substances Act (CSA) was enacted in 1970. It regulates manufacture, importation, possession, use, and distribution of controlled substances, which are drugs with potential for abuse or addiction. The legislation classifies these drugs into five schedules, with varying qualifications for each schedule. The schedules are designated schedule I, schedule II, schedule III, schedule IV, and schedule V. Many drugs other than controlled substances require a prescription. The safety and the effectiveness of prescription drugs in the US are regulated by the 1987 Prescription Drug Marketing Act (PDMA). The Food and Drug Administration (FDA) is charged with implementing the law. As a general rule, over-the-counter drugs (OTC) are used to treat a condition that does not need care from a healthcare professional if have been proven to meet higher safety standards for self-medication by patients. 
Often, a lower strength of a drug will be approved for OTC use, but higher strengths require a prescription to be obtained; a notable case is ibuprofen, which has been widely available as an OTC pain killer since the mid-1980s, but it is available by prescription in doses up to four times the OTC dose for severe pain that is not adequately controlled by the OTC strength. Herbal preparations, amino acids, vitamins, minerals, and other food supplements are regulated by the FDA as dietary supplements. Because specific health claims cannot be made, the consumer must make informed decisions when purchasing such products. By law, American pharmacies operated by "membership clubs" such as Costco and Sam's Club must allow non-members to use their pharmacy services and may not charge more for these services than they charge as their members. Physicians may legally prescribe drugs for uses other than those specified in the FDA approval, known as off-label use. Drug companies, however, are prohibited from marketing their drugs for off-label uses. Some prescription drugs are commonly abused, particularly those marketed as analgesics, including fentanyl (Duragesic), hydrocodone (Vicodin), oxycodone (OxyContin), oxymorphone (Opana), propoxyphene (Darvon), hydromorphone (Dilaudid), meperidine (Demerol), and diphenoxylate (Lomotil). Some prescription painkillers have been found to be addictive, and unintentional poisoning deaths in the United States have skyrocketed since the 1990s according to the National Safety Council. Prescriber education guidelines as well as patient education, prescription drug monitoring programs and regulation of pain clinics are regulatory tactics which have been used to curtail opioid use and misuse. Expiration date The expiration date, required in several countries, specifies the date up to which the manufacturer guarantees the full potency and safety of a drug. In the United States, expiration dates are determined by regulations established by the FDA. The FDA advises consumers not to use products after their expiration dates. A study conducted by the U.S. Food and Drug Administration covered over 100 drugs, prescription and over-the-counter. The results showed that about 90% of them were safe and effective far past their original expiration date. At least one drug worked 15 years after its expiration date. Joel Davis, a former FDA expiration-date compliance chief, said that with a handful of exceptions—notably nitroglycerin, insulin, and some liquid antibiotics (outdated tetracyclines can cause Fanconi syndrome)—most expired drugs are probably effective. The American Medical Association issued a report and statement on Pharmaceutical Expiration Dates. The Harvard Medical School Family Health Guide notes that, with rare exceptions, "it's true the effectiveness of a drug may decrease over time, but much of the original potency still remains even a decade after the expiration date". The expiration date is the final day that the manufacturer guarantees the full potency and safety of a medication. Drug expiration dates exist on most medication labels, including prescription, over-the-counter and dietary supplements. U.S. pharmaceutical manufacturers are required by law to place expiration dates on prescription products prior to marketing. For legal and liability reasons, manufacturers will not make recommendations about the stability of drugs past the original expiration date. Cost Prices of prescription drugs vary widely around the world. 
Prescription costs for biosimilar and generic drugs are usually less than brand names, but the cost is different from one pharmacy to another. To lower prescription drug costs, some U.S. states have sought federal approval to buy drugs in Canada, as of 2022. Generics undergo strict scrutiny to meet the equal efficacy, safety, dosage, strength, stability, and quality of brand name drugs. Generics are developed after the brand name has already been established, and so generic drug approval in many aspects has a shortened approval process because it replicates the brand name drug. Brand name drugs cost more due to time, money, and resources that drug companies invest in them to conduct development, including clinical trials that the FDA requires for the drug to be marketed. Because drug companies have to invest more in research costs to do this, brand name drug prices are much higher when sold to consumers. When the patent expires for a brand name drug, generic versions of that drug are produced by other companies and are sold for a lower price. By switching to generic prescription drugs, patients can save significant amounts of money: e.g. one study by the FDA showed an example with more than 52% savings in a consumer's overall prescription drug costs. Strategies to limit drug prices in the United States In the United States there are many resources available to patients to lower the costs of medication. These include copayments, coinsurance, and deductibles. The Medicaid Drug Rebate Program is another example. Generic drug programs lower the amount of money patients have to pay when picking up their prescription at the pharmacy. As their name implies, they only cover generic drugs. Co-pay assistance programs are programs that help patients lower the costs of specialty medications; i.e., medications that are on restricted formularies, have limited distribution, and/or have no generic version available. These medications can include drugs for HIV, hepatitis C, and multiple sclerosis. Patient Assistance Program Center (RxAssist) has a list of foundations that provide co-pay assistance programs. Co-pay assistance programs are for under-insured patients. Patients without insurance are not eligible for this resource; however, they may be eligible for patient assistance programs. Patient assistance programs are funded by the manufacturer of the medication. Patients can often apply to these programs through the manufacturer's website. This type of assistance program is one of the few options available to uninsured patients. The out-of-pocket cost for patients enrolled in co-pay assistance or patient assistance programs is $0. It is a major resource to help lower costs of medications; however, many providers and patients are not aware of these resources. Environment Traces of prescription drugs, including antibiotics, anti-convulsants, mood stabilizers and sex hormones, have been detected in drinking water. Pharmaceutically active compounds (PhACs) discarded from human therapy and their metabolites may not be eliminated entirely by sewage treatment plants and have been detected at low concentrations in surface waters downstream from those plants. The continuous discarding of incompletely treated water may interact with other environmental chemicals and lead to uncertain ecological effects. Due to most pharmaceuticals being highly soluble, fish and other aquatic organisms are susceptible to their effects.
The long-term effects of pharmaceuticals in the environment may affect survival and reproduction of such organisms. However, levels of medical drug waste in the water are low enough that they are not a direct concern to human health. Processes such as biomagnification, however, are potential human health concerns. On the other hand, there is clear evidence of harm to aquatic animals. Recent advancements in technology have allowed scientists to detect smaller, trace quantities of pharmaceuticals in the ng/ml range. Despite being found at low concentrations, female hormonal contraceptives may cause feminizing effects on male vertebrate species, such as fish, frogs and crocodiles. The FDA established guidelines in 2007 to inform consumers how they should dispose of prescription drugs. When medications do not include specific disposal instructions, patients should not flush medications down the toilet, but instead use medication take-back programs to reduce the amount of pharmaceutical waste in sewage and landfills. If no take-back programs are available, prescription drugs can be discarded in household trash after they are crushed or dissolved and then mixed in a separate container or sealable bag with undesirable substances like cat litter or other unappealing material (to discourage consumption).
Biology and health sciences
Drugs and pharmacology
null
359396
https://en.wikipedia.org/wiki/7z
7z
7z is a compressed archive file format that supports several different data compression, encryption and pre-processing algorithms. The 7z format initially appeared as implemented by the 7-Zip archiver. The 7-Zip program is publicly available under the terms of the GNU Lesser General Public License. The LZMA SDK 4.62 was placed in the public domain in December 2008. The latest stable version of 7-Zip and LZMA SDK is version 24.09. The 7z file format specification has been distributed with 7-Zip's source code since 2015. The specification can be found in plain text format in the 'doc' sub-directory of the source code distribution. Features and enhancements The 7z format provides the following main features: Open, modular architecture that allows any compression, conversion, or encryption method to be stacked. High compression ratios (depending on the compression method used). 256-bit AES encryption. Zip 2.0 (Legacy) Encryption Large file support (up to approximately 16 exbibytes, or 2^64 bytes). Unicode file names. Support for solid compression, where multiple files of similar type are compressed within a single stream, in order to exploit the combined redundancy inherent in similar files. Compression and encryption of archive headers. Support for multi-part archives: e.g. xxx.7z.001, xxx.7z.002, ... (see the context menu items Split File... to create them and Combine Files... to re-assemble an archive from a set of multi-part component files). Support for custom codec plugin DLLs. The format's open architecture allows additional future compression methods to be added to the standard. Compression methods The following compression methods are currently defined: LZMA – A variation of the LZ77 algorithm, using a sliding dictionary up to 4 GB in length for duplicate string elimination. The LZ stage is followed by entropy coding using a Markov chain-based range coder and binary trees. LZMA2 – A modified version of LZMA providing better multithreading support and less expansion of incompressible data. Bzip2 – The standard Burrows–Wheeler transform algorithm. Bzip2 uses two reversible transformations: BWT, then move-to-front, with Huffman coding for symbol reduction (the actual compression element). PPMd – Dmitry Shkarin's 2002 PPMdH (PPMII (Prediction by Partial matching with Information Inheritance) and cPPMII (complicated PPMII)) with small changes: PPMII is an improved version of the 1984 PPM compression algorithm (prediction by partial matching). DEFLATE – Standard algorithm based on 32 kB LZ77 and Huffman coding. Deflate is found in several file formats including ZIP, gzip, PNG and PDF. 7-Zip contains a from-scratch DEFLATE encoder that frequently beats the de facto standard zlib version in compression size, but at the expense of CPU usage. A suite of recompression tools called AdvanceCOMP contains a copy of the DEFLATE encoder from the 7-Zip implementation; these utilities can often be used to further compress the size of existing gzip, ZIP, PNG, or MNG files. Pre-processing filters The LZMA SDK comes with the BCJ and BCJ2 preprocessors included, so that later stages are able to achieve greater compression: For x86, ARM, PowerPC (PPC), IA-64 Itanium, and ARM Thumb processors, jump targets are 'normalized' before compression by changing relative position into absolute values.
For x86, this means that near jumps, calls and conditional jumps (but not short jumps and short conditional jumps) are converted from the machine language "jump 1655 bytes backwards" style of notation to the normalized "jump to address 5554" style; all jumps to 5554, perhaps a common subroutine, are thus encoded identically, making them more compressible. BCJ – Converter for 32-bit x86 executables. Normalises target addresses of near jumps and calls from relative distances to absolute destinations. BCJ2 – Pre-processor for 32-bit x86 executables. BCJ2 is an improvement on BCJ, adding additional x86 jump/call instruction processing. Near jump, near call and conditional near jump targets are split out and compressed separately in another stream. Delta encoding – delta filter, a basic preprocessor for multimedia data. Similar executable pre-processing technology is included in other software; the RAR compressor features displacement compression for 32-bit x86 and IA-64 executables, and the UPX runtime executable file compressor includes support for working with 16-bit values within DOS binary files. Encryption The 7z format supports encryption with the AES algorithm with a 256-bit key. The key is generated from a user-supplied passphrase using an algorithm based on the SHA-256 hash function. SHA-256 is executed 2¹⁹ (524,288) times, which causes a significant delay on slow PCs before compression or extraction starts. This technique is called key stretching and is used to make a brute-force search for the passphrase more difficult. Current GPU-based and custom hardware attacks limit the effectiveness of this particular method of key stretching, so it is still important to choose a strong password. The 7z format provides the option to encrypt the filenames of a 7z archive. Limitations The 7z format does not store filesystem permissions (such as UNIX owner/group permissions or NTFS ACLs), and hence can be inappropriate for backup/archival purposes. A workaround on UNIX-like systems is to convert data to a tar bitstream before compressing with 7z. But GNU tar (common in many UNIX environments) can also compress with the LZMA2 algorithm ("xz") natively, without the use of 7z, using the "-J" switch. The resulting file extension is ".tar.xz" or ".txz" and not ".tar.7z". This method of compression has been adopted by many distributions for packaging, such as Arch, Debian (deb), Fedora (rpm) and Slackware. (The older "lzma" format is less efficient.) Note, however, that tar does not store the filesystem's character encoding, which means that tar-compressed filenames can become unreadable if the archive is decompressed on a different computer. The 7z format does not allow extraction of some "broken files"; that is, if one has only the first segment of a series of 7z files, 7z cannot extract the start of the files within the archive and must wait until all segments are downloaded. The 7z format also lacks recovery records, making it vulnerable to data degradation unless used in conjunction with external solutions, such as parchives, or within filesystems with robust error correction. By way of comparison, zip files also lack a recovery feature, while the rar format has one.
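The key-stretching step described above can be illustrated with a short, simplified sketch. This is not the exact 7-Zip key-derivation routine (which feeds the UTF-16 password and a counter into a single running SHA-256); it only shows the general idea that each password guess must pay for on the order of 2¹⁹ SHA-256 evaluations.

```python
import hashlib

def stretched_key(passphrase: str, iterations: int = 2 ** 19) -> bytes:
    """Illustrative key stretching: hash the passphrase many times so that
    each guess in a brute-force attack costs ~2^19 SHA-256 evaluations.
    (The real 7z derivation differs in detail; see the note above.)"""
    data = passphrase.encode("utf-16-le")   # 7-Zip hashes the UTF-16 password
    digest = hashlib.sha256(data).digest()
    for _ in range(iterations - 1):
        digest = hashlib.sha256(digest + data).digest()
    return digest                           # 256-bit AES key material

key = stretched_key("correct horse battery staple")
print(key.hex())
```

The tar-then-compress workaround for preserving Unix ownership and permissions can likewise be sketched with Python's standard library; the directory and archive names below are placeholders.

```python
import tarfile

# Equivalent in spirit to `tar -cJf backup.tar.xz somedir`:
# tar preserves owner/group/mode metadata, xz applies LZMA2 compression.
with tarfile.open("backup.tar.xz", "w:xz") as tar:
    tar.add("somedir")   # placeholder path
```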
Technology
File formats
null
359657
https://en.wikipedia.org/wiki/Pollux%20%28star%29
Pollux (star)
Pollux is the brightest star in the constellation of Gemini. It has the Bayer designation β Geminorum, which is Latinised to Beta Geminorum and abbreviated Beta Gem or β Gem. This is an orange-hued, evolved red giant located at a distance of 34 light-years, making it the closest red giant (and giant star) to the Sun. Since 1943, the spectrum of this star has served as one of the stable anchor points by which other stars are classified. In 2006 an exoplanet (designated Pollux b or β Geminorum b, later named Thestias) was announced to be orbiting it. Nomenclature β Geminorum (Latinised to Beta Geminorum) is the star's Bayer designation. The traditional name Pollux refers to the twins Castor and Pollux in Greek and Roman mythology. In 2016, the International Astronomical Union organized a Working Group on Star Names (WGSN) to catalog and standardize proper names for stars. The WGSN's first bulletin of July 2016 included a table of the first two batches of names approved by the WGSN, which included Pollux for this star. Castor and Pollux are the two "heavenly twin" stars giving the constellation Gemini (Latin, 'the twins') its name. The stars, however, are quite different in detail. Castor is a complex sextuple system of hot, bluish-white type A stars and dim red dwarfs, while Pollux is a single, cooler yellow-orange giant. In Percy Shelley's 1818 poem Homer's Hymn to Castor and Pollux, the star is referred to as "... mild Pollux, void of blame." Originally the planet was designated Pollux b. In July 2014 the International Astronomical Union launched NameExoWorlds, a process for giving proper names to certain exoplanets and their host stars. The process involved public nomination and voting for the new names. In December 2015, the IAU announced the winning name was Thestias for this planet. The winning name was based on that originally submitted by theSkyNet of Australia, namely Leda, Pollux's mother. At the request of the IAU, 'Thestias' (the patronym of Leda, a daughter of Thestius) was substituted. This was because 'Leda' was already attributed to an asteroid and to one of Jupiter's satellites. In the catalogue of stars in the Calendarium of al Achsasi al Mouakket, this star was designated Muekher al Dzira, which was translated into Latin as Posterior Brachii, meaning 'the end of the paw'. In Chinese, the name meaning North River refers to an asterism consisting of Pollux, ρ Geminorum, and Castor. Consequently, Pollux itself is known as the Third Star of North River. Physical characteristics At an apparent visual magnitude of 1.14, Pollux is the brightest star in its constellation, even brighter than its neighbor Castor (α Geminorum). Pollux is 6.7 degrees north of the ecliptic, presently too far north to be occulted by the Moon. The last lunar occultation visible from Earth was on 30 September 116 BCE from high southern latitudes. Parallax measurements by the Hipparcos astrometry satellite place Pollux at a distance of about from the Sun. This is close to the standard distance of 10 parsecs used to define a star's absolute magnitude (its apparent magnitude as it would appear from 10 parsecs); hence, Pollux's apparent and absolute magnitudes are quite close. The star is larger than the Sun, with about two times its mass and almost nine times its radius. Once an A-type main-sequence star similar to Sirius, Pollux has exhausted the hydrogen at its core and evolved into a giant star with a stellar classification of K0 III.
The effective temperature of this star's outer envelope is about , which lies in the range that produces the characteristic orange hue of K-type stars. Pollux has a projected rotational velocity of . The abundance of elements other than hydrogen and helium, what astronomers term the star's metallicity, is uncertain, with estimates ranging from 85% to 155% of the Sun's abundance. An old estimate for Pollux's diameter obtained in 1925 by John Stanley Plaskett via interferometry was 13 million miles (20.9 million km, or ), significantly larger than modern estimates. A more recent measurement by the Navy Precision Optical Interferometer gives a radius of . Another estimate, based on Pollux's spectral lines, obtained . Evidence for a low level of magnetic activity came from the detection of weak X-ray emission using the ROSAT orbiting telescope. The X-ray emission from this star is about 10²⁷ erg s⁻¹, which is roughly the same as the X-ray emission from the Sun. A magnetic field with a strength below 1 gauss has since been confirmed on the surface of Pollux, one of the weakest fields ever detected on a star. The presence of this field suggests that Pollux was once an Ap star with a much stronger magnetic field. The star displays small amplitude radial velocity variations, but is not photometrically variable. Planetary system Since 1993 scientists have suspected an exoplanet orbiting Pollux, based on measured radial velocity oscillations. The existence of the planet, Pollux b, was confirmed and announced on June 16, 2006. Pollux b is calculated to have a mass at least 2.3 times that of Jupiter. The planet is orbiting Pollux with a period of about 590 days. The existence of Pollux b has been disputed; the possibility that the observed radial velocity variations are caused by stellar magnetic activity cannot be ruled out.
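Two of the quantities discussed above can be checked with short back-of-the-envelope calculations: the near-equality of Pollux's apparent and absolute magnitudes (because the star lies close to 10 parsecs), and the rough orbital distance implied by Thestias's 590-day period. The sketch below uses approximate, illustrative values; a distance of about 10.36 pc and a stellar mass of about 2 solar masses are assumptions taken from the figures quoted in this article, so the results are estimates rather than published measurements.

```python
import math

# Distance modulus: M = m - 5*log10(d_parsec) + 5
apparent_mag = 1.14          # apparent visual magnitude quoted above
distance_pc = 10.36          # ~34 light-years expressed in parsecs (assumed value)
absolute_mag = apparent_mag - 5 * math.log10(distance_pc) + 5
print(f"Absolute magnitude ~ {absolute_mag:.2f}")   # close to the apparent 1.14

# Kepler's third law in solar units: a^3 [AU] = M [M_sun] * P^2 [yr]
# (planet mass neglected; star mass ~2 M_sun, period ~590 days as stated above)
period_years = 590 / 365.25
stellar_mass = 2.0
semi_major_axis = (stellar_mass * period_years ** 2) ** (1 / 3)
print(f"Orbital distance of Pollux b ~ {semi_major_axis:.2f} AU")
```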
Physical sciences
Notable stars
Astronomy
360310
https://en.wikipedia.org/wiki/Reaper
Reaper
A reaper is a farm implement that reaps (cuts and often also gathers) crops at harvest when they are ripe. Usually the crop involved is a cereal grass, especially wheat. The first documented reaping machines were Gallic reapers that were used in Roman times in what would become modern-day France. The Gallic reaper involved a comb which collected the heads, with an operator knocking the grain into a box for later threshing. Most modern mechanical reapers cut grass; most also gather it, either by windrowing or picking it up. Modern machines that not only cut and gather the grass but also thresh its seeds (the grain), winnow the grain, and deliver it to a truck or wagon are called combine harvesters, or simply combines; they are the engineering descendants of earlier reapers. Hay is harvested somewhat differently from grain; in modern haymaking, the machine that cuts the grass is called a hay mower or, if integrated with a conditioner, a mower-conditioner. As a manual task, cutting of both grain and hay may be called reaping, involving scythes, sickles, and cradles, followed by differing downstream steps. Traditionally all such cutting could be called reaping, although a distinction between reaping of grain grasses and mowing of hay grasses has long existed; it was only after a decade of attempts at combined grain reaper/hay mower machines (1830s to 1840s) that designers of mechanical implements began assigning them to separate classes. Mechanical reapers substantially changed agriculture from their appearance in the 1830s until the 1860s through 1880s, when they evolved into related machines, often called by different names (self-raking reaper, harvester, reaper-binder, grain binder, binder), that collected and bound the sheaves of grain with wire or twine. Hand reaping Hand reaping is done by various means, including plucking the ears of grains directly by hand, cutting the grain stalks with a sickle, cutting them with a scythe, or a scythe fitted with a grain cradle. Reaping is usually distinguished from mowing, which uses similar implements, but is the traditional term for cutting grass for hay, rather than reaping cereals. The stiffer, drier straw of the cereal plants and the greener grasses for hay usually demand different blades on the machines. (Gould P. Colman, "Innovation and Diffusion in Agriculture", Agricultural History (1968) 42#3, pp. 173–187.) The reaped grain stalks are gathered into sheaves (bunches), tied with string or with a twist of straw. Several sheaves are then leant against each other with the ears off the ground to dry out, forming a stook. After drying, the sheaves are gathered from the field and stacked, being placed with the ears inwards, then covered with thatch or a tarpaulin; this is called a stack or rick. In the British Isles a rick of sheaves is traditionally called a corn rick, to distinguish it from a hay rick ("corn" in British English retains its older sense of "grain" generally, not "maize"). Ricks are made in an area inaccessible to livestock, called a rick-yard or stack-yard. The corn-rick is later broken down and the sheaves threshed to separate the grain from the straw. Collecting spilt grain from the field after reaping is called gleaning, and is traditionally done either by hand, or by penning animals such as chickens or pigs onto the field. Hand reaping is now rarely done in industrialized countries, but is still the normal method where machines are unavailable or where access for them is limited (such as on narrow terraces).
Mechanical reaping A mechanical reaper or reaping machine is a mechanical, semi-automated device that harvests crops. Mechanical reapers and their descendant machines have been an important part of mechanized agriculture and a main feature of agricultural productivity. Mechanical reapers in the U.S. The 19th century saw several inventors in the United States claim innovation in mechanical reapers. The various designs competed with each other, and were the subject of several lawsuits. Obed Hussey in Ohio patented a reaper in 1833, the Hussey Reaper. Made in Baltimore, Maryland, Hussey's design was a major improvement in reaping efficiency. The new reaper only required two horses working in a non-strenuous manner, a man to work the machine, and another person to drive. In addition, the Hussey Reaper left an even and clean surface after its use. The McCormick Reaper was designed by Robert McCormick in Walnut Grove, Virginia. However, Robert became frustrated when he was unable to perfect his new device. His son Cyrus asked for permission to try to complete his father's project. With permission granted, the McCormick Reaper was patented by his son Cyrus McCormick in 1834 as a horse-drawn farm implement to cut small grain crops. This McCormick reaper machine had several special elements: a main wheel frame; a platform, projected to the side, containing a cutter bar with fingers through which a knife driven by a crank reciprocated; a divider upon the outer end of the platform, projecting ahead of the platform to separate the grain to be cut from that to be left standing; a reel positioned above the platform to hold the grain against the reciprocating knife and throw it back upon the platform; and the machine was drawn by a team walking at the side of the grain. Cyrus McCormick claimed that his reaper was actually invented in 1831, giving him the true claim to the general design of the machine. Over the next few decades the Hussey and McCormick reapers would compete with each other in the marketplace, despite being quite similar. By the 1850s, the original patents of both Hussey and McCormick had expired and many other manufacturers put similar machines on the market. In 1861, the United States Patent Office issued a ruling on the disputed invention of the reaper design. It was determined that the money made from reapers was in large part due to Obed Hussey's innovations. S. T. Shubert, the acting commissioner of patents, declared that Hussey's improvements were the foundation of their success. It was ruled that the heirs of Obed Hussey would be monetarily compensated for his hard work and innovation by those who had made money from the reaper. It was also ruled that McCormick's reaper patent would be renewed for another seven years. Although the McCormick reaper was a revolutionary innovation for the harvesting of crops, it did not experience mainstream success and acceptance until at least 20 years after it was patented by Cyrus McCormick. This was because the McCormick reaper lacked a quality unique to Obed Hussey's reaper. Hussey's reaper used a sawlike cutter bar that cut stalks far more effectively than McCormick's. Only once Cyrus McCormick was able to acquire the rights to Hussey's cutter-bar mechanism (around 1850) did a truly revolutionary machine emerge.
Other factors in the gradual uptake of mechanized reaping included natural cultural conservatism among farmers (proven tradition versus new and unknown machinery); the poor state of many new farm fields, which were often littered with rocks, stumps, and areas of uneven soil, making the lifespan and operability of a reaping machine questionable; and some amount of fearful Luddism among farmers that the machine would take away jobs, most especially among hired manual labourers. Another strong competitor in the industry was the Manny Reaper by John Henry Manny and the companies that succeeded him. Even though McCormick has sometimes been simplistically credited as the [sole] "inventor" of the mechanical reaper, a more accurate statement is that he independently reinvented aspects of it, created a crucial original integration of enough aspects to make a successful whole, and benefited from the influence of more than two decades of work by his father, as well as the aid of Jo Anderson, a slave held by his family. Reapers in the late 19th and 20th century After the first reapers were developed and patented, other slightly different reapers were distributed by several manufacturers throughout the world. The Champion (Combined) Reapers and Mowers, produced by the Champion interest group (Champion Machine Company, later Warder, Bushnell & Glessner, absorbed into IHC in 1902) in Springfield, Ohio in the second half of the 19th century, were highly successful in the 1880s in the United States. Springfield is still known as "The Champion City". Generally, reapers developed into the reaper-binder, invented in 1872, which reaped the crop and bound it into sheaves. By 1896, 400,000 reaper-binders were estimated to be harvesting grain. This was in turn replaced by the swather and eventually the combine harvester, which reaps and threshes in one operation. In Central European agriculture reapers were – together with reaper-binders – common machines until the mid-20th century.
Technology
Farm and garden machinery
null
360939
https://en.wikipedia.org/wiki/Hickory
Hickory
Hickory is a common name for trees composing the genus Carya, which includes 19 species accepted by Plants of the World Online. Seven species are native to southeast Asia in China, Indochina, and northeastern India (Assam), and twelve are native to North America. A number of hickory species are used for their edible nuts or for their wood. Etymology The name "hickory" derives from a Native American word in an Algonquian language (perhaps Powhatan). It is a shortening of pockerchicory, pocohicora, or a similar word, which may be the name for the hickory tree's nut, or may refer to a milky drink made from such nuts. The genus name Carya is from the Ancient Greek káryon, meaning "nut". Description Hickories are temperate to subtropical forest trees with pinnately compound leaves and large nuts. Most are deciduous, but one species (C. sinensis, syn. Annamocarya sinensis) in southeast Asia is evergreen. Hickory flowers are small, yellow-green catkins produced in spring. They are wind-pollinated and self-incompatible. The fruit is a globose or oval nut, long and diameter, enclosed in a four-valved husk, which splits open at maturity. The nut shell is thick and bony in most species, but thin in a few, notably the pecan (C. illinoinensis); it is divided into two halves, which split apart when the seed germinates. Some fruit are borderline and difficult to categorize. Hickory (Carya) nuts and walnut (Juglans) nuts, both in the family Juglandaceae, grow within an outer husk; these fruit are sometimes considered to be drupes or drupaceous nuts, rather than true botanical nuts. "Tryma" is a specialized term for such nut-like drupes. The Angiosperm Phylogeny Group, however, considers the fruit to be a nut. Taxonomy Phylogeny The oldest fossils attributed to Carya are Cretaceous pollen grains from Mexico and New Mexico. Fossil and molecular data suggest the genus Carya may have diversified during the Miocene. Modern Carya first appear in Oligocene strata 34 million years ago. Recent discoveries of Carya fruit fossils further support the hypothesis that the genus has long been a member of Eastern North American landscapes; however, its range has contracted and Carya is no longer extant west of the Rocky Mountains. Fossils of early hickory nuts show simpler, thinner shells than modern species with the exception of pecans, suggesting that the trees gradually developed defenses to rodent seed predation. During this time, the genus had a distribution across the Northern Hemisphere, but the Pleistocene Ice Age beginning 2 million years ago obliterated it from Europe. In Anatolia, the genus appears to have disappeared only in the early Holocene, probably related to human disturbance. The distribution of Carya in North America also contracted and it completely disappeared from the continent west of the Rocky Mountains. It is likely that the genus originated in North America, and later spread to Europe and Asia. Subdivision The genus Carya (not to be confused with Careya in the Lecythidaceae) is in the walnut family, Juglandaceae. In the APG system, this family is included in the order Fagales. Several species are known to hybridize, with around nine accepted, named hybrids. Asian hickories Carya sect. Sinocarya Carya dabieshanensis M.C. Liu – Dabie Shan hickory (may be synonymous with C. cathayensis) Carya cathayensis Sarg.
– Chinese hickory Carya hunanensis W.C.Cheng & R.H.Chang – Hunan hickory Carya kweichowensis Kuang & A.M.Lu – Guizhou hickory Carya poilanei Leroy – Poilane's hickory Carya sinensis Dode – Beaked hickory Carya tonkinensis Lecomte – Vietnamese hickory C. sinensis has sometimes been split off into a separate genus as Annamocarya sinensis, but not by Plants of the World Online, as genetic data support it being embedded within the other Asian Carya. North American hickories Carya sect. Carya – typical hickories Carya floridana Sarg. – scrub hickory Carya glabra (Mill.) Sweet – pignut hickory, pignut, sweet pignut, coast pignut hickory, smoothbark hickory, swamp hickory, broom hickory Carya laciniosa (Mill.) K.Koch – shellbark hickory, shagbark hickory, bigleaf shagbark hickory, kingnut, big shellbark, bottom shellbark, thick shellbark, western shellbark Carya myristiciformis (F.Michx.) Nutt. – nutmeg hickory, swamp hickory, bitter water hickory Carya ovalis (Wangenh.) Sarg. – red hickory, spicebark hickory, sweet pignut hickory (treated as a variety of C. glabra by Flora N. Amer. and Plants of the World Online) Carya ovata (Mill.) K.Koch – shagbark hickory C. o. var. ovata – northern shagbark hickory C. o. var. australis – southern shagbark hickory, Carolina hickory (syn. C. carolinae-septentrionalis) Carya pallida (Ashe) Engl. & Graebn. – sand hickory Carya texana Buckley – black hickory Carya tomentosa (Poir.) Nutt. – mockernut hickory (syn. C. alba) †Carya washingtonensis Manchester – Miocene of Kittitas County, Washington Carya sect. Apocarya – pecans Carya aquatica (F.Michx.) Nutt. – bitter pecan or water hickory Carya cordiformis (Wangenh.) K.Koch – bitternut hickory Carya illinoinensis (Wangenh.) K.Koch – pecan Carya palmeri W.E. Manning – Mexican hickory Distribution and habitat Seven species are native to southeast Asia in China, Indochina, and northeastern India (Assam), and twelve are native to North America, of which eleven occur in the United States, four in Mexico (of which one, C. palmeri, is endemic there), and five extend into southern Canada. Ecology Hickory is used as a food plant by the larvae of some Lepidoptera species. These include: Luna moth (Actias luna) Brown-tail moth (Euproctis chrysorrhoea) Coleophora case-bearers, C. laticornella and C. ostryae Regal moths (Citheronia regalis), whose caterpillars are known as hickory horn-devils Walnut sphinx (Amorpha juglandis) The bride (nominate subspecies Catocala neogama neogama) Hickory tussock moth (Lophocampa caryae) The hickory leaf stem gall phylloxera (Phylloxera caryaecaulis) also uses the hickory tree as a food source. Phylloxeridae are related to aphids and have a similarly complex life cycle. Eggs hatch in early spring and the galls quickly form around the developing insects. Phylloxera galls may damage weakened or stressed hickories, but are generally harmless. Deformed leaves and twigs can rain down from the tree in the spring as squirrels break off infected tissue and eat the galls, possibly for the protein content or because the galls are fleshy and tasty to the squirrels. The pecan gall curculio (Conotrachelus elegans) is a true weevil species also found feeding on galls of the hickory leaf stem gall phylloxera. The banded hickory borer (Knulliana cincta) is also found on hickories. Uses Nutrition Dried hickory nuts are 3% water, 18% carbohydrates, 13% protein, and 64% fats.
In a 100 gram (3.5 oz) reference amount, dried hickory nuts supply 657 calories, and are a rich source (20% or more of the Daily Value, DV) of several B vitamins and dietary minerals, especially manganese at 220% DV. Culinary An extract from shagbark hickory bark is used in an edible syrup similar to maple syrup, with a slightly bitter, smoky taste. The Cherokee people would produce a green dye from hickory bark, which they used to dye cloth. When this bark was mixed with maple bark, it produced a yellow dye pigment. The ashes of burnt hickory wood were traditionally used to produce a strong lye (potash) fit for soapmaking. The nuts of some species are palatable, while others are bitter and only suitable for animal feed. Hickory nuts were a significant food source for indigenous peoples of the Eastern Woodlands of North America from the middle Archaic period onward. They were used by the Cherokee in Kanuchi soup, but more often edible oil would be extracted by crushing the nuts and then either straining or boiling the remains. Shagbark and shellbark hickory, along with pecan, are regarded by some as the finest nut trees. Pecans are the most important nut tree native to North America. When cultivated for their nuts, clonal (grafted) trees of the same cultivar cannot pollinate each other because of their self-incompatibility. Two or more cultivars must be planted together for successful pollination. Seedlings (grown from hickory nuts) will usually have sufficient genetic variation. Wood Hickory wood is hard, stiff, dense and shock resistant. There are woods stronger than hickory and woods that are harder, but the combination of strength, toughness, hardness, and stiffness found in hickory wood is not found in any other commercial wood. Hickory is therefore used in a number of items requiring these properties, such as tool handles, bows, wheel spokes, walking sticks, drumsticks and wood flooring. Baseball bats were formerly made of hickory, but are now more commonly made of ash; however, hickory is replacing ash as the wood of choice for Scottish shinty sticks. Hickory was also extensively used for the construction of early aircraft. Due to its grain structure, hickory is more susceptible to moisture absorption than other species of wood, and is therefore more prone to shrinkage, warping or swelling with changes in humidity. Hickory is also highly prized for wood-burning stoves and chimineas, as its density and high energy content make it an efficient fuel. Hickory wood is also a preferred type for smoking cured meats. In the Southern United States, hickory is popular for cooking barbecue, as hickory grows abundantly in the region and adds flavor to the meat.
Biology and health sciences
Fagales
null
361021
https://en.wikipedia.org/wiki/Electrolytic%20cell
Electrolytic cell
An electrolytic cell is an electrochemical cell that utilizes an external source of electrical energy to force a chemical reaction that would otherwise not occur. The external energy source is a voltage applied between the cell's two electrodes: an anode (positively charged electrode) and a cathode (negatively charged electrode), which are immersed in an electrolyte solution. This is in contrast to a galvanic cell, which itself is a source of electrical energy and the foundation of a battery. The net reaction taking place in an electrolytic cell is a non-spontaneous reaction (the reverse of a spontaneous reaction), i.e., its Gibbs free energy change is positive, while the net reaction taking place in a galvanic cell is spontaneous, i.e., its Gibbs free energy change is negative. Principles In an electrolytic cell, a current is driven through the cell by an external voltage, causing a non-spontaneous chemical reaction to proceed. In a galvanic cell, the progress of a spontaneous chemical reaction causes an electric current to flow. An equilibrium electrochemical cell exists in the state between an electrolytic cell and a galvanic cell. The tendency of a spontaneous reaction to push a current through the external circuit is exactly balanced by a counter-electromotive force so that no current flows. If this counter-electromotive force is increased, the cell becomes an electrolytic cell, and if it is decreased, the cell becomes a galvanic cell. An electrolytic cell has three components: an electrolyte and two electrodes (a cathode and an anode). The electrolyte is usually a solution of water or other solvents in which ions are dissolved. Molten salts such as sodium chloride can also function as electrolytes. When driven by an external voltage applied to the electrodes, the ions in the electrolyte are attracted to the electrode with the opposite charge, where charge-transferring (also called faradaic or redox) reactions can take place. Only with an external electrical potential (i.e., voltage) of correct polarity and sufficient magnitude can an electrolytic cell decompose a normally stable, or inert, chemical compound in the solution. The electrical energy provided can produce a chemical reaction that would otherwise not occur spontaneously. Michael Faraday defined the cathode of a cell as the electrode to which cations (positively charged ions, such as silver ions Ag⁺) flow within the cell, to be reduced by reacting with electrons (negatively charged) from that electrode. Likewise, he defined the anode as the electrode to which anions (negatively charged ions, such as chloride ions Cl⁻) flow within the cell, to be oxidized by depositing electrons on the electrode. To an external wire connected to the electrodes of a galvanic cell (or battery), forming an electric circuit, the cathode is positive and the anode is negative. Thus positive electric current flows from the cathode to the anode through the external circuit in the case of a galvanic cell. Applications Electrolytic cells are often used to decompose chemical compounds, in a process called electrolysis, with electro meaning electricity and the Greek word lysis meaning to break up. Important examples of electrolysis are the decomposition of water into hydrogen and oxygen, and of bauxite into aluminum and other chemicals. Electroplating (e.g., of copper, silver, nickel, or chromium) is done using an electrolytic cell. Electrolysis is a technique that uses a direct electric current (DC).
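The quantitative side of the electroplating and electrorefining applications mentioned above is governed by Faraday's laws of electrolysis: the mass deposited is proportional to the charge passed. A minimal sketch of that calculation follows; the current, plating time and the choice of copper are illustrative assumptions, not values taken from this article.

```python
# Faraday's law of electrolysis: m = (Q / F) * (M / z)
#   Q = charge passed (coulombs), F = Faraday constant,
#   M = molar mass of the deposited metal, z = electrons per ion reduced.

FARADAY = 96485.0          # C per mole of electrons

def plated_mass(current_a: float, time_s: float, molar_mass: float, z: int) -> float:
    """Mass (grams) of metal deposited at the cathode."""
    charge = current_a * time_s                 # Q = I * t
    moles_electrons = charge / FARADAY
    moles_metal = moles_electrons / z           # e.g. Cu2+ + 2e- -> Cu
    return moles_metal * molar_mass

# Illustrative example: copper plating at 2 A for one hour
print(f"{plated_mass(2.0, 3600, 63.55, 2):.2f} g of copper")   # ~2.37 g
```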
Commercially, electrolytic cells are used in the electrorefining and electrowinning of several non-ferrous metals. Most high-purity aluminum, copper, zinc, and lead are produced industrially in electrolytic cells. As already noted, water, particularly when ions are added (saltwater or acidic water), can be electrolyzed (subjected to electrolysis). When driven by an external source of voltage, hydrogen (H⁺) ions flow to the cathode to combine with electrons and produce hydrogen gas in a reduction reaction. Likewise, hydroxide (OH⁻) ions flow to the anode and release electrons, along with a hydrogen (H⁺) ion, to produce oxygen gas in an oxidation reaction. In molten sodium chloride (NaCl), when a current is passed through the salt, the anode oxidizes chloride ions (Cl⁻) to chlorine gas as they release electrons to the anode. Likewise, the cathode reduces sodium ions (Na⁺), which accept electrons from the cathode and are deposited on the cathode as sodium metal. Sodium chloride that has been dissolved in water can also be electrolyzed. The anode oxidizes the chloride ions (Cl⁻) and produces chlorine (Cl2) gas. However, at the cathode, instead of sodium ions being reduced to sodium metal, water molecules are reduced to hydroxide ions (OH⁻) and hydrogen gas (H2). The overall result of the electrolysis is the production of chlorine gas, hydrogen gas, and aqueous sodium hydroxide (NaOH) solution.
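The brine electrolysis just described can be summarized with the usual half-reactions; this is the standard textbook formulation of the process rather than a set of equations quoted from this article.

```latex
\begin{align*}
\text{Anode (oxidation):}   && 2\,\mathrm{Cl^-} &\longrightarrow \mathrm{Cl_2} + 2\,e^- \\
\text{Cathode (reduction):} && 2\,\mathrm{H_2O} + 2\,e^- &\longrightarrow \mathrm{H_2} + 2\,\mathrm{OH^-} \\
\text{Overall:}             && 2\,\mathrm{NaCl} + 2\,\mathrm{H_2O} &\longrightarrow \mathrm{Cl_2} + \mathrm{H_2} + 2\,\mathrm{NaOH}
\end{align*}
```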
Physical sciences
Electrochemistry
Chemistry
361038
https://en.wikipedia.org/wiki/Chemical%20polarity
Chemical polarity
In chemistry, polarity is a separation of electric charge leading to a molecule or its chemical groups having an electric dipole moment, with a negatively charged end and a positively charged end. Polar molecules must contain one or more polar bonds due to a difference in electronegativity between the bonded atoms. Molecules containing polar bonds have no molecular polarity if the bond dipoles cancel each other out by symmetry. Polar molecules interact through dipole-dipole intermolecular forces and hydrogen bonds. Polarity underlies a number of physical properties including surface tension, solubility, and melting and boiling points. Polarity of bonds Not all atoms attract electrons with the same force. The amount of "pull" an atom exerts on its electrons is called its electronegativity. Atoms with high electronegativities, such as fluorine, oxygen, and nitrogen, exert a greater pull on electrons than atoms with lower electronegativities such as alkali metals and alkaline earth metals. In a bond, this leads to unequal sharing of electrons between the atoms, as electrons will be drawn closer to the atom with the higher electronegativity. Because electrons have a negative charge, the unequal sharing of electrons within a bond leads to the formation of an electric dipole: a separation of positive and negative electric charge. Because the amount of charge separated in such dipoles is usually smaller than a fundamental charge, they are called partial charges, denoted as δ+ (delta plus) and δ− (delta minus). These symbols were introduced by Sir Christopher Ingold and Edith Hilda (Usherwood) Ingold in 1926. The bond dipole moment is calculated by multiplying the amount of charge separated and the distance between the charges. These dipoles within molecules can interact with dipoles in other molecules, creating dipole-dipole intermolecular forces. Classification Bonds can fall between one of two extremes: completely nonpolar or completely polar. A completely nonpolar bond occurs when the electronegativities are identical and therefore the difference is zero. A completely polar bond is more correctly called an ionic bond, and occurs when the difference between electronegativities is large enough that one atom actually takes an electron from the other. The terms "polar" and "nonpolar" are usually applied to covalent bonds, that is, bonds where the polarity is not complete. To determine the polarity of a covalent bond using numerical means, the difference between the electronegativity of the atoms is used. Bond polarity is typically divided into three groups that are loosely based on the difference in electronegativity between the two bonded atoms. According to the Pauling scale: nonpolar bonds generally occur when the difference in electronegativity between the two atoms is less than 0.5; polar bonds generally occur when the difference is roughly between 0.5 and 2.0; and ionic bonds generally occur when the difference is greater than 2.0. Pauling based this classification scheme on the partial ionic character of a bond, which is an approximate function of the difference in electronegativity between the two bonded atoms. He estimated that a difference of 1.7 corresponds to 50% ionic character, so that a greater difference corresponds to a bond which is predominantly ionic.
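Pauling's estimate that an electronegativity difference of 1.7 corresponds to roughly 50% ionic character comes from his empirical relation for partial ionic character, ionic fraction = 1 − exp(−ΔEN²/4). The short sketch below evaluates that relation; the electronegativity differences used for the example bonds are approximate Pauling-scale values chosen here for illustration.

```python
import math

def ionic_character(delta_en: float) -> float:
    """Pauling's empirical fraction of ionic character for a bond
    between atoms with electronegativity difference delta_en."""
    return 1.0 - math.exp(-delta_en ** 2 / 4.0)

# A difference of 1.7 gives roughly 50% ionic character, as stated above.
for name, delta in [("C-H", 0.4), ("H-F", 1.8), ("Na-Cl", 2.1), ("delta = 1.7", 1.7)]:
    print(f"{name:>12}: {ionic_character(delta) * 100:.0f}% ionic")
```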
As a quantum-mechanical description, Pauling proposed that the wave function for a polar molecule AB is a linear combination of wave functions for covalent and ionic molecules: ψ = aψ(A:B) + bψ(A⁺B⁻). The amount of covalent and ionic character depends on the values of the squared coefficients a² and b². Bond dipole moments The bond dipole moment uses the idea of electric dipole moment to measure the polarity of a chemical bond within a molecule. It occurs whenever there is a separation of positive and negative charges. The bond dipole μ is given by μ = δ d: the bond is modeled as δ+ — δ– with a distance d between the partial charges δ+ and δ–. It is a vector, parallel to the bond axis, pointing from minus to plus, as is conventional for electric dipole moment vectors. Chemists often draw the vector pointing from plus to minus. This vector can be physically interpreted as the movement undergone by electrons when the two atoms are placed a distance d apart and allowed to interact: the electrons will move from their free-state positions to be localised more around the more electronegative atom. The SI unit for electric dipole moment is the coulomb-meter. This is too large to be practical on the molecular scale. Bond dipole moments are commonly measured in debyes, represented by the symbol D, which is obtained by measuring the charge in units of 10⁻¹⁰ statcoulomb and the distance d in ångströms. Since 10⁻¹⁰ statcoulomb is 0.208 units of elementary charge, 1.0 debye results from an electron and a proton separated by 0.208 Å. A useful conversion factor is 1 D = 3.33564 × 10⁻³⁰ C·m. For diatomic molecules there is only one (single or multiple) bond, so the bond dipole moment is the molecular dipole moment, with typical values in the range of 0 to 11 D. At one extreme, a symmetrical molecule such as bromine (Br2) has zero dipole moment, while near the other extreme, gas phase potassium bromide, KBr, which is highly ionic, has a dipole moment of 10.41 D. For polyatomic molecules, there is more than one bond. The total molecular dipole moment may be approximated as the vector sum of the individual bond dipole moments. Often bond dipoles are obtained by the reverse process: a known total dipole of a molecule can be decomposed into bond dipoles. This is done to transfer bond dipole moments to molecules that have the same bonds, but for which the total dipole moment is not yet known. The vector sum of the transferred bond dipoles gives an estimate for the total (unknown) dipole of the molecule. Polarity of molecules A molecule is composed of one or more chemical bonds between molecular orbitals of different atoms. A molecule may be polar either as a result of polar bonds due to differences in electronegativity as described above, or as a result of an asymmetric arrangement of nonpolar covalent bonds and non-bonding pairs of electrons known as a full molecular orbital. While the molecules can be described as "polar covalent", "nonpolar covalent", or "ionic", this is often a relative term, with one molecule simply being more polar or more nonpolar than another. However, the following properties are typical of such molecules. Boiling point When comparing a polar and nonpolar molecule with similar molar masses, the polar molecule in general has a higher boiling point, because the dipole–dipole interaction between polar molecules results in stronger intermolecular attractions. One common form of polar interaction is the hydrogen bond, which is also known as the H-bond.
For example, water forms H-bonds and has a molar mass M = 18 and a boiling point of +100 °C, compared to nonpolar methane with M = 16 and a boiling point of –161 °C. Solubility Due to the polar nature of the water molecule itself, other polar molecules are generally able to dissolve in water. Most nonpolar molecules are water-insoluble (hydrophobic) at room temperature. Many nonpolar organic solvents, such as turpentine, are able to dissolve nonpolar substances. Surface tension Polar compounds tend to have higher surface tension than nonpolar compounds. Capillary action Polar liquids have a tendency to rise against gravity in a small diameter tube. Viscosity Polar liquids have a tendency to be more viscous than nonpolar liquids. For example, nonpolar hexane is much less viscous than polar water. However, molecule size is a much stronger factor affecting viscosity than polarity, as compounds with larger molecules are more viscous than compounds with smaller molecules. Thus, water (small polar molecules) is less viscous than hexadecane (large nonpolar molecules). Examples Polar molecules A polar molecule has a net dipole as a result of the opposing charges (i.e. having partial positive and partial negative charges) from polar bonds arranged asymmetrically. Water (H2O) is an example of a polar molecule since it has a slight positive charge on one side and a slight negative charge on the other. The dipoles do not cancel out, resulting in a net dipole. The dipole moment of water depends on its state. In the gas phase the dipole moment is ≈ 1.86 debye (D), whereas liquid water (≈ 2.95 D) and ice (≈ 3.09 D) are higher due to differing hydrogen-bonded environments. Other examples include sugars (like sucrose), which have many polar oxygen–hydrogen (−OH) groups and are overall highly polar. If the bond dipole moments of the molecule do not cancel, the molecule is polar. For example, the water molecule (H2O) contains two polar O−H bonds in a bent (nonlinear) geometry. The bond dipole moments do not cancel, so that the molecule forms a molecular dipole with its negative pole at the oxygen and its positive pole midway between the two hydrogen atoms. In diagrams of the molecule, each bond joins the central O atom, carrying a partial negative charge, to an H atom carrying a partial positive charge. The hydrogen fluoride, HF, molecule is polar by virtue of polar covalent bonds; in the covalent bond, electrons are displaced toward the more electronegative fluorine atom. Ammonia, NH3, is a molecule whose three N−H bonds have only a slight polarity (toward the more electronegative nitrogen atom). The molecule has two lone electrons in an orbital that points towards the fourth apex of an approximately regular tetrahedron, as predicted by the VSEPR theory. This orbital is not participating in covalent bonding; it is electron-rich, which results in a powerful dipole across the whole ammonia molecule. In ozone (O3) molecules, the two O−O bonds are nonpolar (there is no electronegativity difference between atoms of the same element). However, the distribution of other electrons is uneven: since the central atom has to share electrons with two other atoms, but each of the outer atoms has to share electrons with only one other atom, the central atom is more deprived of electrons than the others (the central atom has a formal charge of +1, while the outer atoms each have a formal charge of −1⁄2). Since the molecule has a bent geometry, the result is a dipole across the whole ozone molecule.
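The claim above that water's two O−H bond dipoles add to a net molecular dipole can be checked with a small vector calculation, which also verifies the debye definition quoted earlier (an elementary charge pair 0.208 Å apart gives 1.0 D). The O−H bond dipole of about 1.5 D and the 104.5° bond angle used below are approximate literature values introduced here only for illustration, not figures from this article.

```python
import math

# 1. Debye check: elementary charge separated by 0.208 angstrom
e_charge = 1.602e-19          # C
separation = 0.208e-10        # m
debye = 3.33564e-30           # C*m per debye
print(f"{e_charge * separation / debye:.2f} D")    # ~1.00 D

# 2. Net dipole of water as the vector sum of two O-H bond dipoles
bond_dipole = 1.5             # D, approximate O-H bond dipole (assumed value)
angle = math.radians(104.5)   # H-O-H bond angle
# Each bond dipole lies along an O-H bond; the components along the
# H-O-H bisector add, while the perpendicular components cancel.
net_dipole = 2 * bond_dipole * math.cos(angle / 2)
print(f"{net_dipole:.2f} D")  # ~1.84 D, close to the gas-phase value of ~1.86 D
```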
Nonpolar molecules A molecule may be nonpolar either when there is an equal sharing of electrons between the two atoms of a diatomic molecule or because of the symmetrical arrangement of polar bonds in a more complex molecule. For example, boron trifluoride (BF3) has a trigonal planar arrangement of three polar bonds at 120°. This results in no overall dipole in the molecule. Carbon dioxide (CO2) has two polar C=O bonds, but the geometry of CO2 is linear so that the two bond dipole moments cancel and there is no net molecular dipole moment; the molecule is nonpolar. Examples of household nonpolar compounds include fats, oil, and petrol/gasoline. In the methane molecule (CH4) the four C−H bonds are arranged tetrahedrally around the carbon atom. Each bond has polarity (though not very strong). The bonds are arranged symmetrically so there is no overall dipole in the molecule. The diatomic oxygen molecule (O2) does not have polarity in the covalent bond because of equal electronegativity, hence there is no polarity in the molecule. Amphiphilic molecules Large molecules that have one end with polar groups attached and another end with nonpolar groups are described as amphiphiles or amphiphilic molecules. They are good surfactants and can aid in the formation of stable emulsions, or blends, of water and fats. Surfactants reduce the interfacial tension between oil and water by adsorbing at the liquid–liquid interface. Predicting molecule polarity Determining the point group is a useful way to predict the polarity of a molecule. In general, a molecule will not possess a dipole moment if the individual bond dipole moments of the molecule cancel each other out. This is because dipole moments are Euclidean vector quantities with magnitude and direction, and two equal vectors that oppose each other cancel out. Any molecule with a centre of inversion ("i") or a horizontal mirror plane ("σh") will not possess a dipole moment. Likewise, a molecule with more than one Cn axis of rotation will not possess a dipole moment because dipole moments cannot lie in more than one dimension. As a consequence of that constraint, all molecules with dihedral symmetry (Dn) will not have a dipole moment because, by definition, D point groups have two or more Cn axes. Since the C1, Cs, C∞v, Cn and Cnv point groups have no centre of inversion, no horizontal mirror plane and no more than one Cn axis, molecules in one of those point groups will have a dipole moment (a short illustrative sketch of this rule follows below). Electrical deflection of water Contrary to popular misconception, the electrical deflection of a stream of water from a charged object is not based on polarity. The deflection occurs because of electrically charged droplets in the stream, which the charged object induces. A stream of water can also be deflected in a uniform electric field, which cannot exert force on polar molecules. Additionally, after a stream of water is grounded, it can no longer be deflected. Weak deflection is even possible for nonpolar liquids.
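As referenced above, the point-group rule in the 'Predicting molecule polarity' passage can be expressed in a few lines: a molecule can carry a permanent dipole only if its point group is one of C1, Cs, C∞v, Cn or Cnv. The sketch below is a minimal illustration of that rule; the symmetry assignments in the example dictionary are standard textbook values included here only for demonstration.

```python
# Point groups compatible with a permanent dipole moment
POLAR_GROUPS = {"C1", "Cs", "Cinfv", "Cn", "Cnv"}

def may_be_polar(point_group: str) -> bool:
    """True if the point group permits a nonzero molecular dipole moment."""
    # Treat C2, C3, ... as "Cn" and C2v, C3v, ... as "Cnv"
    normalized = point_group
    if point_group[0] == "C" and point_group[1:].rstrip("v").isdigit():
        normalized = "Cnv" if point_group.endswith("v") else "Cn"
    return normalized in POLAR_GROUPS

examples = {"H2O": "C2v", "NH3": "C3v", "CO2": "Dinfh", "BF3": "D3h",
            "CH4": "Td", "HF": "Cinfv", "O3": "C2v"}
for molecule, group in examples.items():
    print(f"{molecule:>4} ({group:>5}): {'polar' if may_be_polar(group) else 'nonpolar'}")
```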
Physical sciences
Supramolecular chemistry
Chemistry
361103
https://en.wikipedia.org/wiki/Honey%20badger
Honey badger
The honey badger (Mellivora capensis), also known as the ratel, is a mammal widely distributed across Africa, Southwest Asia, and the Indian subcontinent. It is the only living species in both the genus Mellivora and the subfamily Mellivorinae. It has a fairly long body, with a distinctly thick-set and broad back, and remarkably loose skin, allowing the badger to turn and twist freely within it. The largest terrestrial mustelid in Africa, the honey badger measures long and weighs up to . Sexual dimorphism has been recorded in this species, with males being larger and heavier than females. There are two pairs of mammae, and an anal pouch which, unusual among mustelids, is eversible, a trait shared with hyenas and mongooses. The honey badger is a solitary animal that can be active at any time of day, depending on the location. It is primarily a carnivorous species and has few natural predators because of its thick skin, strength and ferocious defensive abilities. Adults maintain large home ranges, and display scent-marking behavior. The species has no fixed breeding period. After a gestation of 50–70 days, a female will give birth to an average of one to two cubs that will remain under her care for 1–1¼ years. Because of its wide range and occurrence in a variety of habitats, it is listed as Least Concern on the IUCN Red List. In popular media, the honey badger is best known as an aggressive, intelligent animal that is fearless and tough in nature. Taxonomy Viverra capensis was the scientific name used by Johann Christian Daniel von Schreber in 1777, who described a honey badger skin from the Cape of Good Hope. Mellivorae was proposed as the name for the genus by Gottlieb Conrad Christian Storr in 1780, while Mellivorina was proposed as a tribe name by John Edward Gray in 1865. The honey badger is the only species of the genus Mellivora. Although in the 1860s it was assigned to the badger subfamily, the Melinae, it is now generally agreed that it bears few similarities to the Melinae. It is much more closely related to the marten subfamily, Guloninae, and furthermore is assigned its own subfamily, Mellivorinae. The genus name, Mellivora, is derived from Latin, meaning "honey eater", while the species name, capensis, pertains to the location where the type specimen was discovered: the Cape of Good Hope. The origin of the word ratel is uncertain, but it is thought to be derived either from ratel, the Dutch word for rattle, or from the Dutch word raat, meaning honeycomb. Evolution The species first appeared during the middle Pliocene in Asia. A number of extinct relatives are known dating back at least 7 million years to the Late Miocene. These include Mellivora benfieldi from South Africa and Italy, Promellivora from Pakistan, and Howellictis from Chad. More distant relatives include Eomellivora, which evolved into several different species in both the Old and New World, and the giant, long-legged Ekorus from Kenya. Subspecies In the 19th and 20th centuries, 16 zoological specimens of the honey badger were described and proposed as subspecies. Points taken into consideration in assigning different subspecies include size and the extent of whiteness or greyness on the back. 12 subspecies are recognised as valid taxa. Description The honey badger has a fairly long body, but is distinctly thick-set and broad across the back. Its skin is remarkably loose, and allows the animal to turn and twist freely within it. The skin around the neck is thick, an adaptation to fighting conspecifics.
The head is small and flat, with a short muzzle. The eyes are small, and the ears are little more than ridges on the skin, another possible adaptation to avoiding damage while fighting. The honey badger has short and sturdy legs, with five toes on each foot. The feet are armed with very strong claws, which are short on the hind legs and remarkably long on the forelimbs. It is a partially plantigrade animal whose soles are thickly padded and naked up to the wrists. The tail is short and is covered in long hairs, save for below the base. The honey badger is the largest terrestrial mustelid in Africa. Adults measure in shoulder height and in body length, with the tail adding another . Females are smaller than males. In Africa, males weigh while females weigh on average. The mean weight of adult honey badgers from different areas has been reported at anywhere between , with a median of roughly , per various studies. This positions it as the third largest known badger, after the European badger and hog badger, and the fourth largest extant terrestrial mustelid, with the wolverine also being larger. However, the average weight of three wild females from Iraq was reported as , about the typical weight of male wolverines or male European badgers in late autumn, indicating that they can attain much larger than typical sizes in favourable conditions. However, an adult female and two males in India were relatively small, weighing and a median of . Skull length is in males and for females. There are two pairs of mammae. The honey badger possesses an anal pouch which, unusual among mustelids, is eversible, a trait shared with hyenas and mongooses. The smell of the pouch is reportedly "suffocating", and may assist in calming bees when raiding beehives. The skull greatly resembles a larger version of that of a marbled polecat. The dental formula is: . The teeth often display signs of irregular development, with some teeth being exceptionally small, set at unusual angles or absent altogether. Honey badgers of the subspecies signata have a second lower molar on the left side of their jaws, but not the right. Although it feeds predominantly on soft foods, the honey badger's cheek teeth are often extensively worn. The canine teeth are exceptionally short for carnivores. The papillae of the tongue are sharp and pointed, which assists in processing tough foods. The winter fur is long, being long on the lower back, and consists of sparse, coarse, bristle-like hairs, with minimal underfur. Hairs are even sparser on the flanks, belly and groin. The summer fur is shorter (being only long on the back) and even sparser, with the belly being half bare. The sides of the head and lower body are pure black. A large white band covers the upper body, from the top of the head to the base of the tail. Honey badgers of the cottoni subspecies are unique in being completely black. Distribution and habitat The honey badger ranges through most of sub-Saharan Africa, from the Western Cape, South Africa, to southern Morocco and southwestern Algeria, and outside Africa through Arabia, Iran, and Western Asia to Turkmenistan and the Indian Peninsula. It is known to range from sea level to as much as in the Moroccan High Atlas and in Ethiopia's Bale Mountains. Throughout its range, the honey badger is predominantly found in deserts, mountainous regions and forests. These habitats can have annual rainfall ranging from as low as 100 mm in dry, arid regions to as high as 2,000 mm.
Behaviour and ecology The honey badger is mostly solitary, but has also been sighted in Africa hunting in pairs. It also uses old burrows of aardvarks and warthogs, and termite mounds. In the Serengeti National Park, the activity levels of the honey badger were largely dependent on the time of year; in the dry season, it was mostly nocturnal, in contrast to the wet season, when it remained active throughout the day, reaching its peak during crepuscular hours. In the Sariska Tiger Reserve in India, a study concluded that the honey badger was highly nocturnal; a study in the Cauvery Wildlife Sanctuary yielded similar results. The honey badger is a skilled digger, able to dig tunnels into hard ground in 10 minutes. These burrows usually have only one entrance and are usually only long, with a nesting chamber that is not lined with any bedding. Adults control a patch of land known as a home range. Females establish a large home range that changes in size depending foremost on the abundance of food, particularly when rearing young, while males' considerably larger home ranges depend on the availability of females in heat; this often leads to males' home ranges intersecting with those of about 13 females. Adult males have an average home range of , compared to females' average of . It is suggested that adult males have a dominance hierarchy, and that females tend to avoid contact with each other, displaying less profound territorial behavior in spite of the 25% overlap in female home ranges. In the wild, honey badgers were confirmed to scent-mark while squatting, and it is suggested that this behaviour is an "important form of communication". They frequently scent-mark their territories with anal gland excretions, feces and urine. According to personal accounts, honey badgers in captivity were said to scent-mark in a squatting position, releasing fluid from their anal glands. The honey badger is famous for its strength, ferocity and toughness. It is known to savagely and fearlessly attack almost any other species when escape is impossible, reportedly even repelling much larger predators such as lions and hyenas. In some instances, honey badgers deter large predators by unleashing a pungent yellow liquid produced by the anal glands. They accompany this with a threat display characterized by rattling noises, raised hair, a straight, upward-facing tail, and general charging behaviour while also holding their heads up high. In a 2018 study, it was found that the presence of large predators had no effect on the population of honey badgers in the Serengeti. This is likely indicative of the honey badger seeking areas comparable to those favoured by larger predators, and perhaps adopting a similar ecological niche. Bee stings, porcupine quills, and animal bites rarely penetrate their skin. If horses, cattle, or Cape buffalos intrude upon a honey badger's burrow, it will attack them. In the Cape Province it is a potential prey species of the African leopard and African rock pythons. The voice of the honey badger is a hoarse "khrya-ya-ya-ya" sound. When mating, males emit loud grunting sounds. Cubs vocalise through plaintive whines, and when confronting dogs, honey badgers scream like bear cubs. Diet The honey badger has the least specialised diet of the weasel family next to the wolverine. It accesses a large part of its food by digging it out of burrows. It often raids beehives in search of both bee larvae and honey. It also feeds on insects, frogs, tortoises, turtles, lizards, rodents, snakes, birds and eggs.
It also eats berries, roots and bulbs. When foraging for vegetables, it lifts stones or tears bark from trees. Some individuals have even been observed to chase away lion cubs from kills. It devours all parts of its prey, including skin, hair, feathers, flesh and bones, holding its food down with its forepaws. It feeds on a wide range of animals and seems to subsist primarily on small vertebrates. Honey badgers studied in Kgalagadi Transfrontier Park preyed largely on geckos and skinks (47.9% of prey species), gerbils and mice (39.7% of prey). The bulk of its prey comprised species weighing more than , such as cobras, young African rock pythons and South African springhares. The study also found that males and females caught similarly sized prey, despite their disparity in size. In the Kalahari, honey badgers were also observed to attack domestic sheep and goats, as well as kill and eat black mambas. A honey badger was suspected to have broken up the shells of tent tortoises in the Nama Karoo. In India, honey badgers are said to dig up buried human corpses. Despite popular belief, there is no evidence that honeyguides guide the honey badger. In a 2022 study in the Southern Kalahari Desert, it was found that black-backed jackals fed in a way that took food away from the honey badger, leading to a 5% decline in its total food intake above ground. The honey badgers were preyed upon by larger predators such as spotted hyenas, leopards, and lions. Reproduction The honey badger does not have a specific mating period, and instead breeds at any time of the year. Females have an estimated oestrus period of about 14 days. Their gestation period is thought to last 50–70 days, usually resulting in one to two cubs, which are born blind and hairless. Females give birth in a den, and transport their young from one shelter to another for the first three months. When foraging, females abandon their cubs, and return to suckle them in the den; sightings of females suckling young are generally rare, however, in one instance a female was observed suckling her young outside the den, lying in a supine position with her cub sitting atop her abdomen in an upside-down orientation. At about three to five weeks of age, cubs begin developing the adult black-and-white coat, and at eight to twelve weeks, they follow their mother on foraging expeditions; weaning occurs during this period. On average, females will remain with their cubs for 1–1¼ years and during that time, they will teach the cubs important life skills such as climbing, foraging and hunting. Not all cubs reach adulthood; in one study, the mortality rate of cubs in the Kgalagadi Transfrontier Park was 37%, and was caused by predation, infanticide and starvation. Although the exact age at which males reach sexual maturity is uncertain, several factors indicate that they reach it at two to three years of age. It is also uncertain when females reach sexual maturity; however, they are thought to be sexually mature at the onset of independence, the largest indicator of this being the migration of females outside their mother's range not long after the separation. The lifespan of the species in the wild is unknown, though captive individuals have been known to live for approximately 24 years. Pathogens Honey badgers are known to be susceptible to rabies. In one instance, a seemingly rabid honey badger attacked a dog and a couple of people in separate attacks within the span of two days before being shot.
The incident occurred in Kromdraai, South Africa in July 2021. A post-mortem examination of the dead individual confirmed that the rabies had originated in canids, both wild and domestic. Parasites that infect honey badgers include worms such as Strongyloides akbari, Uncinaria stenocephala, Artyfechinostomum sufrartyfex, Trichostrongylidae, Physaloptera, Ancylostoma, and Rictulariidae. There have also been other recorded cases of parasitic worm infection. Blood-sucking parasites known to infect this species include Haemaphysalis indica, Amblyomma javanensis and Rhipicephalus microplus. In addition, the honey badger has been recorded with feline parvovirus. Status As of 2016, the honey badger is listed as Least Concern on the IUCN Red List due to its extensive range. It is mostly threatened by killing by beekeepers and farmers, sometimes with the use of poisons or traps, and is used in traditional medicine and as bushmeat. In other cases, control programs that were meant for other predators such as caracals have led to unintentional honey badger deaths. It is thought that many honey badger populations were eradicated as a result of poisoning alone. The species has been given protection in numerous range countries, such as Algeria, Morocco, Kazakhstan, Uzbekistan, and Turkmenistan. It also occurs in protected areas in many countries, such as the Kruger National Park in South Africa, and the Ustyurt Nature Reserve in Kazakhstan. In Ghana and Botswana, the resident populations are included under CITES Appendix III, while the Indian population is listed under Schedule I of the Wildlife (Protection) Act, 1972. Relationships with humans In popular media, the honey badger has garnered a reputation for being an intelligent, fearless animal, with some people, such as Nick Cummins, adopting the name to symbolize these attributes. Nicknames or titles given to this species include "pound for pound, the most powerful creature in Africa", "most fearless animal in the world", "bravest animal in the world" and "meanest animal in the world". These names stem from the honey badger's ability to repel larger predators, which has been highlighted in a way that gives the public the impression of invincibility. The noises made when performing the threat display are cited as another component of the honey badger's invincible image. Because of its ability to use tools, the honey badger is considered an intelligent creature; according to a BBC documentary titled Honey Badgers: Masters of Mayhem, captive individuals may work with others as a cohesive unit, using tools to help unlock gates or enclosures, though this has been met with skepticism. The species' supposedly fearless attitude is highlighted in the popular comic book Randall's Guide to Nastyass Animals: Honey Badger Don't Care. The native people of Somalia believe that a man becomes infertile after being bitten by a honey badger, hence the wide berth they give to the species. Human–wildlife conflict Honey badgers often become serious poultry predators. Because of their strength and persistence, they are difficult to deter. They are known to rip thick planks from hen-houses or burrow underneath stone foundations. Surplus killing is common during these events, with one incident resulting in the death of 17 Muscovy ducks and 36 chickens. Because of the toughness and looseness of their skin, honey badgers are very difficult to hunt with dogs. Their skin is hard to penetrate, and its looseness allows them to twist and turn on their attackers when held.
The only safe grip on a honey badger is on the back of the neck. During the British occupation of Basra in 2007, rumours of "man-eating badgers" emerged from the local population, including allegations that the animals had been released by British troops, something the British categorically denied. A British army spokesperson said that the badgers were "native to the region but rare in Iraq" and "are usually only dangerous to humans if provoked". The honey badger has also been reported to dig up human corpses in India. In Kenya, the honey badger is a major reservoir of rabies and is suspected to be a significant contributor to the sylvatic cycle of the disease. In captivity Honey badgers are kept in captivity as pets and for exhibition in zoos. They are said to be easy to tame, with some reportedly ceasing to use their anal glands. Despite this, when in contact with a handler, honey badgers often release anal gland secretions.
Biology and health sciences
Carnivora
null
361210
https://en.wikipedia.org/wiki/Rational%20function
Rational function
In mathematics, a rational function is any function that can be defined by a rational fraction, which is an algebraic fraction such that both the numerator and the denominator are polynomials. The coefficients of the polynomials need not be rational numbers; they may be taken in any field K. In this case, one speaks of a rational function and a rational fraction over K. The values of the variables may be taken in any field L containing K. Then the domain of the function is the set of the values of the variables for which the denominator is not zero, and the codomain is L. The set of rational functions over a field K is a field, the field of fractions of the ring of the polynomial functions over K. Definitions A function f is called a rational function if it can be written in the form f(x) = P(x)/Q(x), where P and Q are polynomial functions of x and Q is not the zero function. The domain of f is the set of all values of x for which the denominator Q(x) is not zero. However, if P and Q have a non-constant polynomial greatest common divisor R, then setting P = P1R and Q = Q1R produces a rational function f1(x) = P1(x)/Q1(x), which may have a larger domain than f, and is equal to f on the domain of f. It is a common usage to identify f and f1, that is, to extend "by continuity" the domain of f to that of f1. Indeed, one can define a rational fraction as an equivalence class of fractions of polynomials, where two fractions A/B and C/D are considered equivalent if AD = BC. In this case P/Q is equivalent to P1/Q1. A proper rational function is a rational function in which the degree of P is less than the degree of Q and both are real polynomials, named by analogy to a proper fraction in the rational numbers. Complex rational functions In complex analysis, a rational function f(z) = P(z)/Q(z) is the ratio of two polynomials with complex coefficients, where Q is not the zero polynomial and P and Q have no common factor (this avoids taking the indeterminate value 0/0). The domain of f is the set of complex numbers z such that Q(z) is not zero. Every rational function can be naturally extended to a function whose domain and range are the whole Riemann sphere (complex projective line). A complex rational function with degree one is a Möbius transformation. Rational functions are representative examples of meromorphic functions. Iteration of rational functions on the Riemann sphere (i.e. a rational mapping) creates discrete dynamical systems. Degree There are several non-equivalent definitions of the degree of a rational function. Most commonly, the degree of a rational function is the maximum of the degrees of its constituent polynomials P and Q, when the fraction is reduced to lowest terms. If the degree of f is d, then the equation f(z) = w has d distinct solutions in z except for certain values of w, called critical values, where two or more solutions coincide or where some solution is rejected at infinity (that is, when the degree of the equation decreases after having cleared the denominator). The degree of the graph of a rational function is not the degree as defined above: it is the maximum of the degree of the numerator and one plus the degree of the denominator. In some contexts, such as in asymptotic analysis, the degree of a rational function is the difference between the degrees of the numerator and the denominator. In network synthesis and network analysis, a rational function of degree two (that is, the ratio of two polynomials of degree at most two) is often called a biquadratic function. Examples The rational function is not defined at . It is asymptotic to as . The rational function is defined for all real numbers, but not for all complex numbers, since if x were a square root of −1 (i.e.
the imaginary unit or its negative), then formal evaluation would lead to division by zero, which is undefined. A constant function such as f(x) = π is a rational function since constants are polynomials. The function itself is rational, even though the value of f(x) is irrational for all x. Every polynomial function f(x) = P(x) is a rational function with Q(x) = 1. A function that cannot be written in this form, such as f(x) = sin(x), is not a rational function. However, the adjective "irrational" is not generally used for functions. Every Laurent polynomial can be written as a rational function, while the converse is not necessarily true, i.e., the ring of Laurent polynomials is a subring of the rational functions. The rational function f(x) = x/x is equal to 1 for all x except 0, where there is a removable singularity. The sum, product, or quotient (excepting division by the zero polynomial) of two rational functions is itself a rational function. However, the process of reduction to standard form may inadvertently result in the removal of such singularities unless care is taken. Using the definition of rational functions as equivalence classes gets around this, since x/x is equivalent to 1/1. Taylor series The coefficients of a Taylor series of any rational function satisfy a linear recurrence relation, which can be found by equating the rational function to a Taylor series with indeterminate coefficients, and collecting like terms after clearing the denominator. For example, one writes P(x)/Q(x) as a Taylor series with unknown coefficients, multiplies through by the denominator and distributes, adjusts the indices of the sums so that like powers of x line up, and combines like terms. Since this holds true for all x in the radius of convergence of the original Taylor series, the coefficients of each power of x on the two sides must agree: matching the constant terms determines the first coefficient, and the requirement that every higher power of x have a zero net coefficient yields a linear recurrence for the remaining coefficients. Conversely, any sequence that satisfies a linear recurrence determines a rational function when used as the coefficients of a Taylor series. This is useful in solving such recurrences, since by using partial fraction decomposition we can write any proper rational function as a sum of terms of the form a/(x − r)^k and expand these as geometric series, giving an explicit formula for the Taylor coefficients; this is the method of generating functions. Abstract algebra In abstract algebra the concept of a polynomial is extended to include formal expressions in which the coefficients of the polynomial can be taken from any field. In this setting, given a field F and some indeterminate X, a rational expression (also known as a rational fraction or, in algebraic geometry, a rational function) is any element of the field of fractions of the polynomial ring F[X]. Any rational expression can be written as the quotient of two polynomials P/Q with Q ≠ 0, although this representation isn't unique. P/Q is equivalent to R/S, for polynomials P, Q, R, and S, when PS = QR. However, since F[X] is a unique factorization domain, there is a unique representation for any rational expression P/Q with P and Q polynomials of lowest degree and Q chosen to be monic. This is similar to how a fraction of integers can always be written uniquely in lowest terms by canceling out common factors. The field of rational expressions is denoted F(X). This field is said to be generated (as a field) over F by (a transcendental element) X, because F(X) does not contain any proper subfield containing both F and the element X.
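The recurrence described in the Taylor series section above can be made concrete with a short sketch; the fraction used here, 1/(1 − x − x²), whose coefficients are the Fibonacci numbers, is my own choice of example rather than one taken from the text.

```python
# Sketch only: Taylor coefficients of P(x)/Q(x) about 0 via the recurrence from
# Q(x) * sum(a_k x^k) = P(x), assuming Q(0) != 0 so the function is defined at 0.
def taylor_coeffs(p, q, n_terms):
    """p, q are coefficient lists [c0, c1, ...]; q[0] must be nonzero."""
    a = []
    for n in range(n_terms):
        rhs = p[n] if n < len(p) else 0
        # contribution of the already-computed coefficients
        acc = sum(q[j] * a[n - j] for j in range(1, min(n, len(q) - 1) + 1))
        a.append((rhs - acc) / q[0])
    return a

# Example: 1/(1 - x - x^2) has the Fibonacci numbers as its Taylor coefficients.
print(taylor_coeffs([1], [1, -1, -1], 8))   # -> [1.0, 1.0, 2.0, 3.0, 5.0, 8.0, 13.0, 21.0]
```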
Notion of a rational function on an algebraic variety Like polynomials, rational expressions can also be generalized to n indeterminates X1,..., Xn, by taking the field of fractions of F[X1,..., Xn], which is denoted by F(X1,..., Xn). An extended version of the abstract idea of rational function is used in algebraic geometry. There the function field of an algebraic variety V is formed as the field of fractions of the coordinate ring of V (more accurately said, of a Zariski-dense affine open set in V). Its elements f are considered as regular functions in the sense of algebraic geometry on non-empty open sets U, and also may be seen as morphisms to the projective line. Applications Rational functions are used in numerical analysis for interpolation and approximation of functions, for example the Padé approximations introduced by Henri Padé. Approximations in terms of rational functions are well suited for computer algebra systems and other numerical software. Like polynomials, they can be evaluated straightforwardly, and at the same time they express more diverse behavior than polynomials. Rational functions are used to approximate or model more complex equations in science and engineering including fields and forces in physics, spectroscopy in analytical chemistry, enzyme kinetics in biochemistry, electronic circuitry, aerodynamics, medicine concentrations in vivo, wave functions for atoms and molecules, optics and photography to improve image resolution, and acoustics and sound. In signal processing, the Laplace transform (for continuous systems) or the z-transform (for discrete-time systems) of the impulse response of commonly-used linear time-invariant systems (filters) with infinite impulse response are rational functions over complex numbers.
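As a small illustration of the Padé approximations mentioned in the applications above, the sketch below builds a rational approximant of the exponential function from its Taylor coefficients. It assumes SciPy's pade helper (scipy.interpolate.pade) is available; the choice of exp and of the [2/2] order is an arbitrary example.

```python
# Sketch: a [2/2] Pade approximant of exp(x) built from its Taylor coefficients.
import math
from scipy.interpolate import pade

coeffs = [1 / math.factorial(k) for k in range(5)]   # 1, 1, 1/2, 1/6, 1/24
p, q = pade(coeffs, 2)      # numerator and denominator returned as polynomials

x = 1.0
print(p(x) / q(x), math.exp(x))   # the rational approximant tracks exp near 0
```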
Mathematics
Specific functions
null
361245
https://en.wikipedia.org/wiki/Torque%20wrench
Torque wrench
A torque wrench is a tool used to apply a specific torque to a fastener such as a nut, bolt, or lag screw. It is usually in the form of a socket wrench with an indicating scale, or an internal mechanism which will indicate (as by 'clicking', a specific movement of the tool handle in relation to the tool head) when a specified (adjustable) torque value has been reached during application. A torque wrench is used where the tightness of screws and bolts is a crucial parameter of assembly or adjustment. It allows the operator to set the torque applied to the fastener to meet the specification for a particular application. This permits proper tension and loading of all parts. Torque screwdrivers and torque wrenches have similar purposes and may have similar mechanisms. History The first patent for a torque wrench was filed by John H. Sharp of Chicago in 1931. This wrench was referred to as a torque measuring wrench and would be classified today as an indicating torque wrench. In 1935, Conrad Bahr and George Pfefferle patented an adjustable ratcheting torque wrench. The tool featured audible feedback and restriction of back-ratcheting movement when the desired torque was reached. Bahr, who worked for the New York City Water Department, was frustrated at the inconsistent tightness of flange bolts he found while attending to his work. He claimed to have invented the first torque limiting tool in 1918 to alleviate these problems. Bahr's partner, Pfefferle, was an engineer for S.R. Dresser Manufacturing Co and held several patents. Types Beam The most basic form of torque wrench consists of two beams. The first is a lever used to apply the torque to the fastener being tightened and serves also as the handle of the tool. When force is applied to the handle it will deflect predictably and proportionally with said force in accordance with Hooke's law. The second beam is only attached at one end to the wrench head and free on its other, this serves as the indicator beam. Both of these beams run parallel to each other when the tool is at rest, with the indicator beam usually on top. The indicator beam's free end is free to travel over a calibrated scale attached to the lever or handle, marked in units of torque. When the wrench is used to apply torque, the lever bends and the indicating beam stays straight. Thus, the end of the indicating beam points to the magnitude of the torque that is currently being applied. This type of wrench is simple, inherently accurate, and inexpensive. The beam type torque wrench was developed in between late 1920s and early 1930s by Walter Percy Chrysler for the Chrysler Corporation and a company known as Micromatic Hone. Paul Allen Sturtevant—a sales representative for the Cedar Rapids Engineering Company at that time—was licensed by Chrysler to manufacture his invention. Sturtevant patented the torque wrench in 1938 and became the first individual to sell torque wrenches. A more sophisticated variation of the beam type torque wrench has a dial gauge indicator on its body that can be configured to give a visual or electrical indication when a preset torque is reached. Deflecting beam The dual-signal deflecting beam torque wrench was patented by the Australian Warren and Brown company in 1948. It employs the principle of applying torque to a deflecting beam rather than a coil spring. 
This is claimed to help prolong the accuracy of the wrench throughout its working life, with a greater safety margin on maximum loading and provides more consistent and accurate readings throughout the range of each wrench. The operator can both hear the signal click and see (and feel) a physical indicator when the desired torque is reached. The wrench functions in the same general way as an ordinary beam torque wrench. There are two beams both connected to the head end but only one through which torque is applied. The load carrying beam is straight and runs from head to handle, it deflects when torque is applied. The other beam (indicating beam) runs directly above the deflecting beam for about half of the length then bends away to the side at an angle from the deflecting beam. The indicating beam retains its orientation and shape during operation. Because of this, there is relative displacement between the two beams. The deflecting beam torque wrench differs from the ordinary beam torque wrench in how it utilizes this relative displacement. Attached to the deflecting beam is a scale and onto that is fitted a wedge which can be slid along the length of the scale parallel to the flexing beam. This wedge is used to set the desired torque. Directly facing this wedge is the side of the angled indicating beam. From this side protrudes a pin, which acts as a trigger for another pin, the latter pin is spring loaded, and fires out of the end of the indicating beam once the trigger pin contacts the adjustable wedge. This firing makes a loud click and gives a visual and tactile indication that the desired torque has been met. The indicator pin can be reset by simply pressing it back into the indicating beam. Slipper A slipper type torque wrench consists of a roller and cam (or similar) mechanism. The cam is attached to the driving head, the roller pushes against the cam locking it in place with a specific force which is provided by a spring (which is in many cases adjustable). If a torque which is able to defeat the holding force of the roller and spring is applied, the wrench will slip and no more torque will be applied to the bolt. A slipper torque wrench will not overtighten the fastener by continuing to apply torque beyond a predetermined limit. Click A more sophisticated method of presetting torque is with a calibrated clutch mechanism. One common form uses a ball detent and spring, with the spring preloaded by an adjustable screw thread, calibrated in torque units. The ball detent transmits force until the preset torque is reached, at which point the force exerted by the spring is overcome and the ball "clicks" out of its socket. This design yields greater precision as well as giving tactile and audible feedback. The wrench will not start slipping once the desired torque is reached, it will only click and bend slightly at the head; the operator can continue to apply torque to the wrench without any additional action or warnings from the wrench. A number of variations of this design exist for different applications and different torque ranges. A modification of this design is used in some drills to prevent gouging the heads of screws while tightening them. The drill will start slipping once the desired torque is reached. "No-hub" wrench These are specialized torque wrenches used by plumbers to tighten the clamping bands on hubless soil pipe couplings. They are usually T-handled wrenches with a one-way combination ratchet and clutch. 
They are preset to a fixed torque designed to secure the coupling adequately but insufficient to damage it. Electronic torque wrenches With electronic (indicating) torque wrenches, measurement is by means of a strain gauge attached to the torsion rod. The signal generated by the transducer is converted to the required unit of torque (e.g. N·m or lbf·ft) and shown on the digital display. A number of different joints (measurement details or limit values) can be stored. These programmed limit values are then permanently displayed during the tightening process by means of LEDs or the display. At the same time, this generation of torque wrenches can store all the measurements made in an internal readings memory. These stored readings can then be easily transferred to a PC via the interface (RS232) or printed. A popular application of this kind of torque wrench is for in-process documentation or quality assurance purposes. Typical accuracy level would be ±0.5% to 4%. Interchangeable head torque wrenches Interchangeable head torque wrenches are designed to connect several different types of wrench heads, thereby reducing the number of torque wrenches needed. These wrenches are ideal for applications that require multiple fastening tools. They typically have a standard mounting interface that allows for quick changeover from one wrench head to another while ensuring that the torque applied remains accurate. Common interface sizes include 9×12mm and 12×14mm, and interchangeable heads include open-end, ring-end, adjustable, ratchet, etc. Programmable electronic torque / angle wrenches Torque measurement is conducted in the same way as with an electronic torque wrench but the tightening angle from the snug point or threshold is also measured. The angle is measured by an angle sensor or electronic gyroscope. The angle measurement process enables joints which have already been tightened to be recognized. The inbuilt readings memory enables measurements to be statistically evaluated. Tightening curves can be analyzed using the software via the integrated tightening-curve system (force/path graph). This type of torque wrench can also be used to determine breakaway torque, prevailing torque and the final torque of a tightening job. Thanks to a special measuring process, it is also possible to display the yield point (yield controlled tightening). This design of torque wrench is highly popular with automotive manufacturers for documenting tightening processes requiring both torque and angle control because, in these cases, a defined angle has to be applied to the fastener on top of the prescribed torque (e.g. a specified torque + 90°, where the torque value defines the snug point/threshold and +90° indicates that an additional angle has to be applied after the threshold). In 1995, Saltus-Werk Max Forst GmbH applied for an international patent for the first electronic torque wrench with angle measurement which did not require a reference arm. Mechatronic torque wrenches Torque measurement is achieved in the same way as with a click-type torque wrench but, at the same time, the torque is measured as a digital reading (click and final torque) as with an electronic torque wrench. This is, therefore, a combination of electronic and mechanical measurements. All the measurements are transferred and documented via wireless data transmission. Users will know they have achieved the desired torque setting when the wrench "beeps". Torque wrench standardization ISO The International Organization for Standardization maintains standard ISO 6789.
This standard covers the construction and calibration of hand-operated torque tools. It defines two types of torque tool encompassing twelve classes; these are given in the table below. Also given is the percentage allowable deviation from the desired torque. The ISO standard also states that even when overloaded by 25% of the maximum rating, the tool should remain reliably usable after being re-calibrated. Re-calibration for tools used within their specified limits should occur after 5000 cycles of torquing or 12 months, whichever comes first. In cases where the tool is in use in an organization which has its own quality control procedures, the calibration schedule can be arranged according to company standards. Tools should be marked with their torque range and the unit of torque, as well as the direction of operation for unidirectional tools and the maker's mark. If a calibration certificate is provided, the tool must be marked with a serial number that matches the certificate, or a calibration laboratory should give the tool a reference number corresponding with the tool's calibration certificate. ASME The American Society of Mechanical Engineers maintains standard ASME B107.300. This standard has the same type designations as the ISO standard, with the addition of the type 3 (limiting) torque tool. This type will release the drive once the desired torque is met so that no more torque can be applied. This standard, however, uses different class designations within each type as well as additional style and design variants within each class. The standard also separates manual and electronic tools into different sections and designations. The ASME and ISO standards cannot be considered compatible. The table below gives some of the types and tolerances specified by the standard for manual torque tools. Tools should be marked with the model number of the tool, the unit of torque and the maker's mark. For unidirectional tools, the word "TORQUES" or "TORQUE" and the direction of operation must also be marked. Using torque wrenches Precision Click-type torque wrenches are precise when properly calibrated; however, the more complex mechanism can lose calibration sooner than the beam type, which has little that can malfunction (although the thin indicator rod can be accidentally bent out of true). Beam-type torque wrenches are impossible to use in situations where the scale cannot be read directly, and such situations are common in automotive applications. The scale on a beam-type wrench is prone to parallax error, as a result of the large distance between indicator arm and scale (on some older designs). There is also increased scope for user error with the beam type: the torque has to be read at every use, and the operator must take care to apply loads only at the floating handle's pivot point. Dual-beam or "flat" beam versions reduce the tendency for the pointer to rub, as do low-friction pointers. Extensions The use of cheater bars that extend from the handle end can damage the wrench, so only manufacturer-specified equipment should be used. Using socket extensions requires no adjustment of the torque setting.
Using a crow's foot or similar extension requires the use of the following equation: Tw = Td × L / (L + E). Using a combination of handle and crow's foot extensions requires the use of the following equation: Tw = Td × (L + H) / (L + H + E), where: Tw is the wrench indicated torque (setting torque), Td is the desired torque, L is the length of the torque wrench, from the handle to the center of the head, E is the length of the crow's foot extension, from the center of the torque wrench head to the center line of the bolt, and H is the length of the handle extension, from the extension end to the torque wrench handle. These equations only apply if the extension is collinear with the length of the torque wrench. In other cases, the distance from the torque wrench's head to the bolt's head, as if it were in line, should be used. If the extension is set at 90° then no adjustment is required. These methods are not recommended except for extreme circumstances. Storage For click (or other micrometer) types, when not in use, the force acting on the spring should be removed by setting the scale to its minimum rated value in order to prevent permanent set in the spring. Never set a micrometer-style torque wrench to zero, as the internal mechanism requires a small amount of tension in order to prevent components shifting and a reduction of accuracy. Calibration As with any precision tool, torque wrenches should be periodically re-calibrated. As previously stated, according to ISO standards calibration should happen every 5000 operations or every year, whichever comes first. Torque wrenches can fall up to 10% out of calibration in the first year of use. Calibration, when performed by a specialist service which follows ISO standards, follows a specific process and constraints. The operation requires specialist torque wrench calibration equipment with an accuracy of ±1% or better. The temperature of the area where calibration is being performed should be between 18 °C and 28 °C with no more than a 1 °C fluctuation, and the relative humidity should not exceed 90%. Before any calibration work can be done, the tool should be preloaded and torqued without measurement according to its type. The tool is then connected to the tester and force is applied to the handle (at no more than 10° from perpendicular) for values of 20%, 60% and 100% of the maximum torque, repeated according to the tool's class. The force should be applied slowly and without jerky or irregular motion. The table below gives more specifics regarding the pattern of testing for each class of torque wrench. While professional calibration is recommended, it is beyond some users' means, and it is possible to calibrate a torque wrench in the home shop or garage. The process generally requires that a certain mass is attached to a lever arm and the torque wrench is set to the appropriate torque to lift that mass. The error within the tool can then be calculated, and the tool can either be adjusted or the error allowed for in subsequent work.
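The two calculations just described, the crow's-foot correction and the mass-on-a-lever check used for home calibration, are sketched below; the lengths, mass and target torque are hypothetical, and the correction follows the relationship reconstructed above.

```python
# Sketch of the extension correction and the home-calibration torque check.
G = 9.81  # m/s^2

def setting_torque(desired_nm, wrench_len_m, crowfoot_len_m, handle_ext_m=0.0):
    # Tw = Td * (L + H) / (L + H + E); with no handle extension this is Td * L / (L + E)
    arm = wrench_len_m + handle_ext_m
    return desired_nm * arm / (arm + crowfoot_len_m)

def lever_check_torque(mass_kg, arm_m):
    # torque produced by a known mass hung from a horizontal lever arm
    return mass_kg * G * arm_m

print(setting_torque(100.0, 0.45, 0.05))   # -> 90.0: set 90 N*m to deliver 100 N*m at the bolt
print(lever_check_torque(10.0, 0.5))       # -> about 49.05 N*m expected at the wrench
```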
Technology
Measuring instruments
null
361384
https://en.wikipedia.org/wiki/Liquefied%20petroleum%20gas
Liquefied petroleum gas
Liquefied petroleum gas, also referred to as liquid petroleum gas (LPG or LP gas), is a fuel gas which contains a flammable mixture of hydrocarbon gases, specifically propane, n-butane and isobutane. It can sometimes contain some propylene, butylene, and isobutene. LPG is used as a fuel gas in heating appliances, cooking equipment, and vehicles. It is increasingly used as an aerosol propellant and a refrigerant, replacing chlorofluorocarbons in an effort to reduce damage to the ozone layer. When specifically used as a vehicle fuel, it is often referred to as autogas or just as gas. Varieties of LPG that are bought and sold include mixes that are mostly propane (C3H8), mostly butane (C4H10), and, most commonly, mixes including both propane and butane. In the northern hemisphere winter, the mixes contain more propane, while in summer, they contain more butane. In the United States, mainly two grades of LPG are sold: commercial propane and HD-5. These specifications are published by the Gas Processors Association (GPA) and the American Society for Testing and Materials (ASTM). Propane/butane blends are also listed in these specifications. Propylene, butylenes and various other hydrocarbons are usually also present in small concentrations. HD-5 limits the amount of propylene that can be placed in LPG to 5% and is utilized as an autogas specification. A powerful odorant, ethanethiol, is added so that leaks can be detected easily. The internationally recognized European Standard is EN 589. In the United States, tetrahydrothiophene (thiophane) or amyl mercaptan are also approved odorants, although neither is currently being utilized. LPG is prepared by refining petroleum or "wet" natural gas, and is almost entirely derived from fossil fuel sources, being manufactured during the refining of petroleum (crude oil), or extracted from petroleum or natural gas streams as they emerge from the ground. It was first produced in 1910 by Walter O. Snelling, and the first commercial products appeared in 1912. It currently provides about 3% of all energy consumed, and burns relatively cleanly with no soot and very little sulfur emission. As it is a gas, it does not pose ground or water pollution hazards, but it can cause air pollution. LPG has a typical specific calorific value of 46.1 MJ/kg compared with 42.5 MJ/kg for fuel oil and 43.5 MJ/kg for premium grade petrol (gasoline). However, its energy density per volume unit of 26 MJ/L is lower than either that of petrol or fuel oil, as its relative density is lower (about 0.5–0.58 kg/L, compared to 0.71–0.77 kg/L for gasoline). As the density and vapor pressure of LPG (or its components) change significantly with temperature, this must be taken into account whenever the application involves safety or custody-transfer operations; for example, a typical maximum fill (cutoff) level for an LPG vessel is 85%. Besides its use as an energy carrier, LPG is also a promising feedstock in the chemical industry for the synthesis of olefins such as ethylene and propylene. As its boiling point is below room temperature, LPG will evaporate quickly at normal temperatures and pressures and is usually supplied in pressurized steel vessels. They are typically filled to 80–85% of their capacity to allow for thermal expansion of the contained liquid. The ratio of the densities of the liquid and vapor varies depending on composition, pressure, and temperature, but is typically around 250:1.
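As a quick arithmetic check of the energy figures above, multiplying the mass calorific value by a mid-range liquid density reproduces the quoted volumetric energy densities; the mid-range densities used below are simply taken from the ranges given in the text.

```python
# Volumetric energy density ~ mass calorific value x liquid density.
fuels = {
    "LPG":    {"mj_per_kg": 46.1, "kg_per_l": 0.55},   # density mid-range of 0.5-0.58
    "petrol": {"mj_per_kg": 43.5, "kg_per_l": 0.74},   # density mid-range of 0.71-0.77
}

for name, f in fuels.items():
    print(name, round(f["mj_per_kg"] * f["kg_per_l"], 1), "MJ/L")
# -> LPG about 25.4 MJ/L (close to the quoted 26 MJ/L), petrol about 32.2 MJ/L
```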
The pressure at which LPG becomes liquid, called its vapour pressure, likewise varies depending on composition and temperature; for example, it is approximately for pure butane at , and approximately for pure propane at . LPG in its gaseous phase is still heavier than air, unlike natural gas, and thus will flow along floors and tend to settle in low spots, such as basements. There are two main dangers to this. The first is a possible explosion if the mixture of LPG and air is within the explosive limits and there is an ignition source. The second is suffocation due to LPG displacing air, causing a decrease in oxygen concentration. A full LPG gas cylinder contains 86% liquid; the ullage volume will contain vapour at a pressure that varies with temperature. Uses LPG has a wide variety of uses in many different markets as an efficient fuel container in the agricultural, recreation, hospitality, industrial, construction, sailing and fishing sectors. It can serve as fuel for cooking, central heating and water heating and is a particularly cost-effective and efficient way to heat off-grid homes. Cooking LPG is used for cooking in many countries for economic reasons, for convenience or because it is the preferred fuel source. In India, nearly 28.5 million metric tons of LPG were consumed in the 2023-24 financial year in the domestic sector, mainly for cooking. In 2016, the number of domestic connections was 215 million (i.e., one connection for every six people) with a circulation of more than 350 million LPG cylinders. Most of the LPG requirement is imported. Piped city gas supply in India is not yet developed on a major scale. LPG is subsidised by the Indian government for domestic users. An increase in LPG prices has been a politically sensitive matter in India as it potentially affects the middle class voting pattern. LPG was once a standard cooking fuel in Hong Kong; however, the continued expansion of town gas to newer buildings has reduced LPG usage to less than 24% of residential units. However, other than electric, induction, or infrared stoves, LPG-fueled stoves are the only type available in most suburban villages and many public housing estates. LPG is the most common cooking fuel in Brazilian urban areas, being used in virtually all households, with the exception of the cities of Rio de Janeiro and São Paulo, which have a natural gas pipeline infrastructure. Since 2001, poor families receive a government grant ("Vale Gás") used exclusively for the acquisition of LPG. Since 2003, this grant is part of the government's main social welfare program ("Bolsa Família"). Also, since 2005, the national oil company Petrobras differentiates between LPG destined for cooking and LPG destined for other uses, establishing a lower price for the former. This is a result of a directive from the Brazilian federal government, but its discontinuation is currently being debated. LPG is commonly used in North America for domestic cooking and outdoor grilling. Rural heating Predominantly in Europe and rural parts of many countries, LPG can provide an alternative to electric heating, heating oil, or kerosene. LPG is most often used in areas that do not have direct access to piped natural gas. In the UK about 200,000 households use LPG for heating. LPG can be used as a power source for combined heat and power technologies (CHP). CHP is the process of generating both electrical power and useful heat from a single fuel source. 
This technology has allowed LPG to be used not just as fuel for heating and cooking, but also for decentralized generation of electricity. LPG can be stored in a variety of manners. LPG, as with other fossil fuels, can be combined with renewable power sources to provide greater reliability while still achieving some reduction in CO2 emissions. However, as opposed to wind and solar renewable energy sources, LPG can be used as a standalone energy source without the prohibitive expense of electrical energy storage. In many climates, renewable sources such as solar and wind power would still require the construction, installation and maintenance of reliable baseload power sources such as LPG fueled generation to provide electrical power during the entire year. 100% wind/solar is possible, the caveat being that the expense of the additional generation capacity necessary to charge batteries plus the cost of battery electrical storage makes this option economically feasible in only a minority of situations. Motor fuel When LPG is used to fuel internal combustion engines, it is often referred to as autogas or auto propane. In some countries, it has been used since the 1940s as a petrol alternative for spark ignition engines. In some countries, there are additives in the liquid that extend engine life and the ratio of butane to propane is kept quite precise in fuel LPG. Two recent studies have examined LPG-fuel-oil fuel mixes and found that smoke emissions and fuel consumption are reduced but hydrocarbon emissions are increased. The studies were split on CO emissions, with one finding significant increases, and the other finding slight increases at low engine load but a considerable decrease at high engine load. Its advantage is that it is non-toxic, non-corrosive and free of tetraethyllead or any additives, and has a high octane rating (102–108 RON depending on local specifications). It burns more cleanly than petrol or fuel-oil and is especially free of the particulates present in the latter. LPG has a lower energy density per liter than either petrol or fuel-oil, so the equivalent fuel consumption is higher. Many governments impose less tax on LPG than on petrol or fuel-oil, which helps offset the greater consumption of LPG than of petrol or fuel-oil. However, in many European countries, this tax break is often compensated by a much higher annual tax on cars using LPG than on cars using petrol or fuel-oil. Propane is the third most widely used motor fuel in the world. 2013 estimates are that over 24.9 million vehicles are fueled by propane gas worldwide. Over 25 million tonnes (over 9 billion US gallons) are used annually as a vehicle fuel. Not all automobile engines are suitable for use with LPG as a fuel. LPG provides less upper cylinder lubrication than petrol or diesel, so LPG-fueled engines are more prone to valve wear if they are not suitably modified. Many modern common rail diesel engines respond well to LPG use as a supplementary fuel. This is where LPG is used as fuel as well as diesel. Systems are now available that integrate with OEM engine management systems. Conversion kits can switch a vehicle dedicated to gasoline to using a dual system, in which both gasoline and LPG are used in the same vehicle. In 2020, BW LPG successfully retrofitted a Very Large Gas Carrier (VLGC) with LPG propulsion technology, pioneering LPG's application in large-scale maritime operations. 
LPG's lower emissions of carbon dioxide, sulfur oxides, nitrogen oxides, and particulate matter align with stricter standards set by the International Maritime Organization (IMO), making LPG a viable option as the maritime industry transitions towards net-zero carbon emissions. Conversion to gasoline LPG can be converted into alkylate, which is a premium gasoline blending stock because it has exceptional anti-knock properties and burns cleanly. Refrigeration LPG is instrumental in providing off-the-grid refrigeration, usually by means of a gas absorption refrigerator. Blended from pure, dry propane (refrigerant designator R-290) and isobutane (R-600a), the blend "R-290a" has negligible ozone depletion potential, very low global warming potential and can serve as a functional replacement for R-12, R-22, R-134a and other chlorofluorocarbon or hydrofluorocarbon refrigerants in conventional stationary refrigeration and air conditioning systems. Such substitution is widely prohibited or discouraged in motor vehicle air conditioning systems, on the grounds that using flammable hydrocarbons in systems originally designed to carry non-flammable refrigerant presents a significant risk of fire or explosion. Vendors and advocates of hydrocarbon refrigerants argue against such bans on the grounds that there have been very few such incidents relative to the number of vehicle air conditioning systems filled with hydrocarbons. One particular test, conducted by a professor at the University of New South Wales, unintentionally tested the worst-case scenario of a sudden and complete refrigerant expulsion into the passenger compartment followed by subsequent ignition. He and several others in the car sustained minor burns to their face, ears, and hands, and several observers received lacerations from the burst glass of the front passenger window. No one was seriously injured. Aerosol propellant Global production Global LPG production reached over 292 million metric tons per year (Mt/a) in 2015, while global LPG consumption rose to over 284 Mt/a. 62% of LPG is extracted from natural gas while the rest is produced by petroleum refineries from crude oil. 44% of global consumption is in the domestic sector. The U.S. is the leading producer and exporter of LPG. Security of supply Because of its natural gas and oil-refining industries, Europe is almost self-sufficient in LPG. Europe's security of supply is further safeguarded by: a wide range of sources, both inside and outside Europe; and a flexible supply chain via water, rail and road with numerous routes and entry points into Europe. According to 2010–12 estimates, proven world reserves of natural gas, from which most LPG is derived, stand at 300 trillion cubic meters (10,600 trillion cubic feet). Production continues to grow at an average annual rate of 2.2%. Comparison with natural gas LPG is composed mainly of propane and butane, while natural gas is composed of the lighter methane and ethane. LPG, vaporised and at atmospheric pressure, has a higher calorific value (46 MJ/m3 equivalent to 12.8 kWh/m3) than natural gas (methane) (38 MJ/m3 equivalent to 10.6 kWh/m3), which means that LPG cannot simply be substituted for natural gas. In order to allow the use of the same burner controls and to provide for similar combustion characteristics, LPG can be mixed with air to produce a synthetic natural gas (SNG) that can be easily substituted. LPG/air mixing ratios average 60/40, though this is widely variable based on the gases making up the LPG.
The method for determining the mixing ratios is by calculating the Wobbe index of the mix. Gases having the same Wobbe index are held to be interchangeable. LPG-based SNG is used in emergency backup systems for many public, industrial and military installations, and many utilities use LPG peak-shaving plants in times of high demand to make up shortages in natural gas supplied to their distribution systems. LPG-SNG installations are also used during initial gas system introductions, when the distribution infrastructure is in place before gas supplies can be connected. Developing markets in India and China (among others) use LPG-SNG systems to build up customer bases prior to expanding existing natural gas systems. In the initial phase of a city gas network, LPG-based SNG or natural gas with localized storage and a piped distribution network to households can be planned for each cluster of 5000 domestic consumers. This would eliminate the last-mile road transport of LPG cylinders, which causes traffic and safety problems in Indian cities. Such localized gas networks operate successfully in Japan, in both villages and cities, and can feasibly be connected to wider networks. Environmental effects Commercially available LPG is currently derived mainly from fossil fuels. Burning LPG releases carbon dioxide, a greenhouse gas. The reaction also produces some carbon monoxide. LPG does, however, release less CO2 per unit of energy than does coal or oil, but more than natural gas. It emits 81% of the CO2 per kWh produced by oil, 70% of that of coal, and less than 50% of that emitted by coal-generated electricity distributed via the grid. Being a mix of propane and butane, LPG emits less carbon per joule than butane but more carbon per joule than propane. LPG burns more cleanly than higher molecular weight hydrocarbons because it releases less particulate matter. As it is much less polluting than most traditional solid-fuel stoves, replacing cookstoves used in developing countries with LPG is one of the key strategies adopted to reduce household air pollution in the developing world. Fire/explosion risk and mitigation In a refinery or gas plant, LPG must be stored in pressure vessels. These containers are either cylindrical and horizontal (sometimes referred to as bullet tanks) or spherical (of the Horton sphere type). Typically, these vessels are designed and manufactured according to an applicable pressure vessel code. In the United States, this code is governed by the American Society of Mechanical Engineers (ASME). LPG containers have pressure relief valves, such that when subjected to exterior heating sources, they will vent LPG to the atmosphere or a flare stack. If a tank is subjected to a fire of sufficient duration and intensity, it can undergo a boiling liquid expanding vapor explosion (BLEVE). This is typically a concern for large refineries and petrochemical plants that maintain very large containers. In general, tanks are designed so that the product will vent faster than pressure can build to dangerous levels. One remedy utilized in industrial settings is to equip such containers with a measure to provide a fire-resistance rating. Large, spherical LPG containers may have up to a 15 cm steel wall thickness. They are equipped with an approved pressure relief valve. A large fire in the vicinity of the vessel will increase its temperature and pressure.
The relief valve on the top is designed to vent off excess pressure in order to prevent the rupture of the container itself. Given a fire of sufficient duration and intensity, the pressure being generated by the boiling and expanding gas can exceed the ability of the valve to vent the excess. Alternatively, if, due to continued venting, the liquid level drops below the area being heated, the tank structure can be overheated and subsequently weakened in that area. If either occurs, the container may rupture violently, launching pieces of the vessel at high velocity, while the released products can ignite as well, potentially causing catastrophic damage to anything nearby, including other containers. People can be exposed to LPG in the workplace by breathing it in, skin contact, and eye contact. The Occupational Safety and Health Administration (OSHA) has set the legal limit (Permissible exposure limit) for LPG exposure in the workplace as 1000 ppm (1800 mg/m3) over an 8-hour workday. The National Institute for Occupational Safety and Health (NIOSH) has set a recommended exposure limit (REL) of 1000 ppm (1800 mg/m3) over an 8-hour workday. At levels of 2000 ppm, 10% of the lower explosive limit, LPG is considered immediately dangerous to life and health (due solely to safety considerations pertaining to risk of explosion).
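The exposure limits above pair a ppm figure with a mg/m3 figure; the conversion can be checked with the usual molar-volume relation. Approximating the LPG mix by propane's molar mass is an assumption made only for this example.

```python
# ppm -> mg/m^3 for a gas: concentration_ppm * molar_mass / molar_volume.
MOLAR_VOLUME_L = 24.45      # litres per mole at 25 degC and 1 atm
MOLAR_MASS_PROPANE = 44.1   # g/mol, used here as a stand-in for the LPG mix

def ppm_to_mg_per_m3(ppm, molar_mass_g_per_mol):
    return ppm * molar_mass_g_per_mol / MOLAR_VOLUME_L

print(round(ppm_to_mg_per_m3(1000, MOLAR_MASS_PROPANE)))   # -> 1804, roughly the 1800 mg/m3 quoted
```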
Technology
Fuel
null
8943954
https://en.wikipedia.org/wiki/China%20Railway%20High-speed
China Railway High-speed
China Railway High-speed (CRH) is a high-speed rail service operated by China Railway. The introduction of the CRH series was a major part of the sixth national railway speedup, implemented on April 18, 2007. By the end of 2020, China Railway High-speed provided service to all provinces in China, and operated just under of passenger tracks in length, accounting for about two-thirds of the world's high-speed rail tracks in commercial service. China has revealed plans to extend the HSR network to 70,000 km by 2035. It is the world's most extensively used railway service, with 2.29 billion bullet train trips delivered in 2019 and 2.16 billion trips in 2020, bringing the total cumulative number of trips to 13 billion as of 2020. Over 1,000 sets of rolling stock are operated under the CRH brand, including the Hexie CRH1/2A/5, which are designed for a maximum speed of , and the CRH2C/3, which have a maximum speed of . The indigenously designed CRH380A has a maximum test speed of , with a commercial operating speed of 350 km/h. The fastest train set, the CRH380BL, attained a maximum test speed of . In 2017, the China Standardized EMU trainsets, including the CR400AF/BF and CR200J, joined China Railway High-speed and are designated Fuxing, together with the letters CR (China Railway). Under a gradual plan, the CR brand is to replace the current CRH brand in service. China's CRH380A Hexie, developed by CSR Corporation Limited, is designed to operate comfortably at a speed of 350 km/h (217 mph) and a maximum speed of 380 km/h (236 mph); it is also the fastest train in the world, having reached 486.1 km/h (302.0 mph) during testing. Depending on their speed, there are three categories of high-speed trains: G, D and C (G and some C trains being the fastest at 350 km/h, D trains running at 250 km/h and C trains at 200 km/h). High-speed rail network High-speed rail services were first introduced in 2007, operating with CRH rolling stock. Those run on existing lines that have been upgraded to speeds of up to and on newer dedicated high-speed track rated up to . By the end of 2021, China continued to operate the largest high-speed rail (HSR) network in the world, with a length of over 40,000 km (24,855 mi). The Beijing to Hong Kong High Speed Railway, the longest HSR route in the world, stretches 2,440 km (1,516 mi). CRH service on dedicated high-speed lines CRH service on upgraded conventional lines As of September 2010, there were of upgraded conventional railways in China that could accommodate trains running at speeds of 200 to 250 km/h. Over time, with the completion of the national high-speed passenger-dedicated rail network, more CRH service will shift from these lines to the dedicated high-speed lines. A. Intercity service (typically listed in schedules as C-series or D-series trains): Beijing – Beidaihe, Qinhuangdao Beijing – Tianjin, Tanggu Beijing – Shijiazhuang, Taiyuan Shanghai – Kunshan, Suzhou, Wuxi, Changzhou, Nanjing, Hefei, Xuzhou, Nantong Shanghai – Hangzhou, Yiwu, Jinhua, Quzhou Nanjing – Hangzhou Guangzhou – Shenzhen Shenzhen – Jiangmen – Zhanjiang Wuhan – Zhengzhou, Changsha Changsha – Nanchang Xi'an – Baoji B.
Long-haul service (typically listed in schedules as G-series or D-series trains): Beijing – Shenyang, Changchun, Harbin Beijing – Jinan, Qingdao, Shanghai Beijing – Zhengzhou, Wuhan Shanghai – Zhengzhou, Qingdao, Shenyang Shanghai – Nanchang Wuhan – Changsha – Guangzhou Overnight high-speed trains Unlike "conventional" (non-CRH) trains, which run round the clock, most high-speed rail operations shut down each night. There are several sleeper EMU services (abbreviated 动卧) running on the upgraded rail or high-speed lines, operated with CRH1E and CRH2E trains. Conventional higher-speed Z-series overnight rail services may also use certain sections of the high-speed rail network; e.g., the planned Shanghai-Chengdu train Z121/2/3/4 will use the Huhanrong PDL from Nanjing to Wuhan. With the schedule change planned for December 21, 2012, some of these trainsets will be re-purposed to also provide overnight high-speed service between Shanghai and Xi'an North. In the 2014 Chunyun season, overnight HSR trains first ran on Beijing-Guangzhou (Jingguang) and other lines. In November 2016, CRRC Changchun unveiled CRH5E bullet train carriages with sleeper berths. Made in the CRRC factory in Changchun and nicknamed Panda, they are capable of running at 250 km/h, can operate at −40 degrees Celsius, have Wi-Fi hubs and contain sleeper berths that fold into seats during the day. In 2017, CRRC unveiled a high-speed train with double-decked sleeper "capsules", classed as part of the CRH2E series. On January 5, 2019, the CR200J entered service, replacing many locomotive-hauled trains. Rolling stock China Railway High-speed runs different electric multiple unit trainsets. The name Hexie Hao () is used for designs imported from other nations, designated CRH-1 through CRH-5 as well as CRH380A(L), CRH380B(L), and CRH380C(L). CRH trainsets are intended to provide fast and convenient travel between cities. Some of the Hexie Hao train sets are manufactured locally through technology transfer, a key requirement for China. The signalling, track and support structures, control software, and station design are developed domestically, with foreign elements as well. By 2010, the track system as a whole was predominantly Chinese. China currently holds many new patents related to the internal components of these trains, re-designed to allow the trains to run at higher speeds than the foreign designs allowed. However, these patents are only valid within China, and as such have no force internationally. The weak intellectual-property position of the Hexie Hao designs has obstructed China's export of its high-speed rail products, which led to the development of the completely redesigned train brand called Fuxing Hao (), based on indigenous technologies. The trainsets are as follows: Hexie (Harmony) CRH1 produced by Bombardier Transportation's joint venture Sifang Power (Qingdao) Transportation (BST), CRH1A, and CRH1B, nicknamed "Metro" or "Bread", derived from Bombardier's Regina; CRH1E, nicknamed "Lizard", is Bombardier's ZEFIRO 250 design CRH1A: sets consist of 8 cars; maximum operating speed of 250 km/h CRH1B: a modified 16-car version; maximum operating speed of 250 km/h CRH1E: a 16-car high-speed sleeper version; maximum operating speed of 250 km/h CRH2: nicknamed "Hairtail", derived from the E2 Series 1000 Shinkansen CRH2A: In 2006, China unveiled the CRH2, a modified version of the Japanese Shinkansen E2-1000 series.
An order for 60 8-car sets had been placed in 2004, with the first few built in Japan, the rest produced by Sifang Locomotive and Rolling Stock in China. CRH2B: a modified 16-car version of CRH2; maximum operating speed of 250 km/h CRH2C (Stage one): a modified version of CRH2 with a maximum operating speed up to 300 km/h as a result of replacing two intermediate trailer cars with motored cars CRH2C (Stage two): a modified version of CRH2C (stage one) has a maximum operating speed up to 350 km/h by using more powerful motors CRH2E: a modified 16-car version of CRH2 with sleeping cars CRH3: nickname "Rabbit", derived from Siemens ICE3 (class 403); 8-car sets; maximum operating speed of 350 km/h CRH5A: derived from Alstom Pendolino ETR600; 8-car sets; maximum operating speed of 250 km/h CRH6: designed by CSR Puzhen and CSR Sifang, will be manufactured by CSR Jiangmen. It is designed to have two versions: one with a top operating speed of 220 km/h; the other with a top operating speed of 160 km/h. They will be used on 200 km/h or 250 km/h Inter-city High Speed Rail lines; planned to enter service by 2011 CRH380A; Maximum operating speed of 380 km/h. Developed by CSR based on CRH2 and manufactured by Sifang Locomotive and Rolling Stock; entered service in 2010 CRH380A: 8-car version CRH380AL: 16-car version CRH380B: upgraded version of CRH3; maximum operating speed of 380 km/h, manufactured by Tangshan Railway Vehicle and CRRC Changchun Railway Vehicles; entered service in 2011 CRH380B: 8-car version CRH380BL: 16-car version CRH380CL: designed and manufactured by CRRC Changchun Railway Vehicles. Maximum operating speed of 380 km/h; entered service in 2012 CRH380D: also named Zefiro 380; maximum operating speed of 380 km/h, manufactured by Bombardier Sifang (Qingdao) Transportation Ltd.; entered service in 2012 CRH380D: 8-car version CRH380DL: 16-car version (Cancelled in place of additional CRH1A and Zefiro 250NG sets) CRH1A, B,E, CRH2A, B,E, and CRH5A are designed for a maximum operating speed (MOR) of 200 km/h and can reach up to 250 km/h. CRH3C and CRH2C designs have an MOR of 300 km/h, and can reach up to 350 km/h, with a top testing speed of more than 380 km/h. However, issues such as maintenance costs, comfort, and safety make the maximum speed of more than 380 km/h impractical and remain limiting factors. Fuxing (Rejuvenation) CR400AF: Maximum operating speed of 400 km/h; Developed by CRRC Qingdao Sifang, guided by Chinese EMU standard. CR400AF: 8-car version CR400AF-A: 16-car version CR400AF-B: 17-car version CR400BF: Maximum operating speed of 400 km/h; Developed by CRRC Changchun Railway Vehicles, guided by Chinese EMU standard. CR400BF: 8-car version CR400BF-A: 16-car version CR400BF-B: 17-car version CR300AF: Maximum operating speed of 300 km/h; Developed by CRRC Qingdao Sifang, guided by Chinese EMU standard. CR300BF: Maximum operating speed of 300 km/h; Developed by CRRC Changchun Railway Vehicles, guided by Chinese EMU standard. CR200J: Maximum operating speed of 200 km/h; Developed by CRRC Nanjing Puzhen, CRRC Qingdao Sifang, CRRC Tangshan, CRRC Zhuzhou Locomotive, CRRC Datong and CRRC Dalian. Chinese MOR CRH trainsets order timetable Chinese MOR CRH trainsets order timetable Chinese CRH trainsets delivery timetable Based on data published by Sinolink Securities; some small changes were made according to the most recent news. All CRH380B and CRH380C units to be delivered before 2012. All CRH380D units to be delivered before 2014. 
Ridership

Annual HSR ridership in China is the highest in the world and has grown very quickly, as self-reported by rail authorities. China is the third country, after Japan and France, to reach one billion cumulative HSR passengers. Ridership in 2018 exceeded 2 billion. Nevertheless, because no breakdown by line and service is available, system ridership may be overestimated, since transfer connections within the system may be counted as new passengers each time.

Technology development

Before the introduction of foreign technology, China made independent attempts to develop high-speed rail technology domestically. Notable results included the China Star, but domestic companies lacked the technology and expertise of foreign firms, and the research process consumed a large amount of time. In 2004, the Chinese State Council and the Ministry of Railways defined a modern railway technology and equipment policy of "introducing advanced technology, joint design and production, and building a Chinese brand". Realizing the railway's "leapfrog development" made developing and utilizing high-speed train technology the key task. In 2007, Chinese state media quoted Ministry of Railways spokesman Zhang Shuguang as saying that, for historical reasons, China's overall railway technology and equipment were comparable to those of developed countries' rail systems in the 1970s and that high-speed rolling stock development was still in its infancy; using only its own resources and expertise, the country might need a decade or longer to catch up with developed nations.

Technology introduction

On April 9, 2004, the Chinese government held a conference on modern railway equipment and rolling stock, at which it drafted the current Chinese plan to modernize the country's railway infrastructure with advanced technologies. On June 17, 2004, the Ministry of Railways launched the first round of bidding for high-speed rail technology; bidders had to be legally registered in the PRC and have the EMU manufacturing capacity to build high-speed trains. Companies with high-speed EMU design and manufacturing technology, including Siemens, Alstom, Kawasaki Heavy Industries and Bombardier, initially hoped to enter into joint ventures in China, but were rejected by the Ministry of Railways. The MOR set these guidelines for joint ventures to be acceptable:
comprehensive transfer of key technologies
the lowest price in the world
use of a Chinese brand
A comprehensive transfer of technology to Chinese enterprises (especially in systems integration, AC drives and other core technologies) was required so that domestic enterprises could access and utilise the core technology. While foreign partners might provide technical services and training, the Chinese companies had ultimately to be able to function without the partnership. Railway equipment manufacturers in China were free to choose foreign partners, but foreign firms had to pre-bid and sign technology transfer agreements with China's domestic manufacturers, so that the Chinese rolling stock manufacturers could comprehensively and systematically learn advanced foreign technology.
However, this requirement to sign over all rights to the technology used in the trains was a significant barrier to international involvement in the project, since the companies would give up control of any technology they used on the trains.

In the first round of bidding, 140 rolling stock orders were divided into seven packages of twenty orders each. After extensive review and negotiation, three consortiums won the bid:
Changchun Railway Vehicles Co., Ltd. (owned by CNR) with France's Alstom
Sifang Locomotive (owned by CSR) with Japan's Kawasaki Heavy Industries
Sifang Locomotive (owned by CSR) with Canada's Bombardier
These three consortiums were awarded three, three, and one of the twenty-order packages, respectively. Germany's Siemens, as a result of an expensive technology bid (a prototype vehicle cost of 350 million yuan per trainset and a technology transfer fee of 390 million euros), did not receive any orders in the first round. In the first round of EMU tenders, technology transfer payments amounted to 22.7 billion yuan, accounting for 51 per cent of the tender amount. In November 2005, the Chinese Ministry of Railways and Siemens reached an agreement, and Siemens, in a joint venture with Changchun Railway Vehicles and Tangshan Railway Vehicle (both owned by CNR), was awarded sixty high-speed train orders.

Innovation

The introduction of high-speed trains, an advanced foreign technology, was required in order to implement China's "Long-term Scientific and Technological Development (2006–2020)" plan. The need for core technology innovations in a high-speed rail system meeting China's railway development needs led the Ministry of Science and the Ministry of Railways to sign the "Joint Action Plan for the Independent Innovation of Chinese High-speed Trains" cooperation agreement on February 26, 2008. Academicians and researchers from CAS, Tsinghua University, Zhejiang University, Southwest Jiaotong University and Beijing Jiaotong University committed to working together on basic research, directing China's scientific and industrial resources toward developing a high-speed train system. Under the agreement, China's joint action plan for improving train service and infrastructure has four components:
develop key technologies to create a network capable of supporting higher train speeds
establish intellectual property rights and international competitiveness
have the Ministry of Science and the Ministry of Railways work together to enhance industry research alliances and innovation capability
promote China-related material and equipment capacity
The Chinese Ministry of Science has invested nearly 10 billion yuan in this science and technology plan, by far its largest such investment program. The project has brought together a total of 25 universities, 11 research institutes and national laboratories, and 51 engineering research centers. Through the "863 Project" and "973 Project", the Ministry of Science hopes to develop the basic research needed to produce the key technologies for faster trains.

Technology export

On July 27, 2009, Ministry of Railways deputy chief engineer Zhang Shuguang stated that the United States, Saudi Arabia and Brazil were interested in Chinese high-speed railway technology. On July 28, it was reported that the Federal Railroad Administration and the US government were negotiating the introduction of Chinese railway technology.
On October 14, 2009, Russian Prime Minister Vladimir Putin and the Russian Railroad Administration signed a memorandum on organizing and developing railways in Russia with the Ministry of Railways of China, planning to build a high-speed railway from Vladivostok to Khabarovsk.

Accidents

On July 23, 2011, at approximately 20:00 CST, two high-speed trains travelling on the Yongtaiwen railway line, No. D301 and No. D3115, both bound for Fuzhou, collided on a viaduct near Wenzhou, Zhejiang, leading to 40 deaths and 191 injuries. Both trains were on the same track, headed in the same direction. D3115 had ground to a halt in front of D301 after a loss of electric power caused by lightning striking a viaduct near the Ou River. Signalling systems purportedly failed, and D301 rear-ended the stopped train, sending four carriages off the viaduct.

On June 4, 2022, at 10:30 CST, train D2809, bound for Guangzhou East from Guiyang North, ran into a landslide near Rongjiang Station on the Guiyang–Guangzhou high-speed railway. The driver engaged the emergency brake, and two carriages derailed and struck the platform of Rongjiang Station, killing the driver and injuring eight passengers. The remaining 136 passengers were safely evacuated. This was the second fatal incident in the history of China's high-speed rail.