1706399
https://en.wikipedia.org/wiki/Budapest%20Metro
Budapest Metro
The Budapest Metro is the rapid transit system in the Hungarian capital, Budapest. Opened in 1896, it is the world's second oldest electrified underground railway after the City and South London Railway of 1890, now a part of London Underground, and the third oldest underground railway with multiple stations, after the originally steam-powered Metropolitan Railway, now a part of London Underground (1863), and the Mersey Railway, now part of Merseyrail in Liverpool (1886). Budapest's first line, Line 1, was completed in 1896. The M1 line became an IEEE Milestone due to the radically new innovations of its era: "Among the railway's innovative elements were bidirectional tram cars; electric lighting in the subway stations and tram cars; and an overhead wire structure instead of a third-rail system for power." In 2002, the M1 line was listed as a UNESCO World Heritage Site.

History
To clarify where the first "metro" in continental Europe was built, a few distinctions must be made. While the original metro line M1 is the oldest electrified underground railway in continental Europe, it is not the oldest underground railway. Outside of the United Kingdom, the oldest fully underground urban railway in the world is the Tünel line in Istanbul, built in 1875. However, since Tünel is a funicular railway, it may or may not be considered a "metro" in the classic sense. Therefore, depending on one's definition of a metro, the Budapest Metro is either the oldest or second-oldest underground urban railway in continental Europe.

The original line M1 ("Földalatti", from Hungarian föld "earth, ground" and alatt "under", hence "the underground") ran for 5 km from Vörösmarty tér to Széchenyi fürdő. Line M1 was inaugurated on 2 May 1896, the year of the millennium (the thousandth anniversary of the arrival of the Magyars), by Emperor Franz Joseph. It was named the "Franz Joseph Underground Electric Railway Company".

Works on line M2 started in the 1950s, although the first section did not open until 1970. It follows an east–west route, connecting the major Keleti (Eastern) and Déli (Southern) railway stations. Planning for Metro Line 3 began in 1963 and construction started in 1970 with the help of Soviet specialists. The first section, consisting of six stations, opened in 1976. It was extended to the south in 1980 with five additional stations, and to the north in 1981, 1984, and 1990, with nine additional stations. With a total of 20 stations, it is the longest line in Budapest. Construction of the fourth Metro line began in 2006. The line opened, after several delays and budget overruns, in May 2014.

Routes
The metro consists of four lines (M1–M4), each denoted by a different colour. M1 (yellow) runs from Mexikói út south-west towards the river. The M2 (red) line travels east–west through the city, crossing the Danube. The M3 (blue) runs in a broadly north–south alignment, interchanging with the three other lines. The M4 (green) line commences at Keleti pályaudvar and travels south-west, crossing the river, to terminate at Kelenföld vasútállomás.

Metro line M1
Line M1 runs northeast from the city centre on the Pest side under Andrássy út to the Városliget, or City Park. Like line M3, it does not serve Buda. Metro line M1, the oldest of the metro lines operating in Budapest, has been in constant operation since 1896. In the 1980s and 1990s, the line underwent major reconstruction.
During the construction of line M2, space needed to be made for its station at Deák Ferenc tér; as a result, M1's station there had to be rebuilt approximately 40 meters from the original station. Of the 11 stations currently served, eight are original and three were added during the reconstruction. The original appearance of the old stations has been preserved, and each station features displays of historical photographs and information. As part of the reconstruction, the Millennium Underground Museum, housed in the old station at Deák Ferenc tér, was connected to the concourse. There are plans for the future extension of the line in both directions.

Metro line M2
Line M2 runs east–west from Déli pályaudvar in Buda's Krisztinaváros, through the city center, to Örs vezér tere in eastern Pest. It offers connections to Hungarian State Railways at Déli and Keleti pályaudvars, to metro lines M1 and M3 at Deák Ferenc tér, to M4 at Keleti pályaudvar, to suburban railway lines H8 and H9 at Örs vezér tere, and to suburban railway line H5 at Batthyány tér. Prior to the opening of M4, it was (for more than 45 years) the only metro line that served the Buda side of the city. Metro Line 2 underwent a major reconstruction in the second half of the 2000s, with all of the track replaced and the stations revamped by 2007. The entire fleet of Metrovagonmash 81-717/81-714 and Ev/EvA carriages operating on the line was replaced with Alstom Metropolis metro cars by 2013. Planning began in 2021 for a direct connection between line M2 and the suburban railway lines, with a shared new station at Örs vezér tere and a potential new underground station near the Hungexpo Budapest Congress and Exhibition Center that would offer another interchange with mainline railways.

Metro line M3
Line M3 runs in a north–south direction (more exactly, from north-northeast to southeast) on the Pest side of the river and connects several populous residential areas with the Inner City. It has a transfer station with lines M1 and M2 at Deák Ferenc tér, and an interchange with line M4 at Kálvin tér. It is the longest line in the Budapest metro system; its daily ridership is estimated at 610,000. A semi-automatic train drive system was introduced in 1990. A complete renovation of the line started in 2017. The upgrades included reconstructing the stations and rebuilding the track, safety equipment, ventilation, and tunnel insulation. Design works were entirely funded by the European Union under the New Széchenyi Plan. The project also included the renovation of the rolling stock and a possible extension of the metro line to Káposztásmegyer. The renovation finished in May 2023, with the opening of Nagyvárad tér and Lehel tér stations.

Metro line M4
Line M4 runs southwest–northeast from Kelenföld vasútállomás in Buda's Kelenföld neighborhood to Keleti Railway Station in Józsefváros. It connects to Hungarian State Railways at its termini, to metro line M3 at Kálvin tér, and to line M2 at Keleti pályaudvar. Line M4 was completed in March 2014 and comprises ten stations.

Future expansion
Metro line M5
Metro line M5 is a proposed north–south railway tunnel to connect the currently separated elements of the suburban rail network, namely the H5, H6 and H7 suburban railway lines, and optionally the Budapest-Esztergom and Budapest-Kunszentmiklós-Tass railway lines. Currently the project does not have mainstream political support and is only included in long-term plans.
The first phase (planned until 2030) would be the extension and connection of the southern H6 and H7 lines to Astoria metro station via Kálvin tér, thus connecting these lines to metro lines M2, M3 and M4. The second phase would also create connections to metro line M1 at Oktogon and to M3 at Lehel tér, then cross the Danube to the Buda side to link up with suburban railway line H5 towards Szentendre.

Tickets and transfer system
The usual BKK tickets and passes can be used on all lines. Single tickets can be re-used when changing metro lines. There are plans for an automated fare collection system. A contract for a system was signed in 2014, but terminated in 2018 without completion. The Budapest Pay&GO system, which was introduced on bus line 100E in June 2023, is planned to begin a test phase on line M1. Starting 1 March 2024, free public transport was extended to children up to 14 years of age and to people aged 65 or older, including non-Hungarian citizens.

In popular culture
The internationally acclaimed 2003 Hungarian thriller Kontroll is set and was filmed in the metro system, on line M3.
Technology
Europe_2
null
1706838
https://en.wikipedia.org/wiki/Hip%20fracture
Hip fracture
A hip fracture is a break that occurs in the upper part of the femur (thigh bone), at the femoral neck or (rarely) the femoral head. Symptoms may include pain around the hip, particularly with movement, and shortening of the leg. Usually the person cannot walk. A hip fracture is usually a femoral neck fracture. Such fractures most often occur as a result of a fall. (Femoral head fractures are a rare kind of hip fracture that may also be the result of a fall but are more commonly caused by more violent incidents such as traffic accidents.) Risk factors include osteoporosis, taking many medications, alcohol use, and metastatic cancer. Diagnosis is generally by X-rays. Magnetic resonance imaging, a CT scan, or a bone scan may occasionally be required to make the diagnosis. Pain management may involve opioids or a nerve block. If the person's health allows, surgery is generally recommended within two days. Options for surgery may include a total hip replacement or stabilizing the fracture with screws. Treatment to prevent blood clots following surgery is recommended. About 15% of women break their hip at some point in life; women are more often affected than men. Hip fractures become more common with age. The risk of death in the year following a fracture is about 20% in older people.

Signs and symptoms
The classic clinical presentation of a hip fracture is an elderly patient who sustained a low-energy fall and now has groin pain and is unable to bear weight. Pain may be referred to the supracondylar region of the knee. On examination, the affected extremity is often shortened and externally rotated compared to the unaffected leg.

Complications
Nonunion, failure of the fracture to heal, is common in fractures of the neck of the femur, but much rarer with other types of hip fracture. Avascular necrosis of the femoral head occurs frequently (20%) in intracapsular hip fractures, because the blood supply is interrupted. Malunion, healing of the fracture in a distorted position, is very common. The thigh muscles tend to pull on the bone fragments, causing them to overlap and reunite incorrectly. Shortening, varus deformity, valgus deformity, and rotational malunion all occur often because the fracture may be unstable and collapse before it heals. This may not be as much of a concern in patients with limited independence and mobility. Hip fractures rarely result in neurological or vascular injury.

Medical
Many people are unwell before breaking a hip; it is common for the break to have been caused by a fall due to some illness, especially in the elderly. Nevertheless, the stress of the injury, and a likely surgery, increases the risk of medical illness including heart attack, stroke, and chest infection. Hip fracture patients are at considerable risk for thromboembolism, blood clots that dislodge and travel in the bloodstream. Deep venous thrombosis (DVT) is when the blood in the leg veins clots and causes pain and swelling. This is very common after hip fracture as the circulation is stagnant and the blood is hypercoagulable as a response to injury. DVT can occur without causing symptoms. A pulmonary embolism (PE) occurs when clotted blood from a DVT comes loose from the leg veins and passes up to the lungs. Circulation to parts of the lungs is cut off, which can be very dangerous. Fatal PE may have an incidence of 2% after hip fracture and may contribute to illness and mortality in other cases. Mental confusion is extremely common following a hip fracture.
It usually clears completely, but the disorienting experience of pain, immobility, loss of independence, moving to a strange place, surgery, and drugs combine to cause delirium or accentuate pre-existing dementia. Urinary tract infection (UTI) can occur. Patients are immobilized and in bed for many days; they are frequently catheterised, commonly causing infection. Prolonged immobilization and difficulty moving make it hard to avoid pressure sores on the sacrum and heels of patients with hip fractures. Whenever possible, early mobilization is advocated; otherwise, alternating pressure mattresses should be used.

Risk factors
Hip fracture following a fall is likely to be a pathological fracture. The most common causes of weakness in bone are:
- Osteoporosis.
- Other metabolic bone diseases such as Paget's disease, osteomalacia, osteopetrosis and osteogenesis imperfecta. Stress fractures may occur in the hip region with metabolic bone disease.
- Elevated levels of homocysteine, a toxic 'natural' amino acid.
- Benign or malignant primary bone tumors, which are rare causes of hip fractures.
- Metastatic cancer deposits in the proximal femur, which may weaken the bone and cause a pathological hip fracture.
- Infection in the bone, a rare cause of hip fracture.
- Tobacco smoking (associated with osteoporosis).

Mechanism
Functional anatomy
The hip joint is a ball-and-socket joint. The femur connects at the acetabulum of the pelvis and projects laterally before angling medially and inferiorly to form the knee. Although this joint has three degrees of freedom, it is still stable due to the interaction of ligaments and cartilage. The labrum lines the circumference of the acetabulum to provide stability and shock absorption. Articular cartilage covers the concave area of the acetabulum, providing more stability and shock absorption. Surrounding the entire joint itself is a capsule secured by the tendon of the psoas muscle and three ligaments. The iliofemoral, or Y, ligament is located anteriorly and serves to prevent hip hyperextension. The pubofemoral ligament is located anteriorly just underneath the iliofemoral ligament and serves primarily to resist abduction, extension, and some external rotation. Finally, the ischiofemoral ligament on the posterior side of the capsule resists extension, adduction, and internal rotation. When considering the biomechanics of hip fractures, it is important to examine the mechanical loads the hip experiences during low-energy falls.

Biomechanics
The hip joint is unique in that it experiences combined mechanical loads. An axial load along the shaft of the femur results in compressive stress. Bending load at the neck of the femur causes tensile stress along the upper part of the neck and compressive stress along the lower part of the neck. While osteoarthritis and osteoporosis are associated with bone fracture as we age, these diseases are not the cause of the fracture alone. Low-energy falls from standing are responsible for the majority of fractures in the elderly, but fall direction is also a key factor. Elderly patients tend to fall to the side instead of forward, and the lateral hip strikes the ground first. During a sideways fall, the chances of hip fracture see a 15-fold and 12-fold increase in elderly males and females, respectively.

Neurological factors
Elderly individuals are also predisposed to hip fractures due to many factors that can compromise proprioception and balance, including medications, vertigo, stroke, and peripheral neuropathy.
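As a rough, textbook-level illustration of the combined loading described under Biomechanics (the formula below is standard beam theory, not something given in the source), the normal stress at a point on the femoral neck cross-section can be approximated as

\[ \sigma \;\approx\; \frac{F}{A} \;\pm\; \frac{M c}{I} \]

where \(F\) is the axial component of the joint load, \(A\) the cross-sectional area of the neck, \(M\) the bending moment produced by the load acting offset from the neck axis, \(c\) the distance of the point from the neutral axis, and \(I\) the second moment of area of the cross-section. The bending term adds tension on the superior surface and compression on the inferior surface, matching the stress distribution described above, so weakened (for example osteoporotic) bone fails at a correspondingly lower load.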
Diagnosis
Physical examination
Displaced fractures of the trochanter or femoral neck will classically cause external rotation and shortening of the leg when the patient is lying supine.

Imaging
Typically, radiographs are taken of the hip from the front (AP view) and side (lateral view). Frog-leg views are to be avoided, as they may cause severe pain and further displace the fracture. In situations where a hip fracture is suspected but not obvious on x-ray, an MRI is the next test of choice. If an MRI is not available or the patient cannot be placed into the scanner, a CT may be used as a substitute. MRI sensitivity for radiographically occult fracture is greater than that of CT. Bone scan is another useful alternative; however, substantial drawbacks include decreased sensitivity, early false-negative results, and decreased conspicuity of findings due to age-related metabolic changes in the elderly. As patients most often require an operation, full pre-operative general investigation is required. This would normally include blood tests, ECG, and chest x-ray.

Types
X-rays of the affected hip usually make the diagnosis obvious; AP (anteroposterior) and lateral views should be obtained. Trochanteric fractures are subdivided into either intertrochanteric (between the greater and lesser trochanter) or pertrochanteric (through the trochanters) by the Müller AO Classification of fractures. Practically, the difference between these types is minor; the terms are often used synonymously. An isolated trochanteric fracture involves one of the trochanters without going through the anatomical axis of the femur, and may occur in young individuals due to forceful muscle contraction. Yet, an isolated trochanteric fracture may not be regarded as a true hip fracture because it is not cross-sectional.

Prevention
The majority of hip fractures are the result of a fall, particularly in the elderly. Therefore, identifying why the fall occurred, and implementing treatments or changes, is key to reducing the occurrence of hip fractures. Multiple contributing factors are often identified. These can include environmental factors and medical factors (such as postural hypotension or co-existing disabilities from diseases such as stroke or Parkinson's disease which cause visual and/or balance impairments). A recent study has identified a high incidence of undiagnosed cervical spondylotic myelopathy (CSM) amongst patients with a hip fracture; such falls are a relatively unrecognised consequence of CSM. Additionally, there is some evidence to support systems designed to offer protection in the case of a fall. Hip protectors, for example, appear to decrease the number of hip fractures among the elderly, but they are often not used.

Management
Most hip fractures are treated surgically by implanting a prosthesis. The benefits of surgical treatment outweigh the risks of nonsurgical treatment, which requires extensive bedrest. Prolonged immobilization increases the risk of thromboembolism, pneumonia, deconditioning, and decubitus ulcers. Regardless, the surgery is a major stress, particularly in the elderly. Pain is also significant, and can also result in immobilization, so patients are encouraged to become mobile as soon as possible, often with the assistance of physical therapy. Skeletal traction pending surgery is not supported by the evidence. Regional nerve blocks are useful for pain management in hip fractures.
Peripheral nerve blocks may reduce pain on movement and delirium, may improve time to first mobilisation, and may reduce the risk of postoperative lower respiratory tract infection. Surgery can be performed under general anaesthesia or with neuraxial techniques (spinal anaesthesia); the choice is based on surgical and patient factors, as outcomes such as mortality and post-procedure complications including pneumonia, MI, stroke or delirium are not affected by anaesthetic technique. Red blood cell transfusion is common for people undergoing hip fracture surgery due to the blood loss sustained during surgery and from the injury. The benefits of giving blood when the hemoglobin is less than 10 g/dL versus less than 8 g/dL are not clear. Waiting until the hemoglobin is less than 8 g/dL or the person has symptoms may increase the risk of heart problems. Intravenous iron is used in some centres to encourage an increase in haemoglobin levels, but it is not known whether this makes a significant difference to outcomes that matter to patients. If operative treatment is refused or the risks of surgery are considered to be too high, the main emphasis of treatment is on pain relief. Skeletal traction may be considered for long-term treatment. Aggressive chest physiotherapy is needed to reduce the risk of pneumonia, and skilled rehabilitation and nursing are needed to avoid pressure sores and DVT/pulmonary embolism. Most people will be bedbound for several months. Non-operative treatment is now limited to only the most medically unstable or demented patients, or those who are nonambulatory at baseline with minimal pain during transfers. Surgery on the same day as, or the day following, the break is estimated to reduce postoperative mortality in people who are medically stable.

Intracapsular fractures
For low-grade fractures (Garden types 1 and 2), standard treatment is fixation of the fracture in situ with screws or a sliding screw/plate device. This treatment can also be offered for displaced fractures after the fracture has been reduced. Fractures managed by closed reduction can possibly be treated by percutaneously inserted screws. In elderly patients with displaced intracapsular fractures, surgeons may decide to perform a hemiarthroplasty, replacing the broken part of the bone with a metal implant. However, in elderly people who are medically well and still active, a total hip replacement may be indicated. Independently mobile older adults with hip fractures may benefit from a total hip replacement instead of hemiarthroplasty. Traction is contraindicated in femoral neck fractures because it affects blood flow to the head of the femur. The latest evidence suggests that there may be little or no difference between screws and fixed-angle plates as internal fixation implants for intracapsular hip fractures in older adults; the findings are based on low-quality evidence that cannot firmly establish a major difference in hip function, quality of life, or the need for additional surgery.

Trochanteric fracture
A trochanteric fracture, below the neck of the femur, has a good chance of healing. Closed reduction may not be satisfactory, and open reduction then becomes necessary. The use of open reduction has been reported as 8-13% among pertrochanteric fractures, and 52% among intertrochanteric fractures. Both intertrochanteric and pertrochanteric fractures may be treated by a dynamic hip screw and plate, or an intramedullary rod. The fracture typically takes 3–6 months to heal.
As the fracture is most common in the elderly, removal of the dynamic hip screw is usually not recommended, to avoid the unnecessary risk of a second operation and the increased risk of re-fracture after implant removal. The most common cause of hip fractures in the elderly is osteoporosis; if this is the case, treatment of the osteoporosis may well reduce the risk of further fracture. Only young patients tend to consider having the implant removed; the implant may function as a stress riser, increasing the risk of a break if another accident occurs.

Subtrochanteric fractures
Subtrochanteric fractures may be treated with an intramedullary nail or a screw-plate construction, and may require traction pre-operatively, though this practice is uncommon. It is unclear whether any specific type of nail results in different outcomes than any other type of nail. A lateral incision over the trochanter is made and a cerclage wire is placed around the fracture for reduction. Once reduction has been achieved, a guide canal for the nail is made through the proximal cortex and medullary canal. The nail is inserted through the canal and is fixed proximally and distally with screws. X-rays are obtained to confirm proper reduction and correct placement of the nail and screws.

Rehabilitation
Rehabilitation has been shown to improve daily functional status. Forty percent of individuals with hip fractures are also diagnosed with dementia or mild cognitive impairment, which often results in poorer post-surgical outcomes. In such cases, enhanced rehabilitation and care models have been shown to have limited positive effects in reducing delirium and hospital length of stay. It is unclear if the use of anabolic steroids affects recovery. An updated Cochrane review (2022) involving over 4000 patients found evidence that gait, balance, and functional task training is particularly effective when compared to conventional care. There is also moderate-certainty evidence that rehabilitation after hip fracture surgery, when delivered by a multidisciplinary team and supervised by an appropriate medical specialist, results in fewer cases of 'poor outcome', such as death and deterioration in residential status. There is evidence that early mobilisation helps: a UK study analysing data on over 135,000 people who had surgery for hip fracture found that people who get out of bed on the day of hip surgery, or the day after, were twice as likely to leave hospital within 30 days.

Nutrition supplementation
Oral supplements with non-protein energy, protein, vitamins and minerals started before or early after surgery may prevent complications during the first year after hip fracture in aged adults, though seemingly without an effect on mortality.

Surgical complications
Deep or superficial wound infection has an approximate incidence of 2%. It is a serious problem, as superficial infection may lead to deep infection. This may cause infection of the healing bone and contamination of the implants. It is difficult to eliminate infection in the presence of metal foreign bodies such as implants. Bacteria inside the implants are inaccessible to the body's defence system and to antibiotics. The management is to attempt to suppress the infection with drainage and antibiotics until the bone is healed. Then the implant should be removed, following which the infection may clear up. Implant failure may occur; the metal screws and plate can break, back out, or cut out superiorly and enter the joint.
This occurs either through inaccurate implant placement or if the fixation does not hold in weak and brittle bone. In the event of failure, the surgery may be redone or changed to a total hip replacement. Mal-positioning: the fracture can be fixed and subsequently heal in an incorrect position, especially in rotation. This may not be a severe problem, or it may require subsequent osteotomy surgery for correction.

Prognosis
Hip fractures are very dangerous episodes, especially for elderly and frail patients. The risk of dying from the stress of the surgery and the injury in the first thirty days is about 7%. At one year after fracture, this may reach 30%. If the condition is untreated, the pain and immobility imposed on the patient increase that risk. Problems such as pressure sores and chest infections are all increased by immobility. The prognosis of untreated hip fractures is very poor. However, most people who suffer a hip fracture are at relatively low risk of early mortality, as deaths are concentrated in a numerically smaller, higher-risk group. Scoring tools are available, such as the Nottingham Hip Fracture Score, that can provide an estimate of risk based on factors known to place people at higher risk, such as advanced age, dementia or delirium on admission, anaemia on admission, co-morbidities, not living at home before the fracture, and previous diagnoses of cancer.

Post operation
Among those affected over the age of 65, 40% are transferred directly to long-term care facilities, long-term rehabilitation facilities, or nursing homes; most of those affected require some sort of living assistance from family or home-care providers. 50% permanently require walkers, canes, or crutches for mobility; all require some sort of mobility assistance throughout the healing process. Most of the recovery of walking ability and activities of daily living occurs within 6 months of the fracture. After the fracture, about half of older people recover their pre-fracture level of mobility and ability to perform instrumental activities of daily living, while 40–70% regain their level of independence for basic activities of daily living. Among those affected over the age of 50, approximately 25% die within the next year due to complications such as blood clots (deep venous thrombosis, pulmonary embolism), infections, and pneumonia. Patients with hip fractures are at high risk for future fractures including hip, wrist, shoulder, and spine. After treatment of the acute fracture, the risk of future fractures should be addressed. Currently, only 1 in 4 patients after a hip fracture receives treatment and work-up for osteoporosis, the underlying cause of most of the fractures. Current treatment standards include starting a bisphosphonate to reduce future fracture risk by up to 50%.

Epidemiology
Hip fractures are seen globally and are a serious concern at the individual and population level. By 2050, it is estimated that there will be six million cases of hip fractures worldwide. One study published in 2001 found that in the US alone, 310,000 individuals were hospitalized due to hip fractures, which can account for 30% of Americans who were hospitalized that year. Another study found that in 2011, femur neck fractures were among the most expensive conditions seen in US hospitals, with an aggregated cost of nearly $4.9 billion for 316,000 inpatient hospitalizations.
Rates of hip fractures are declining in the United States, possibly due to increased use of bisphosphonates and risk management. Falling, poor vision, weight, and height are all seen as risk factors. Falling is one of the most common risk factors for hip fractures: approximately 90% of hip fractures are attributed to falls from standing height. Given the high morbidity and mortality associated with hip fractures and the cost to the health system, in England and Wales the National Hip Fracture Database is a mandatory nationwide audit of the care and treatment of all hip fractures.

Population
All populations experience hip fractures, but numbers vary with race, gender, and age. Women have three times as many hip fractures as men. In a lifetime, men have an estimated 6% risk whereas postmenopausal women have an estimated 14% risk of having a hip fracture. These statistics provide insight over a lifespan and suggest that women are roughly twice as likely to have a hip fracture. The overwhelming majority of hip fractures occur in white individuals, while blacks and Hispanics have a lower rate of them. This may be due to generally greater bone density in the latter groups, and also because whites have a longer overall lifespan and a higher likelihood of reaching an advanced age, where the risk of breaking a hip goes up. Deprivation is also a key factor: in England, it has been found that people in the poorest parts of the country are more likely to fracture a hip, and less likely to recover well, than those in the least deprived areas.

Age related
Age is the most dominant factor in hip fracture injuries, with most cases occurring in people over 75. Incidence rises with age, and hip fracture is the most frequent cause of hospitalization in centenarians, surpassing congestive heart failure and respiratory infection. Falls are the most common cause of hip fractures; around 30–60% of older adults fall each year. This increases the risk of hip fracture and leads to an increased risk of death in older individuals; the one-year mortality rate ranges from 12% to 37%. Of the surviving patients, half need assistance and cannot live independently. Older adults also sustain hip fractures because of osteoporosis, a degenerative disease associated with age and decreasing bone mass. The average age for sustaining a hip fracture is 77 years old for women and 72 years old for men.
Biology and health sciences
Types
Health
1706886
https://en.wikipedia.org/wiki/Purkinje%20effect
Purkinje effect
The Purkinje effect or Purkinje phenomenon (sometimes called the Purkinje shift) is the tendency for the peak luminance sensitivity of the eye to shift toward the blue end of the color spectrum at low illumination levels as part of dark adaptation. As a consequence, reds appear darker relative to other colors as light levels decrease. The effect is named after the Czech anatomist Jan Evangelista Purkyně. While the effect is often described from the perspective of the human eye, it is well established in a number of animals under the same name, describing the general shifting of spectral sensitivity due to pooling of rod and cone output signals as a part of dark/light adaptation.

This effect introduces a difference in color contrast under different levels of illumination. For instance, in bright sunlight, geranium flowers appear bright red against the dull green of their leaves, or adjacent blue flowers, but in the same scene viewed at dusk the contrast is reversed, with the red petals appearing a dark red or black, and the leaves and blue petals appearing relatively bright.

The sensitivity to light in scotopic vision varies with wavelength, though the perception is essentially black-and-white. The Purkinje shift is the relation between the absorption maximum of rhodopsin, which peaks at about 507 nm, and that of the opsins in the longer-wavelength cones that dominate in photopic vision, which peaks in the green part of the spectrum. In visual astronomy, the Purkinje shift can affect visual estimates of variable stars when using comparison stars of different colors, especially if one of the stars is red.

Physiology
The Purkinje effect occurs at the transition between primary use of the photopic (cone-based) and scotopic (rod-based) systems, that is, in the mesopic state: as intensity dims, the rods take over, and before color disappears completely, perception shifts towards the rods' top sensitivity. The effect occurs because in mesopic conditions the outputs of cones in the retina, which are generally responsible for the perception of color in daylight, are pooled with the outputs of rods, which are more sensitive under those conditions and have peak sensitivity at a blue-green wavelength of 507 nm.

Use of red lights
The insensitivity of rods to long-wavelength (i.e. red) light has led to the use of red lights under certain special circumstances, for example in the control rooms of submarines, in research laboratories, in aircraft, and in naked-eye astronomy. Red lights are used in conditions where it is desirable to activate both the photopic and scotopic systems. Submarines are well lit to facilitate the vision of the crew members working there, but the control room must be lit differently to allow crew members to read instrument panels yet remain dark adjusted. By using red lights or wearing red goggles (called "dark adaptor goggles"), the cones can receive enough light to provide photopic vision (namely the high-acuity vision required for reading). The rods are not saturated by the bright red light because they are not sensitive to long-wavelength light, so the crew members remain dark adapted. Similarly, airplane cockpits use red lights so pilots can read their instruments and maps while maintaining night vision to see outside the aircraft. Red lights are also often used in research settings. Many research animals (such as rats and mice) have limited photopic vision, as they have far fewer cone photoreceptors.
The animal subjects do not perceive red lights and thus experience darkness (the active period for nocturnal animals), but the human researchers, who have one kind of cone (the "L cone") that is sensitive to long wavelengths, are able to read instruments or perform procedures that would be impractical even with fully dark-adapted (but low-acuity) scotopic vision. For the same reason, zoo displays of nocturnal animals are often illuminated with red light.

History
The effect was discovered in 1819 by Jan Evangelista Purkyně. Purkyně was a polymath who would often meditate at dawn during long walks in the flowering Bohemian fields. Purkyně noticed that his favorite flowers appeared bright red on a sunny afternoon, while at dawn they looked very dark. He reasoned that the eye has not one but two systems adapted to see colors, one for bright overall light intensity, and the other for dusk and dawn. Purkyně wrote in his Neue Beiträge: Objectively, the degree of illumination has a great influence on the intensity of color quality. In order to prove this most vividly, take some colors before daybreak, when it begins slowly to get lighter. Initially one sees only black and grey. Particularly the brightest colors, red and green, appear darkest. Yellow cannot be distinguished from a rosy red. Blue became noticeable to me first. Nuances of red, which otherwise burn brightest in daylight, namely carmine, cinnabar and orange, show themselves as darkest for quite a while, in contrast to their average brightness. Green appears more bluish to me, and its yellow tint develops with increasing daylight only.
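To make the shift described in the Physiology section concrete, the short sketch below compares an illustrative "cone" and "rod" sensitivity at a red and a blue-green wavelength. The Gaussian curves, the assumed 555 nm photopic peak, and the 80 nm width are stand-ins chosen only for illustration; they are not the CIE standard luminosity functions and are not taken from the article.

```python
# Minimal sketch of the Purkinje effect with crude, assumed sensitivity curves.
import math

def gaussian(wavelength_nm, peak_nm, width_nm):
    """Bell-shaped stand-in for a luminous efficiency curve."""
    return math.exp(-((wavelength_nm - peak_nm) / width_nm) ** 2)

def photopic(wl):
    # Cone-dominated (daylight) sensitivity; 555 nm peak is an assumed value.
    return gaussian(wl, 555.0, 80.0)

def scotopic(wl):
    # Rod-dominated (dark-adapted) sensitivity; 507 nm peak as stated in the article.
    return gaussian(wl, 507.0, 80.0)

red, blue_green = 630.0, 470.0
for label, wl in (("red 630 nm", red), ("blue-green 470 nm", blue_green)):
    print(f"{label}: photopic {photopic(wl):.2f}, scotopic {scotopic(wl):.2f}")

# The red/blue-green brightness ratio flips between the two regimes.
print("photopic red/blue-green ratio:", round(photopic(red) / photopic(blue_green), 2))
print("scotopic red/blue-green ratio:", round(scotopic(red) / scotopic(blue_green), 2))
```

With these stand-in curves, the red patch comes out brighter than the blue-green patch under cone-dominated conditions but much darker under rod-dominated conditions, which is the reversal Purkyně observed at dawn.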
Biology and health sciences
Visual system
Biology
1706901
https://en.wikipedia.org/wiki/One-pot%20synthesis
One-pot synthesis
In chemistry, a one-pot synthesis is a strategy to improve the efficiency of a chemical reaction in which a reactant is subjected to successive chemical reactions in just one reactor. This is much desired by chemists because avoiding a lengthy separation process and purification of the intermediate chemical compounds can save time and resources while increasing chemical yield. Examples of one-pot syntheses are the total synthesis of tropinone and the Gassman indole synthesis. Sequential one-pot syntheses can be used to generate even complex targets with multiple stereocentres, such as oseltamivir, which may significantly shorten the number of steps required overall and have important commercial implications. A sequential one-pot synthesis with reagents added to a reactor one at a time and without work-up is also called a telescoping synthesis. In one such procedure, the reaction of 3-N-tosylaminophenol I with acrolein II affords a hydroxyl-substituted quinoline III through four sequential steps without workup of the intermediate products. The addition of acrolein is a Michael reaction catalyzed by N,N-diisopropylamine; the presence of ethanol converts the aldehyde group to an acetal, but this process is reversed when hydrochloric acid is introduced. The regenerated aldehyde then reacts as an electrophile in a Friedel-Crafts reaction with ring-closure. The alcohol group is eliminated in the presence of potassium hydroxide, and when in the final step the reaction medium is neutralized to pH 7, the tosyl group is eliminated as well.
Physical sciences
Synthetic strategies
Chemistry
1707053
https://en.wikipedia.org/wiki/Habitat%20destruction
Habitat destruction
Habitat destruction (also termed habitat loss and habitat reduction) occurs when a natural habitat is no longer able to support its native species. The organisms once living there have either moved elsewhere or are dead, leading to a decrease in biodiversity and species numbers. Habitat destruction is in fact the leading cause of biodiversity loss and species extinction worldwide. Humans contribute to habitat destruction through the use of natural resources, agriculture, industrial production and urbanization (urban sprawl). Other activities include mining, logging and trawling. Environmental factors can contribute to habitat destruction more indirectly. Geological processes, climate change, introduction of invasive species, ecosystem nutrient depletion, and water and noise pollution are some examples. Loss of habitat can be preceded by an initial habitat fragmentation. Fragmentation and loss of habitat have become some of the most important topics of research in ecology, as they are major threats to the survival of endangered species.

Observations
By region
Biodiversity hotspots are chiefly tropical regions that feature high concentrations of endemic species and, when all hotspots are combined, may contain over half of the world's terrestrial species. These hotspots are suffering from habitat loss and destruction. Most of the natural habitat on islands and in areas of high human population density has already been destroyed (WRI, 2003). Islands suffering extreme habitat destruction include New Zealand, Madagascar, the Philippines, and Japan. South and East Asia (especially China, India, Malaysia, Indonesia, and Japan) and many areas in West Africa have extremely dense human populations that allow little room for natural habitat. Marine areas close to highly populated coastal cities also face degradation of their coral reefs or other marine habitat. Forest City, a township in southern Malaysia built on Environmentally Sensitive Area (ESA) Rank 1 wetland, is one such example, with irreversible reclamation proceeding prior to environmental impact assessments and approvals. Other such areas include the eastern coasts of Asia and Africa, northern coasts of South America, and the Caribbean Sea and its associated islands.

Regions of unsustainable agriculture or unstable governments, which may go hand-in-hand, typically experience high rates of habitat destruction. South Asia, Central America, Sub-Saharan Africa, and the Amazonian tropical rainforest areas of South America are the main regions with unsustainable agricultural practices and/or government mismanagement. Areas of high agricultural output tend to have the highest extent of habitat destruction. In the U.S., less than 25% of native vegetation remains in many parts of the East and Midwest. Only 15% of land area remains unmodified by human activities in all of Europe. Currently, changes occurring in different environments around the world are altering the specific geographical habitats that are suitable for plants to grow in. Therefore, the ability of plants to migrate to suitable areas will have a strong impact on the distribution of plant diversity. At the moment, however, the rates of plant migration under habitat loss and fragmentation remain poorly understood.

By type of ecosystem
Tropical rainforests have received most of the attention concerning the destruction of habitat.
From the approximately 16 million square kilometers of tropical rainforest habitat that originally existed worldwide, less than 9 million square kilometers remain today. The current rate of deforestation is 160,000 square kilometers per year, which equates to a loss of approximately 1% of the original forest habitat each year. Other forest ecosystems have suffered as much destruction as tropical rainforests, or more. Deforestation for farming and logging has severely disturbed at least 94% of temperate broadleaf forests; many old-growth forest stands have lost more than 98% of their previous area because of human activities. Tropical deciduous dry forests are easier to clear and burn and are more suitable for agriculture and cattle ranching than tropical rainforests; consequently, less than 0.1% of dry forests on Central America's Pacific coast and less than 8% in Madagascar remain from their original extents.

Plains and desert areas have been degraded to a lesser extent. Only 10–20% of the world's drylands, which include temperate grasslands, savannas, shrublands, scrub, and deciduous forests, have been somewhat degraded. But included in that 10–20% of land are the approximately 9 million square kilometers of seasonally dry lands that humans have converted to deserts through the process of desertification. The tallgrass prairies of North America, on the other hand, have less than 3% of natural habitat remaining that has not been converted to farmland.

Wetlands and marine areas have endured high levels of habitat destruction. More than 50% of wetlands in the U.S. have been destroyed in just the last 200 years. Between 60% and 70% of European wetlands have been completely destroyed. In the United Kingdom, there has been an increase in demand for coastal housing and tourism, which has caused a decline in marine habitats over the last 60 years. Rising sea levels and temperatures have caused soil erosion, coastal flooding, and loss of quality in the UK marine ecosystem. About one-fifth (20%) of marine coastal areas have been highly modified by humans. One-fifth of coral reefs have also been destroyed, and another fifth has been severely degraded by overfishing, pollution, and invasive species; 90% of the Philippines' coral reefs alone have been destroyed. Finally, over 35% of mangrove ecosystems worldwide have been destroyed.

Natural causes
Habitat destruction through natural processes such as volcanism, fire, and climate change is well documented in the fossil record. One study shows that habitat fragmentation of tropical rainforests in Euramerica 300 million years ago led to a great loss of amphibian diversity, but simultaneously the drier climate spurred a burst of diversity among reptiles.

Causes due to human activities
Habitat destruction caused by humans includes land conversion from forests and other ecosystems to arable land, urban sprawl, infrastructure development, and other anthropogenic changes to the characteristics of land. Habitat degradation, fragmentation, and pollution are aspects of habitat destruction caused by humans that do not necessarily involve overt destruction of habitat, yet result in habitat collapse. Desertification, deforestation, and coral reef degradation are specific types of habitat destruction for those areas (deserts, forests, coral reefs).

Overarching drivers
The forces that cause humans to destroy habitat are known as drivers of habitat destruction.
Demographic, economic, sociopolitical, scientific and technological, and cultural drivers all contribute to habitat destruction. Demographic drivers include the expanding human population; the rate of population increase over time; the spatial distribution of people in a given area (urban versus rural), ecosystem type, and country; and the combined effects of poverty, age, family planning, gender, and education status of people in certain areas. Most of the exponential human population growth worldwide is occurring in or close to biodiversity hotspots. This may explain why human population density accounts for 87.9% of the variation in numbers of threatened species across 114 countries, providing indisputable evidence that people play the largest role in decreasing biodiversity. The boom in human population and the migration of people into such species-rich regions are making conservation efforts not only more urgent but also more likely to conflict with local human interests. The high local population density in such areas is directly correlated with the poverty status of the local people, most of whom lack education and family planning.

According to the Geist and Lambin (2002) study, the underlying driving forces were prioritized as follows (with the percentage of the 152 cases in which the factor played a significant role): economic factors (81%), institutional or policy factors (78%), technological factors (70%), cultural or socio-political factors (66%), and demographic factors (61%). The main economic factors included commercialization and growth of timber markets (68%), which are driven by national and international demands; urban industrial growth (38%); low domestic costs for land, labor, fuel, and timber (32%); and increases in product prices, mainly for cash crops (25%). Institutional and policy factors included formal pro-deforestation policies on land development (40%), economic growth including colonization and infrastructure improvement (34%), and subsidies for land-based activities (26%); property rights and land-tenure insecurity (44%); and policy failures such as corruption, lawlessness, or mismanagement (42%). The main technological factor was the poor application of technology in the wood industry (45%), which leads to wasteful logging practices. Within the broad category of cultural and sociopolitical factors are public attitudes and values (63%), individual and household behavior (53%), public unconcern toward forest environments (43%), missing basic values (36%), and unconcern by individuals (32%). Demographic factors were the in-migration of colonizing settlers into sparsely populated forest areas (38%) and growing population density in those areas (25%), the latter a result of the first factor.

Forest conversion to agriculture
Geist and Lambin (2002) assessed 152 case studies of net losses of tropical forest cover to determine any patterns in the proximate and underlying causes of tropical deforestation. Their results, yielded as percentages of the case studies in which each parameter was a significant factor, provide a quantitative prioritization of which proximate and underlying causes were the most significant. The proximate causes were clustered into broad categories of agricultural expansion (96%), infrastructure expansion (72%), and wood extraction (67%). Therefore, according to this study, forest conversion to agriculture is the main land-use change responsible for tropical deforestation.
The specific categories reveal further insight into the specific causes of tropical deforestation: transport extension (64%), commercial wood extraction (52%), permanent cultivation (48%), cattle ranching (46%), shifting (slash and burn) cultivation (41%), subsistence agriculture (40%), and fuel wood extraction for domestic use (28%). One result is that shifting cultivation is not the primary cause of deforestation in all world regions, while transport extension (including the construction of new roads) is the largest single proximate factor responsible for deforestation.

Habitat size and numbers of species are systematically related. Physically larger species and those living at lower latitudes or in forests or oceans are more sensitive to reduction in habitat area. Conversion to "trivial" standardized ecosystems (e.g., monoculture following deforestation) effectively destroys habitat for the more diverse species. Even the simplest forms of agriculture affect diversity, through clearing or draining the land, discouraging weeds and pests, and encouraging just a limited set of domesticated plant and animal species.

There are also feedbacks and interactions among the proximate and underlying causes of deforestation that can amplify the process. Road construction has the largest feedback effect, because it interacts with, and leads to, the establishment of new settlements and more people, which causes a growth in wood (logging) and food markets. Growth in these markets, in turn, advances the commercialization of the agriculture and logging industries. When these industries become commercialized, they must become more efficient by utilizing larger or more modern machinery that often has a worse effect on the habitat than traditional farming and logging methods. Either way, more land is cleared more rapidly for commercial markets. This common feedback example demonstrates just how closely related the proximate and underlying causes are to each other.

Climate change
Climate change contributes to the destruction of some habitats, endangering various species. For example:
- Climate change causes rising sea levels, which will threaten natural habitats and species globally.
- Melting sea ice destroys habitat for some species. For example, the decline of sea ice in the Arctic has been accelerating during the early twenty-first century, with a decline rate of 4.7% per decade (it has declined over 50% since the first satellite records). One well-known example of a species affected is the polar bear, whose habitat in the Arctic is threatened. Algae can also be affected when they grow on the underside of sea ice.
- Warm-water coral reefs are very sensitive to global warming and ocean acidification. Coral reefs provide a habitat for thousands of species, and they provide ecosystem services such as coastal protection and food. But 70–90% of today's warm-water coral reefs will disappear even if warming is kept to 1.5 °C. For example, Caribbean coral reefs, which are biodiversity hotspots, will be lost within the century if global warming continues at the current rate.

Habitat fragmentation

Impacts
On animals and plants
When a habitat is destroyed, the carrying capacity for indigenous plants, animals, and other organisms is reduced so that populations decline, sometimes to the point of extinction. Habitat loss is perhaps the greatest threat to organisms and biodiversity. Temple (1986) found that 82% of endangered bird species were significantly threatened by habitat loss.
Most amphibian species are also threatened by native habitat loss, and some species now breed only in modified habitat. Endemic organisms with limited ranges are most affected by habitat destruction, mainly because these organisms are not found anywhere else in the world and thus have less chance of recovering. Many endemic organisms have very specific requirements for their survival that can only be found within a certain ecosystem, so destruction of that ecosystem results in their extinction. Extinction may also take place very long after the destruction of habitat, a phenomenon known as extinction debt. Habitat destruction can also decrease the range of certain organism populations. This can result in the reduction of genetic diversity and perhaps the production of infertile young, as these organisms would have a higher likelihood of mating with related individuals within their population, or with different species. One of the most famous examples is the impact upon China's giant panda, once found in many areas of Sichuan. Now it is only found in fragmented and isolated regions in the southwest of the country, as a result of widespread deforestation in the 20th century.

As habitat destruction of an area proceeds, the species diversity shifts from a combination of habitat generalists and specialists to a population consisting primarily of generalist species. Invasive species are frequently generalists that are able to survive in much more diverse habitats. Habitat destruction that also contributes to climate change upsets the balance of species just keeping pace with their extinction threshold, leading to a higher likelihood of extinction. Habitat loss is one of the main environmental causes of the decline of biodiversity at local, regional, and global scales. Many believe that habitat fragmentation is also a threat to biodiversity, although some consider it secondary to habitat loss. The reduction of the amount of habitat available results in landscapes made of isolated patches of suitable habitat within a hostile environment or matrix. This process is generally due to pure habitat loss as well as fragmentation effects. Pure habitat loss refers to changes in the composition of the landscape that cause a decrease in the number of individuals. Fragmentation effects refer to the additional effects that arise from the way the remaining habitat is broken up. Habitat loss can negatively affect the dynamics of species richness. The order Hymenoptera is a diverse group of plant pollinators that is highly susceptible to the negative effects of habitat loss; this could result in a domino effect on plant-pollinator interactions, with major conservation implications for this group. The world's longest-running fragmentation experiment, spanning over 35 years, has observed that habitat fragmentation decreases biodiversity by 13% to 75%.

On human population
Habitat destruction can vastly increase an area's vulnerability to natural disasters like flood and drought, crop failure, spread of disease, and water contamination. On the other hand, a healthy ecosystem with good management practices can reduce the chance of these events happening, or will at least mitigate adverse impacts. Eliminating swamps, the habitat of pests such as mosquitoes, has contributed to the prevention of diseases such as malaria. Completely depriving an infectious agent (such as a virus) of its habitat, by vaccination for example, can result in eradicating that infectious agent.
Agricultural land can suffer from the destruction of the surrounding landscape. Over the past 50 years, the destruction of habitat surrounding agricultural land has degraded approximately 40% of agricultural land worldwide via erosion, salinization, compaction, nutrient depletion, pollution, and urbanization. Humans also lose direct uses of natural habitat when habitat is destroyed. Aesthetic uses such as birdwatching, recreational uses like hunting and fishing, and ecotourism usually rely upon relatively undisturbed habitat. Many people value the complexity of the natural world and express concern at the loss of natural habitats and of animal or plant species worldwide.

Probably the most profound impact that habitat destruction has on people is the loss of many valuable ecosystem services. Habitat destruction has altered the nitrogen, phosphorus, sulfur, and carbon cycles, which has increased the frequency and severity of acid rain, algal blooms, and fish kills in rivers and oceans and contributed tremendously to global climate change. One ecosystem service whose significance is becoming better understood is climate regulation. On a local scale, trees provide windbreaks and shade; on a regional scale, plant transpiration recycles rainwater and maintains constant annual rainfall; on a global scale, plants (especially trees in tropical rainforests) around the world counter the accumulation of greenhouse gases in the atmosphere by sequestering carbon dioxide through photosynthesis. Other ecosystem services that are diminished or lost altogether as a result of habitat destruction include watershed management, nitrogen fixation, oxygen production, pollination (see pollinator decline), waste treatment (i.e., the breaking down and immobilization of toxic pollutants), and nutrient recycling of sewage or agricultural runoff. The loss of trees from tropical rainforests alone represents a substantial diminishing of Earth's ability to produce oxygen and to use up carbon dioxide. These services are becoming even more important as increasing carbon dioxide levels are one of the main contributors to global climate change.

The loss of biodiversity may not directly affect humans, but the indirect effects of losing many species, as well as the diversity of ecosystems in general, are enormous. When biodiversity is lost, the environment loses many species that perform valuable and unique roles in the ecosystem. The environment and all its inhabitants rely on biodiversity to recover from extreme environmental conditions. When too much biodiversity is lost, a catastrophic event such as an earthquake, flood, or volcanic eruption could cause an ecosystem to crash, and humans would suffer as a result. Loss of biodiversity also means that humans are losing animals that could have served as biological-control agents and plants that could potentially provide higher-yielding crop varieties, pharmaceutical drugs to cure existing or future diseases (such as cancer), and new resistant crop varieties for agricultural species susceptible to pesticide-resistant insects or virulent strains of fungi, viruses, and bacteria.

The negative effects of habitat destruction usually impact rural populations more directly than urban populations. Across the globe, poor people suffer the most when natural habitat is destroyed, because less natural habitat means fewer natural resources per capita, yet wealthier people and countries can simply pay more to continue to receive more than their per capita share of natural resources.
Another way to view the negative effects of habitat destruction is to look at the opportunity cost of destroying a given habitat. In other words, what do people lose out on with the removal of a given habitat? A country may increase its food supply by converting forest land to row-crop agriculture, but the value of the same land may be much larger when it can supply natural resources or services such as clean water, timber, ecotourism, or flood regulation and drought control. Outlook The rapid expansion of the global human population is increasing the world's food requirement substantially. Simple logic dictates that more people will require more food. In fact, as the world's population increases dramatically, agricultural output will need to increase by at least 50% over the next 30 years. In the past, continually moving to new land and soils provided a boost in food production to meet the global food demand. That easy fix will no longer be available, however, as more than 98% of all land suitable for agriculture is already in use or degraded beyond repair. The impending global food crisis will be a major source of habitat destruction. Commercial farmers are going to become desperate to produce more food from the same amount of land, so they will use more fertilizers and show less concern for the environment in order to meet the market demand. Others will seek out new land or will convert other land uses to agriculture. Agricultural intensification will become widespread at the cost of the environment and its inhabitants. Species will be pushed out of their habitat either directly by habitat destruction or indirectly by fragmentation, degradation, or pollution. Any efforts to protect the world's remaining natural habitat and biodiversity will compete directly with humans' growing demand for natural resources, especially new agricultural lands. Solutions Attempts to address habitat destruction are reflected in international policy commitments such as Sustainable Development Goal 15 "Life on Land" and Sustainable Development Goal 14 "Life Below Water". However, the United Nations Environment Programme report "Making Peace with Nature", released in 2021, found that most of these efforts had failed to meet their internationally agreed-upon goals. Tropical deforestation: In most cases of tropical deforestation, three to four underlying causes are driving two to three proximate causes. This means that a universal policy for controlling tropical deforestation would not be able to address the unique combination of proximate and underlying causes of deforestation in each country. Before any local, national, or international deforestation policies are written and enforced, governmental leaders must acquire a detailed understanding of the complex combination of proximate causes and underlying driving forces of deforestation in a given area or country. This concept, along with many other results of tropical deforestation from the Geist and Lambin study, can easily be applied to habitat destruction in general. Shoreline erosion: Coastal erosion is a natural process that occurs as storms, waves, tides and other water-level changes act on the shore. Shoreline stabilization can be achieved with barriers between land and water such as seawalls and bulkheads. Living shorelines are gaining attention as a new stabilization method. 
These can reduce damage and erosion while simultaneously providing ecosystem services to society, such as food production, nutrient and sediment removal, and water quality improvement. Preventing an area from losing its specialist species to generalist invasive species depends on the extent of the habitat destruction that has already taken place. In areas where the habitat is relatively undisturbed, halting further habitat destruction may be enough. In areas where habitat destruction is more extreme (fragmentation or patch loss), restoration ecology may be needed. Education of the general public is possibly the best way to prevent further human-caused habitat destruction. Perceptions need to shift so that the slow creep of environmental impacts is no longer viewed as acceptable but is instead seen as a reason to move to more sustainable practices. Education about the necessity of family planning to slow population growth is important, as a greater population leads to greater human-caused habitat destruction. Habitat restoration can also take place through two processes: extending habitats and repairing habitats. Extending habitats aims to counteract habitat loss and fragmentation, whereas repairing habitats counteracts degradation. The preservation and creation of habitat corridors can link isolated populations and increase pollination. Corridors are also known to reduce the negative impacts of habitat destruction. The greatest potential for solving the issue of habitat destruction lies in resolving the political, economic and social problems that go along with it, such as individual and commercial material consumption, sustainable extraction of resources, conservation areas, restoration of degraded land, and addressing climate change. Governmental leaders need to take action by addressing the underlying driving forces, rather than merely regulating the proximate causes. In a broader sense, governmental bodies at a local, national, and international scale need to emphasize: Considering the irreplaceable ecosystem services provided by natural habitats. Protecting remaining intact sections of natural habitat. Finding ecological ways to increase agricultural output without increasing the total land in production. Reducing human population and expansion. Apart from improving access to contraception globally, furthering gender equality also has a great benefit. When women have the same education and decision-making power, this generally leads to smaller families. It is argued that the effects of habitat loss and fragmentation can be counteracted by including spatial processes in potential restoration management plans. However, even though spatial dynamics are incredibly important in the conservation and recovery of species, only a limited number of management plans take the spatial effects of habitat restoration and conservation into consideration.
Biology and health sciences
Ecology
Biology
1708899
https://en.wikipedia.org/wiki/Ichthyornis
Ichthyornis
Ichthyornis (meaning "fish bird", after its fish-like vertebrae) is an extinct genus of toothed seabird-like ornithuran from the Late Cretaceous period of North America. Its fossil remains are known from the chalks of Alberta, Alabama, Kansas (Greenhorn Limestone), New Mexico, Saskatchewan, and Texas, in strata that were laid down in the Western Interior Seaway during the Turonian through Campanian ages, about 95–83.5 million years ago. Ichthyornis is a common component of the Niobrara Formation fauna, and numerous specimens have been found. Ichthyornis has been historically important in shedding light on bird evolution. It was the first known prehistoric bird relative preserved with teeth, and Charles Darwin noted its significance during the early years of the theory of evolution. Ichthyornis remains important today as it is one of the few Mesozoic era ornithurans known from more than a few specimens. Description It is thought that Ichthyornis was the Cretaceous ecological equivalent of modern seabirds such as gulls, petrels, and skimmers. An average specimen was the size of a pigeon, long, with a skeletal wingspan (not taking feathers into account) of around , though there is considerable size variation among known specimens, with some smaller and some much larger than the type specimen of I. dispar. Ichthyornis is notable primarily for its combination of vertebrae which are concave both in front and back (similar to some fish, which is where it gets its name) and several more subtle features of its skeleton which set it apart from its close relatives. Ichthyornis is perhaps most well known for its teeth. The teeth were present only in the middle portion of the upper and lower jaws. The jaw tips had no teeth and were covered in a beak. The beak of Ichthyornis, like that of the hesperornithids, was compound and made up of several distinct plates, similar to the beak of an albatross, rather than a single sheet of keratin as in most modern birds. The teeth were more flattened than the rounded teeth found in crocodilians, though they became wider towards the base of the crown. The tips of the teeth were curved backward and lacked any serrations. They were arranged in a groove, much like those of marine reptiles. The wings and breastbone were very modern in appearance, suggesting strong flight ability and placing it with modern birds in the advanced group Carinatae. Unlike earlier avialans such as the enantiornithines, the species appears to have matured to adulthood in a rather short, continuous process. A study on an Ichthyornis endocast reveals that it had a relatively basal brain compared to modern birds, similar to that of Archaeopteryx and other non-avian theropods. Conversely, it had a palate remarkably convergent with that of modern neognaths. Timespan and evolution Ichthyornis fossils have been found in almost all levels of the Niobrara Chalk, from beds dating to the late Coniacian age (about 89 million years ago) to the Campanian age (about 83.5 million years ago). Even earlier remains attributed to Ichthyornis have been found in the Greenhorn Formation of Kansas, dating to the early Turonian age (about 93 million years ago). Specimens of Ichthyornis from earlier eras were, on average, smaller than later ones. The holotype specimen of Ichthyornis dispar, YPM 1450, had a humerus about long. In many geologically younger specimens like YPM 1742, the same wing bone was long. 
Both the older, smaller specimens, and the more recent, larger specimens show signs that they had reached skeletal maturity and were adults, and came from the same geographic area. It is likely that Ichthyornis dispar as a species increased in size over the several million years it inhabited the Western Interior Seaway ecosystem. History of study Ichthyornis was one of the first Mesozoic avialans ever found and the first one known to have had teeth, making it an important discovery in the early history of paleontology. It remains important today, as it represents one of the closest non-avian relatives of modern birds, and one of a handful of Mesozoic bird relatives represented by numerous specimens. Ichthyornis was discovered in 1870 by Benjamin Franklin Mudge, a professor from Kansas State Agricultural College who recovered the initial fossils from the North Fork of the Solomon River in Kansas, United States. Mudge was a prolific fossil collector who shipped his discoveries to prominent scientists for study. Mudge had previously had a close partnership with paleontologist Edward Drinker Cope of the Academy of Natural Sciences in Philadelphia. However, as described by S.W. Williston in 1898, Mudge was soon contacted by Othniel Charles Marsh, Cope's rival in the so-called Bone Wars, a rush to collect and identify fossils in the American West. Marsh wrote to Mudge in 1872 and offered to identify any important fossils free of charge, and to give Mudge sole credit for their discovery. Marsh had been a friend of Mudge when they were younger, so when Mudge learned of Marsh's request, he changed the address on the shipping crate containing the Ichthyornis specimen (which had already been addressed to Cope and was ready to be sent), and shipped it to Marsh instead. Marsh had narrowly won the prestige of studying and naming the important fossil at the expense of his rival. However, Marsh did not initially recognize the true importance of the fossil. Soon after receiving it, he reported back to Mudge his opinion that the chalk slab contained the bones of two distinct animals: a small bird, and the toothed jaws of some unknown reptile. Marsh considered the unusual vertebrae of the bird to resemble those of a fish, so he named it Ichthyornis, or "fish bird." Later in 1872, Marsh described the toothed jaws as a new species of marine reptile, named Colonosaurus mudgei after their discoverer. The similarity of the lower jaw and teeth to those of mosasaurs was so great that, as late as 1952, J.T. Gregory argued that it really belonged to a diminutive species or young individual related to the genus Clidastes. By early in 1873, Marsh had recognized his error. Through further preparation and exposure of skull bones from the rock, he found that the toothed jaws must have come from the bird itself and not a marine reptile. Due to the previously unknown features of Ichthyornis (vertebrae concave on either side and teeth), Marsh chose to classify it in an entirely new sub-class of birds he called the Odontornithes (or "toothed birds"), and in the new order Ichthyornithes (later Ichthyornithiformes). The only other bird Marsh included in these groups was the newly named Apatornis, which he had previously named as a species of Ichthyornis, I. celer. 
Mudge later noted the rare and unique quality of these toothed birds (including Hesperornis, which was found to also have teeth by 1877), and the irony of their association with the remains of toothless pterosaurs, flying reptiles which were only known to have had teeth in other regions of the world at that time. Soon after these discoveries, Ichthyornis was recognized for its significance to the theory of evolution recently published by Charles Darwin. Darwin himself told Marsh in an 1880 letter that Ichthyornis and Hesperornis offered "the best support for the theory of evolution" since he had first published On the Origin of Species in 1859. (While Archaeopteryx was the first known Mesozoic avialan and is now known to have also had teeth, the first specimen with a skull was not described until 1884). Others at the time also recognized the implications of a nearly modern bird with reptilian teeth, and feared the controversy it caused. One Yale student described various men and women urging Marsh to conceal Ichthyornis from the public because it lent too much support to evolutionary theory. Many accused Marsh of having tampered with the fossils or intentionally created a hoax by associating reptilian jaws with the body of a bird, accusations that continued to surface even as late as 1967. However, an overwhelming majority of researchers have demonstrated that Marsh's interpretation of the fossils was correct, and he was fully vindicated by later finds. Mounted specimens At the turn of the 20th century, the Peabody Museum of Natural History at Yale University, where most Ichthyornis specimens were housed, began placing many of its most interesting or important specimens on display in the museum's Great Hall. Two panel mounts (that is, pieces where the skeleton is arranged and set into a plaster slab) were created for Ichthyornis; one for I. dispar, and one for "I. victor". Both were created by Hugh Gibb, who prepared many of Marsh's fossils for study and display. The I. dispar mount contained only the holotype fossils, while the "I. victor" mount was a composite incorporating a variety of different specimens to make the piece appear more complete (it did not, however, contain any part of the actual "I. victor" holotype specimen). At some point before 1937, the catalogue number of the actual "I. victor" type specimen was mistakenly reassigned to the panel mount. Later reports of the specimen, even by the Peabody Museum's staff, therefore mistakenly stated that the original "I. victor" specimen comprised most of the skeleton, when it was in fact only three bones. By 1997, the situation had become so confused that Jacques Gauthier, the current curator of the museum's vertebrate paleontology collection, authorized the dismantling of both panel mounts. This allowed the bones to be properly sorted out and studied in three dimensions, which had been impossible previously when they were embedded in plaster. A full re-description of these specimens was published by paleontologist Julia Clarke in 2004. Classification Ichthyornis is close to the ancestry of modern birds, the Aves, but represents an independent lineage. It was long believed that it was closely related to some other Cretaceous taxa known from very fragmentary remains – Ambiortus, Apatornis, Iaceornis and Guildavis – but these seem to be closer to the ancestors of modern birds than to Ichthyornis dispar. 
Following Clarke's 2004 review, the former order Ichthyornithiformes and the family Ichthyornithidae are superseded by the clade Ichthyornithes, which in the paper was also defined according to phylogenetic taxonomy as all descendants of the most recent common ancestor of Ichthyornis dispar and modern birds. Of the several described species, only one, Ichthyornis dispar, is currently recognized, following the seminal review by Julia Clarke. Marsh had previously named a specimen now attributed to I. dispar as Graculavus anceps. Clarke argued that because the rules for naming animals laid out by the ICZN state that a type species for a genus must have originally been included in that genus, Ichthyornis anceps is ineligible to replace I. dispar as the type species and so must be considered a junior synonym even though it was named first. However, Michael Mortimer pointed out that this is incorrect; while I. anceps cannot become the type species of Ichthyornis, the ICZN does not preclude it from becoming the senior synonym of the type species I. dispar. Therefore, I. anceps should have been considered the correct name for the only recognized Ichthyornis species. All other supposed species of Ichthyornis have not been supported as valid. The presumed "Ichthyornis" lentos, for example, actually belongs to the early galliform genus Austinornis. "Ichthyornis" minusculus from the Bissekty Formation (Late Cretaceous) of Kyzyl Kum, Uzbekistan, is probably an enantiornithine. All other Ichthyornis species are synonymous with I. dispar. The cladogram below is the result of a 2014 analysis by Michael Lee and colleagues that expanded on data from an earlier study by O’Connor & Zhou in 2012. The clade names are positioned based on their definitions.
Biology and health sciences
Prehistoric birds
Animals
1708917
https://en.wikipedia.org/wiki/Hesperornis
Hesperornis
Hesperornis (meaning "western bird") is a genus of cormorant-like ornithurans that lived throughout the Campanian age, and possibly even up to the early Maastrichtian age, of the Late Cretaceous period. One of the lesser-known discoveries of the paleontologist O. C. Marsh in the late 19th-century Bone Wars, it was an early find in the history of avian paleontology. Locations for Hesperornis fossils include the Late Cretaceous marine limestones from Kansas and the marine shales from Canada. Nine species are recognised, eight of which have been recovered from rocks in North America and one from Russia. Description Hesperornis was a large bird, measuring about long and weighing around . It had virtually no wings, and swam with its powerful hind legs. Studies on the feet initially indicated that Hesperornis and kin had lobed toes similar to modern-day grebes, as opposed to webbed toes as seen in most aquatic birds such as loons. More recent work looking at the morphometrics of the feet in hesperornithiformes and modern sea birds has thrown this interpretation into question, making webbed toes as likely as lobed toes for this group. Like many other Mesozoic birds such as Ichthyornis, Hesperornis had teeth as well as a beak. In the hesperornithiform lineage they were of a different arrangement than in any other known bird (or in non-avian theropod dinosaurs), with the teeth sitting in a longitudinal groove rather than in individual sockets, in a notable case of convergent evolution with mosasaurs. The teeth of Hesperornis were present along nearly the entire lower jaw (dentary) and the back of the upper jaw (maxilla). The front portion of the upper jaw (premaxilla) and tip of the lower jaw (predentary) lacked teeth and were probably covered in a beak. Studies of the bone surface show that at least the tips of the jaws supported a hard, keratinous beak similar to that found in modern birds. The palate (mouth roof) contained small pits that allowed the lower teeth to lock into place when the jaws were closed. They also retained a primitive-like joint between the lower jaw bones. It is believed that this allowed them to rotate the back portion of the mandible independently of the front, thus allowing the lower teeth to disengage. History The first Hesperornis specimen was discovered in 1871 by Othniel Charles Marsh. Marsh was undertaking his second western expedition, accompanied by ten students. The team headed to Kansas where Marsh had dug before. Aside from finding more bones belonging to the flying reptile Pteranodon, Marsh discovered the skeleton of a "large fossil bird, at least five feet in height". The specimen was large, wingless, and had strong legs—Marsh considered it a diving species. Unfortunately, the specimen lacked a head. Marsh named the find Hesperornis regalis, or "regal western bird". Marsh headed back west with a smaller party the following year. In western Kansas, one of Marsh's four students, Thomas H. Russell, discovered a "nearly perfect skeleton" of Hesperornis. This specimen had enough of its head intact that Marsh could see that the creature's jaws had been lined with teeth. Marsh saw important evolutionary implications of this find, along with Benjamin Mudge's find of the toothed bird Ichthyornis. In an 1873 paper Marsh declared that "the fortunate discovery of these interesting fossils does much to break down the old distinction between Birds and Reptiles". 
Meanwhile, Marsh's relationship with his rival Edward Drinker Cope soured further after Cope accidentally received boxes of fossils, including the toothed birds, that were meant for Marsh. Cope called the birds "simply delightful", but Marsh replied with accusations that Cope had stolen the bones. By 1873 their friendship dissolved into open hostility, helping to spark the Bone Wars. While Marsh would rarely go into the field after 1873, the collectors he paid continued to send him a stream of fossils. He eventually received parts of 50 specimens of Hesperornis, which allowed him to make a much stronger demonstration of an evolutionary link between reptiles and birds than had been possible before. Classification and species Many species have been described in this genus, though some are known from very few bones or even a single bone and cannot be properly compared with the more plentiful (but also incomplete) remains of other similar-sized taxa. In many cases, species have been separated by provenance, having been found in strata of different ages or in different locations, or by differences in size. The first species to be described, the type species, is Hesperornis regalis. H. regalis is also the best known species, and dozens of specimens (from fragments to more complete skeletons) have been recovered, all from the Smoky Hill Chalk Member of the Niobrara Formation (dating to the early Campanian age, between 90 and 60 million years ago). It is the only species of Hesperornis for which a nearly complete skull is known. Hesperornis crassipes was named in 1876 by Marsh, who initially classified it in a different genus as Lestornis crassipes. H. crassipes was larger than H. regalis, had five ribs as opposed to four in the first species, and differed in aspects of the bone sculpturing on the breastbone and lower leg. H. crassipes is known from the same time and place as H. regalis. One incomplete skeleton is known, including teeth and parts of the skull. Marsh explicitly named his second species of Hesperornis in 1876 for an incomplete metatarsus recovered from the same layers of the Niobrara chalk as H. regalis. He named this smaller species H. gracilis, and it was subsequently involved in the rather confused taxonomy of a specimen which would eventually form the basis of the new genus and species Parahesperornis alexi. The type specimen of P. alexi was assumed to belong to the same specimen as that of H. gracilis, so when Lucas (1903) decided that the former specimen represented a distinct genus, he mistakenly used the latter specimen to anchor it, creating the name Hargeria gracilis. This mistake was rectified by later authors, who sank Hargeria back into Hesperornis and renamed the more distinctive specimen Parahesperornis. The first species recognized from outside the Niobrara chalk, Hesperornis altus, lived about 78 million years ago in Montana, and is known from a partial lower leg from the base of the freshwater Judith River Formation (or, possibly, the top of the underlying, marine Claggett Shale formation). While initially placed in the new genus Coniornis by Marsh, this was due mostly to his belief that Hesperornis existed only in Kansas, so any species from Montana should be placed in a different genus. Most later researchers disagreed with this, and have placed Coniornis altus in the same genus as Hesperornis as H. altus. A second species from Montana has also been described from the Claggett Shale. H. 
montana was named by Shufeldt in 1915, and while its known material (a single dorsal vertebra) cannot be directly compared to H. altus, Shufeldt and others have considered it distinct due to its apparently smaller size. In 1993, the first Hesperornis remains from outside of North America were recognized as a new species by Nessov and Yarkov. They named Hesperornis rossicus for a fragmentary skeleton from the early Campanian of Russia near Volgograd. Several other specimens from contemporary deposits have since been referred to this species. At about long, H. rossicus was the largest species of Hesperornis and among the largest hesperornithines, slightly smaller than the large Canadian genus Canadaga. Aside from its large size and different geographic location, H. rossicus differs from other Hesperornis in several features of the lower leg and foot, including a highly flattened metatarsus. In 2002, Martin and Lim formally recognized several new species for remains that had previously been unstudied or referred without consideration to previously named North American hesperornithines. These include the very small H. mengeli and H. macdonaldi, the slightly larger H. bairdi, and the very large H. chowi, all from the Sharon Springs member of the Pierre Shale Formation in South Dakota and Alberta, dating to about 80.5 million years ago. In addition, there are some unassigned remains, such as SGU 3442 Ve02 and LO 9067t and bones of an undetermined species from Tzimlyanskoe Reservoir near Rostov. The former two bones are probably H. rossicus; some remains assigned to that species in turn seem to belong to the latter undetermined taxon. It is also suggested that Hesperornis likely lived throughout the Campanian age based on remains found in middle to late Campanian-age rocks, and possibly even up to the early Maastrichtian age. Relationships In 2015, a species-level phylogenetic analysis found the following relationships among hesperornitheans. Paleobiology Hesperornis was primarily marine, and lived in the waters of such contemporary shallow shelf seas as the Western Interior Seaway, the Turgai Strait, and the North Sea, which then were subtropical to tropical waters, much warmer than today. However, some of the youngest known specimens of Hesperornis have been found in inland freshwater deposits of the Foremost Formation, suggesting that some species of Hesperornis may have eventually moved, at least partially, away from a primarily marine habitat. Additionally, the species H. altus comes from the freshwater deposits at the base of the Judith River Formation. Traditionally, Hesperornis is depicted with a mode of locomotion similar to modern loons or grebes, and study of their limb proportions and hip structure has borne out this comparison. In terms of limb length, shape of the hip bones, and position of the hip socket, Hesperornis is particularly similar to the common loon (Gavia immer), probably exhibiting a very similar manner of locomotion on land and in water. Like loons, Hesperornis were probably excellent foot-propelled divers, but might have been ungainly on land. Whereas its tibiotarsus moved like that of a loon while propelling Hesperornis through water, its toes moved and functioned more like those of a grebe. Like loons, the legs were probably encased inside the body wall up to the ankle, causing the feet to jut out to the sides near the tail. 
This would have prevented them from bringing the legs underneath the body to stand, or under the center of gravity to walk. Instead, they likely moved on land by pushing themselves along on their bellies, like modern seals. However, more recent studies on hesperornithean hindlimbs suggest they were more functionally similar to those of cormorants, which still walk upright. Young Hesperornis grew fairly quickly and continuously to adulthood, as is the case in modern birds, but not Enantiornithes. Pathology A Hesperornis leg bone uncovered in the 1960s was examined by David Burnham, Bruce Rothschild et al. and was found to bear bite marks from a young polycotylid plesiosaur (possibly a Dolichorhynchops or something similar). The bone, specifically the condyle, shows signs of infection, indicating the bird survived the initial attack and escaped the predator. The discovery was published in the journal Cretaceous Research in 2016.
Biology and health sciences
Prehistoric birds
Animals
1709033
https://en.wikipedia.org/wiki/Great%20Seto%20Bridge
Great Seto Bridge
The Great Seto Bridge is a series of double-deck bridges connecting Okayama and Kagawa prefectures in Japan across a series of five small islands in the Seto Inland Sea. Built over the period 1978–1988, it is one of the three routes of the Honshū–Shikoku Bridge Project connecting Honshū and Shikoku islands and the only one to carry rail traffic. The total length is , and the longest span, the Minami Bisan-Seto Bridge, is . Crossing the bridge takes about 20 minutes by car or train. The ferry crossing before the bridge was built took about an hour. The bridges carry two lanes of highway traffic in each direction (Seto-Chūō Expressway) on the upper deck and one railway track in each direction (Seto-Ōhashi Line) on the lower deck. The lower deck was designed to accommodate an additional set of Shinkansen tracks for the proposed construction of a Shinkansen line to Shikoku. History When in 1889 the first railway in Shikoku was completed between Marugame and Kotohira, a member of the Prefectural Parliament, Ōkubo, stated in his speech at the opening ceremony: "The four provinces of Shikoku are like so many remote islands. If united by roads, they will be much better off, enjoying the benefits of increased transportation and easier communication with each other." While it took a century for this vision of a bridge across the Seto Inland Sea to become reality, another of Ōkubo's ideas, mentioned in a drinking song he composed, was accomplished twenty years sooner: I'll tell you, dear, don't laugh at me, a hundred years from now, I'll be seeing you flying to and from the moon in a space ship. Its port, let me tell you, dear, will be that mountaintop over there! The bridge idea lay dormant for about sixty years. In 1955, after 171 people died when a ferry wrecked in dense fog off the coast of Takamatsu, a safer crossing was deemed necessary. By 1959, meetings were held to promote building the bridge. Scientists began investigations shortly after, and in 1970, the Honshu-Shikoku Bridge Construction Authority was inaugurated. However, work was postponed for five years by the "oil shock" of 1973; once the Environment Assessment Report was published in 1978, construction got underway. The ferry disaster also led to the creation of the Akashi Kaikyō Bridge. The project took ten years to complete at a cost of US$7 billion; of concrete and 705,000 tons of steel were used in construction. Although nets, ropes and other safety measures were employed, 17 workers were killed during the 10 years of construction. The bridge was opened to road and rail traffic on April 10, 1988, by then-Crown Prince Akihito. At the time of opening, the toll for a one-way drive on the bridge's highway was ¥6,300. Constituent bridges Six of the eleven bridges are separately named, unlike some other long bridge complexes such as the San Francisco–Oakland Bay Bridge. The other five bridges are viaducts. The six named bridges from north to south are listed below. Shimotsui-Seto Bridge The Shimotsui-Seto Bridge is a double-decked suspension bridge with a center span of and a total length of which connects Honshū with the island of Hitsuishijima. It is the 45th largest suspension bridge in the world. It is the northernmost bridge of the Seto-Chuo Expressway. Hitsuishijima Bridge The Hitsuishijima Bridge is a double-decked cable-stayed bridge with a center span of and a total length of . It is immediately north of the identical Iwakurojima Bridge. Iwakurojima Bridge The Iwakurojima Bridge is a double-decked cable-stayed bridge with a center span of and a total length of . It is immediately south of the identical Hitsuishijima Bridge. 
Yoshima Bridge The Yoshima Bridge is a continuous double-decked truss bridge with a main span of and a total of five spans with a length of . It is immediately south of the Hitsuishijima and Iwakurojima Bridges. Kita Bisan-Seto Bridge The Kita Bisan-Seto Bridge is a double-decked suspension bridge with two sections linked by a common anchorage between them. The center span is and the total length is . It is the 19th largest suspension bridge in the world. The nearly identical Minami Bisan-Seto Bridge is located immediately to the south. Minami Bisan-Seto Bridge The Minami Bisan-Seto Bridge is a double-decked suspension bridge with a center span of and a total length of . It is the 13th longest suspension bridge span in the world. It is the southernmost part of the Great Seto Bridge. The roadway of the bridge is above sea level. Sister bridges Golden Gate Bridge, San Francisco, California, United States Affiliated from April 5, 1988 Fatih Sultan Mehmet Bridge, Istanbul, Turkey Affiliated from July 3, 1988 Øresund Bridge, Malmö, Sweden and Copenhagen, Denmark Affiliated from May 24, 2008
Technology
Bridges
null
1709062
https://en.wikipedia.org/wiki/Chlamydia%20pneumoniae
Chlamydia pneumoniae
Chlamydia pneumoniae is a species of Chlamydia, an obligate intracellular bacterium that infects humans and is a major cause of pneumonia. It was known as the Taiwan acute respiratory agent (TWAR) from the names of the two original isolates – Taiwan (TW-183) and an acute respiratory isolate designated AR-39. Briefly, it was known as Chlamydophila pneumoniae, and that name is used as an alternate in some sources. In some cases, to avoid confusion, both names are given. Chlamydia pneumoniae has a complex life cycle and must infect another cell to reproduce; thus, it is classified as an obligate intracellular pathogen. The full genome sequence for C. pneumoniae was published in 1999. It also infects and causes disease in koalas, emerald tree boas (Corallus caninus), iguanas, chameleons, frogs, and turtles. The first known case of infection with C. pneumoniae was a case of conjunctivitis in Taiwan in 1950. There are no known cases of C. pneumoniae infection in humans before 1950. This atypical bacterium commonly causes pharyngitis, bronchitis, coronary artery disease and atypical pneumonia in addition to several other possible diseases. Life cycle and method of infection Chlamydia pneumoniae is a small gram-negative bacterium (0.2 to 1 μm) that undergoes several transformations during its life cycle. It exists as an elementary body (EB) between hosts. The EB is not biologically active, but is resistant to environmental stresses and can survive outside a host for a limited time. The EB travels from an infected person to the lungs of an uninfected person in small droplets and is responsible for infection. Once in the lungs, the EB is taken up by cells in a pouch called an endosome by a process called phagocytosis. However, the EB is not destroyed by fusion with lysosomes, as is typical for phagocytosed material. Instead, it transforms into a reticulate body (RB) and begins to replicate within the endosome. The reticulate bodies must use some of the host's cellular metabolism to complete their replication. The reticulate bodies then convert back to elementary bodies and are released back into the lung, often after causing the death of the host cell. The EBs are thereafter able to infect new cells, either in the same organism or in a new host. Thus, the lifecycle of C. pneumoniae is divided between the elementary body, which is able to infect new hosts but cannot replicate, and the reticulate body, which replicates but is not able to cause a new infection. Diseases Chlamydia pneumoniae is a common cause of pneumonia around the world; it is typically acquired by otherwise-healthy people and is a form of community-acquired pneumonia. Its treatment and diagnosis are different from historically recognized causes, such as Streptococcus pneumoniae. Because it does not gram stain well, and because the C. pneumoniae bacterium is very different from the many other bacteria causing pneumonia (in earlier days, it was even thought to be a virus), the pneumonia caused by C. pneumoniae is categorized as an "atypical pneumonia". One meta-analysis of serological data comparing prior C. pneumoniae infection in patients with and without lung cancer found results suggesting prior infection was associated with an increased risk of developing lung cancer. In research into the association between C. pneumoniae infection and atherosclerosis and coronary artery disease, serological testing, direct pathologic analysis of plaques, and in vitro testing suggest infection with C. 
pneumoniae is a significant risk factor for development of atherosclerotic plaques and atherosclerosis. C. pneumoniae infection increases adherence of macrophages to endothelial cells in vitro and aortas ex vivo. However, most current research and data are insufficient and do not define how often C. pneumoniae is found in atherosclerotic or normal vascular tissue. Chlamydia pneumoniae has also been found in the cerebrospinal fluid of patients diagnosed with multiple sclerosis. Chlamydia pneumoniae infection was first associated with wheezing, asthmatic bronchitis, and adult-onset asthma in 1991. Subsequent studies of bronchoalveolar lavage fluid from pediatric patients with asthma and also other severe chronic respiratory illnesses have demonstrated that over 50 percent had evidence of C. pneumoniae by direct organism identification. C. pneumoniae infection can trigger acute wheezing; if this becomes chronic, it is diagnosed as asthma. These observations suggest that acute C. pneumoniae infection is capable of causing protean manifestations of chronic respiratory illness that can lead to asthma. Macrolide antibiotic treatment can improve asthma in a subgroup of patients that remains to be clearly defined. Macrolide benefits were first suggested in two observational trials and two randomized controlled trials of azithromycin treatment for asthma. One of these RCTs and another macrolide trial suggest that the treatment effect may be greatest in patients with severe, refractory asthma. These clinical results correlate with epidemiological evidence that C. pneumoniae is positively associated with asthma severity and laboratory evidence that C. pneumoniae infection creates steroid resistance. A meta-analysis of 12 RCTs of macrolides for the long-term management of asthma found significant effects on asthma symptoms, quality of life, bronchial hyperreactivity and peak flow but not FEV1. More recent positive results of long-term treatment with azithromycin on asthma exacerbations and quality of life in patients with severe, refractory asthma have resulted in azithromycin now being recommended in international guidelines as a treatment option for these types of patients. A recent case series of 101 adults with asthma reported that macrolides (mostly azithromycin) and tetracyclines, either separately or in combination, appeared to be dramatically efficacious in a subgroup of "difficult-to-treat" patients with severe asthma (i.e., patients not necessarily refractory to high-dose inhaled corticosteroids but who did not take them), many of whom also had the "overlap syndrome" (asthma and COPD). Randomized, controlled trials that include these types of asthma patients are needed. Chlamydia pneumoniae infection has been associated with schizophrenia. Many other pathogens have been associated with schizophrenia as well. Chronic Chlamydia pneumoniae infection has also in some cases been found to be a cause of chronic fatigue syndrome (CFS) that can be resolved with antibiotics. Treatment The first-line antibiotics for treatment of Chlamydia pneumoniae are the macrolide erythromycin and the tetracyclines tetracycline and doxycycline. The macrolides clarithromycin and azithromycin are also effective. Chlamydia pneumoniae shows resistance to penicillin, ampicillin, and sulfa drugs, and hence these antibiotics are not recommended. Other antibiotics which may be effective include fluoroquinolones like levofloxacin, gatifloxacin, gemifloxacin, and moxifloxacin. 
Symptoms of Chlamydia pneumoniae infection often reappear after short or conventional courses of antibiotics. As a result, following confirmation of persistent infection by culture, intensive long-term treatment is recommended. Vaccine research There is currently no vaccine to protect against Chlamydia pneumoniae. Identification of immunogenic antigens is critical for the construction of an efficacious subunit vaccine against C. pneumoniae infections. Additionally, there is a general worldwide shortage of facilities that can identify or diagnose Chlamydia pneumoniae.
Biology and health sciences
Gram-negative bacteria
Plants
1710722
https://en.wikipedia.org/wiki/Pyrrhocoris%20apterus
Pyrrhocoris apterus
The firebug, Pyrrhocoris apterus, is a common insect of the family Pyrrhocoridae. Easily recognizable due to its striking red and black coloration, it may be confused with the similarly coloured though unrelated Corizus hyoscyami (cinnamon bug or squash bug). Pyrrhocoris apterus is distributed throughout the Palaearctic from the Atlantic coast of Europe to northwest China. It has also been reported from the United States, Central America, and India, and is found in Australia as well. It has recently been reported to be expanding its distribution northwards into the mainland United Kingdom and eastward onto the coast of the Mediterranean Sea. They are frequently observed to form aggregations, especially as immature forms, containing from tens to perhaps a hundred individuals. Reproduction Firebugs generally mate in April and May. Their diet consists primarily of seeds from lime trees and mallows. They can often be found in groups near the base of lime tree trunks, on the sunny side. They can be seen in tandem formation when mating, which can take from 12 hours up to 7 days. The long copulation period is probably used by the males as a form of ejaculate guarding under strong competition with other males. Development P. apterus was the subject of an unexpected discovery in the 1960s when researchers who had for ten years been rearing the bugs in Prague, Czech Republic, attempted to do the same at Harvard University in the United States. After the fifth nymphal instar, instead of developing into adults, the bugs either entered a sixth instar stage, or became adults with nymphal characteristics. Some of the sixth instars went on to a seventh instar, but all specimens died without reaching maturity. The source of the problem was eventually proven to be the paper towels used in the rearing process; the effect only happened if the paper towels were made in America. The researchers could replicate these results with American newspapers such as the New York Times, but not European newspapers such as The Times. The cause was traced to hormones in the native balsam fir tree (Abies balsamea) used to manufacture paper and related products in America, and in some other North American conifers. This hormone happened to have a profound effect on P. apterus, but not on other insect species, illustrating the diversification of hormone receptors among insects. The most potent chemical component was later identified as juvabione, the methyl ester of todomatuic acid, which is produced by the trees in response to wounding; it closely mimics juvenile hormone at the chemical level, defending the tree against susceptible insect pests.
Biology and health sciences
Hemiptera (true bugs)
Animals
13973166
https://en.wikipedia.org/wiki/Sagittarius%20B2
Sagittarius B2
Sagittarius B2 (Sgr B2) is a giant molecular cloud of gas and dust that is located about from the center of the Milky Way. This complex is the largest molecular cloud in the vicinity of the core and one of the largest in the galaxy, spanning a region about across. The total mass of Sgr B2 is about 3 million times the mass of the Sun. The mean hydrogen density within the cloud is 3000 atoms per cm³, which is about 20–40 times denser than a typical molecular cloud. The internal structure of this cloud is complex, with varying densities and temperatures. The cloud is divided into three main cores, designated north (N), middle or main (M) and south (S) respectively. Thus Sgr B2(N) represents the north core. The sites Sgr B2(M) and Sgr B2(N) are sites of prolific star formation. The first 10 H II regions discovered were designated A through J. H II regions A–G, I and J lie within Sgr B2(M), while region K is in Sgr B2(N) and region H is in Sgr B2(S). The 5-parsec-wide core of the cloud is a star-forming region that is emitting about 10 million times the luminosity of the Sun. The cloud is composed of various kinds of complex molecules; of particular interest are the alcohols. The cloud contains ethanol, vinyl alcohol, and methanol, which form as atoms combine into new molecules. The composition was identified spectroscopically during a search for amino acids. An ester, ethyl formate, was also discovered, which is a major precursor to amino acids. This ester is also responsible for the flavour of raspberries, leading some articles on Sagittarius B2 to describe the cloud as smelling of ‘raspberry rum’. Large quantities of butyronitrile (propyl cyanide) and other alkyl cyanides have also been detected in the cloud. Temperatures in the cloud vary from in dense star-forming regions to in the surrounding envelope. Because the average temperature and pressure in Sgr B2 are low, chemistry based on the direct interaction of atoms is exceedingly slow. However, the Sgr B2 complex contains cold dust grains consisting of a silicate core surrounded by a mantle of water ice and various carbon compounds. The surfaces of these grains allow chemical reactions to occur by accreting molecules that can then interact with neighboring compounds. The resulting compounds can then evaporate from the surface and join the molecular cloud. The molecular components of this cloud can be readily observed in the 10²–10³ μm range of wavelengths. About half of all the known interstellar molecules were first found near Sgr B2, and nearly every other currently known molecule has since been detected in this feature. The European Space Agency's gamma-ray observatory INTEGRAL has observed gamma rays interacting with Sgr B2, causing X-ray emission from the molecular cloud. This energy was emitted about 350 years prior by the supermassive black hole (SMBH) at the galaxy's core, Sagittarius A*. The total luminosity from this outburst is an estimated million times stronger than the current output from Sagittarius A*. This conclusion was supported in 2011 by Japanese astronomers who observed the Galactic Center with the Suzaku satellite.
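As a rough back-of-the-envelope sketch of how a mass of this order follows from the density and size scale of such a cloud (the radius used below is an assumed illustrative value, not a figure taken from this article), the mass of a roughly spherical cloud with mean hydrogen number density n_H and radius R is approximately

M \approx \mu \, m_{\mathrm{H}} \, n_{\mathrm{H}} \, \tfrac{4}{3}\pi R^{3},

where m_H is the mass of a hydrogen atom and \mu \approx 1.4 accounts for helium and heavier elements. Taking n_H \approx 3000\ \mathrm{cm^{-3}} and an assumed radius of order 20 pc gives a mass of a few million solar masses, consistent with the roughly 3-million-solar-mass figure quoted above.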
Physical sciences
Notable nebulae
Astronomy
13980768
https://en.wikipedia.org/wiki/Visual%20Studio
Visual Studio
Visual Studio is an integrated development environment (IDE) developed by Microsoft. It is used to develop computer programs including websites, web apps, web services and mobile apps. Visual Studio uses Microsoft software development platforms including Windows API, Windows Forms, Windows Presentation Foundation (WPF), Microsoft Store and Microsoft Silverlight. It can produce both native code and managed code. Visual Studio includes a code editor supporting IntelliSense (the code completion component) as well as code refactoring. The integrated debugger works as both a source-level debugger and as a machine-level debugger. Other built-in tools include a code profiler, a designer for building GUI applications, a web designer, a class designer, and a database schema designer. It accepts plug-ins that expand the functionality at almost every level—including adding support for source control systems (like Subversion and Git) and adding new toolsets like editors and visual designers for domain-specific languages or toolsets for other aspects of the software development lifecycle (like the Azure DevOps client: Team Explorer). Visual Studio supports 36 different programming languages and allows the code editor and debugger to support (to varying degrees) nearly any programming language, provided a language-specific service exists. Built-in languages include C, C++, C++/CLI, Visual Basic .NET, C#, F#, JavaScript, TypeScript, XML, XSLT, HTML, and CSS. Support for other languages such as Python, Ruby, Node.js, and M among others is available via plug-ins. Java (and J#) were supported in the past. The most basic edition of Visual Studio, the Community edition, is available free of charge. The slogan for the Visual Studio Community edition is "Free, fully-featured IDE for students, open-source and individual developers". Visual Studio 2022 is a current production-ready version. Visual Studio 2013, 2015 and 2017 are on Extended Support, while 2019 is on Mainstream Support. Architecture Visual Studio does not support any programming language, solution or tool intrinsically; instead, it allows the plugging of functionality coded as a VSPackage. When installed, the functionality is available as a Service. The IDE provides three services: SVsSolution, which provides the ability to enumerate projects and solutions; SVsUIShell, which provides windowing and UI functionality (including tabs, toolbars, and tool windows); and SVsShell, which deals with registration of VSPackages. In addition, the IDE is also responsible for coordinating and enabling communication between services. All editors, designers, project types and other tools are implemented as VSPackages. Visual Studio uses COM to access the VSPackages. The Visual Studio SDK also includes the Managed Package Framework (MPF), which is a set of managed wrappers around the COM interfaces that allow the packages to be written in any CLI-compliant language. However, MPF does not provide all the functionality exposed by the Visual Studio COM interfaces. The services can then be consumed for creation of other packages, which add functionality to the Visual Studio IDE. Support for programming languages is added by using a specific VSPackage called a Language Service. A language service defines various interfaces which the VSPackage implementation can implement to add support for various functionalities. 
Functionalities that can be added this way include syntax coloring, statement completion, brace matching, parameter information tooltips, member lists, and error markers for background compilation. If the interface is implemented, the functionality will be available for the language. Language services are implemented on a per-language basis. The implementations can reuse code from the parser or the compiler for the language. Language services can be implemented either in native code or managed code. For native code, either the native COM interfaces or the Babel Framework (part of Visual Studio SDK) can be used. For managed code, the MPF includes wrappers for writing managed language services. Visual Studio does not include any source control support built in but it defines two alternative ways for source control systems to integrate with the IDE. A Source Control VSPackage can provide its own customised user interface. In contrast, a source control plugin using the MSSCCI (Microsoft Source Code Control Interface) provides a set of functions that are used to implement various source control functionality, with a standard Visual Studio user interface. MSSCCI was first used to integrate Visual SourceSafe with Visual Studio 6.0 but was later opened up via the Visual Studio SDK. Visual Studio .NET 2002 used MSSCCI 1.1, and Visual Studio .NET 2003 used MSSCCI 1.2. Visual Studio 2005, 2008, and 2010 use MSSCCI Version 1.3, which adds support for rename and delete propagation, as well as asynchronous opening. Visual Studio supports running multiple instances of the environment (each with its own set of VSPackages). The instances use different registry hives (see MSDN's definition of the term "registry hive" in the sense used here) to store their configuration state and are differentiated by their AppId (Application ID). The instances are launched by an AppId-specific .exe that selects the AppId, sets the root hive, and launches the IDE. VSPackages registered for one AppId are integrated with other VSPackages for that AppId. The various product editions of Visual Studio are created using the different AppIds. The Visual Studio Express edition products are installed with their own AppIds, but the Standard, Professional, and Team Suite products share the same AppId. Consequently, one can install the Express editions side-by-side with other editions, unlike the other editions which update the same installation. The professional edition includes a superset of the VSPackages in the standard edition, and the team suite includes a superset of the VSPackages in both other editions. The AppId system is leveraged by the Visual Studio Shell in Visual Studio 2008. Features Code editor Visual Studio includes a code editor that supports syntax highlighting and code completion using IntelliSense for variables, functions, methods, loops, and LINQ queries. IntelliSense is supported for the included languages, as well as for XML, Cascading Style Sheets, and JavaScript when developing web sites and web applications. Autocomplete suggestions appear in a modeless list box over the code editor window, in proximity of the editing cursor. In Visual Studio 2008 onwards, it can be made temporarily semi-transparent to see the code obstructed by it. The code editor is used for all supported languages. The code editor in Visual Studio also supports setting bookmarks in code for quick navigation. Other navigational aids include collapsing code blocks and incremental search, in addition to normal text search and regex search. 
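The VSPackage-and-service architecture described above can be illustrated with a minimal managed package built on the Managed Package Framework. The following is only an orientation sketch, not code taken from this article: the class name and GUID are placeholders, and the exact attributes and base classes vary between Visual Studio SDK versions.

using System;
using System.Runtime.InteropServices;
using System.Threading;
using Microsoft.VisualStudio.Shell;
using Microsoft.VisualStudio.Shell.Interop;
using Task = System.Threading.Tasks.Task;

// A minimal managed VSPackage written against the MPF.
// The GUID and class name are placeholders for illustration only.
[PackageRegistration(UseManagedResourcesOnly = true, AllowsBackgroundLoading = true)]
[Guid("00000000-0000-0000-0000-000000000000")]
public sealed class MinimalPackage : AsyncPackage
{
    protected override async Task InitializeAsync(
        CancellationToken cancellationToken,
        IProgress<ServiceProgressData> progress)
    {
        // Async packages load on a background thread; switch to the UI thread
        // before calling most IDE services.
        await JoinableTaskFactory.SwitchToMainThreadAsync(cancellationToken);

        // Query one of the core services mentioned above (SVsShell) through
        // the shared service mechanism that all VSPackages use.
        var shell = await GetServiceAsync(typeof(SVsShell)) as IVsShell;
        if (shell != null)
        {
            // The service can now be used to interact with the IDE shell.
        }
    }
}

In a real extension this class would be accompanied by a VSIX manifest and registration metadata; the point of the sketch is only the general shape of a package and its service lookup.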
The code editor also includes a multi-item clipboard and a task list. The code editor supports code snippets, which are saved templates for repetitive code and can be inserted into code and customized for the project being worked on. A management tool for code snippets is built in as well. These tools are surfaced as floating windows which can be set to automatically hide when unused or docked to the side of the screen. The code editor in Visual Studio also supports code refactoring including parameter reordering, variable and method renaming, interface extraction, and encapsulation of class members inside properties, among others. Debugger Visual Studio includes a debugger that works both as a source-level debugger and as a machine-level debugger. It works with both managed and native code and can be used for debugging applications written in any language supported by Visual Studio. In addition, it can attach to running processes to monitor and debug them. If source code for the running process is available, it displays the code as it is being run. If source code is not available, it can show the disassembly. The Visual Studio debugger can also create memory dumps as well as load them later for debugging. Multi-threaded programs are also supported. The debugger can be configured to be launched when an application running outside the Visual Studio environment crashes. The Visual Studio Debugger allows setting breakpoints (which allow execution to be stopped temporarily at a certain position) and watches (which monitor the values of variables as the execution progresses). Breakpoints can be conditional, meaning they get triggered only when a specified condition is met. Code can be stepped through, i.e., run one line (of source code) at a time. The debugger can either step into a function to debug inside it, or step over it, i.e., run the function body without exposing it for manual inspection. The debugger supports Edit and Continue, i.e., it allows code to be edited as it is being debugged. When debugging, if the mouse pointer hovers over any variable, its current value is displayed in a tooltip ("data tooltips"), where it can also be modified if desired. During coding, the Visual Studio debugger lets certain functions be invoked manually from the Immediate tool window. The parameters to the method are supplied at the Immediate window. Designer Visual Studio includes many visual designers to aid in the development of applications. These tools include: Windows Forms Designer The Windows Forms designer is used to build GUI applications using Windows Forms. Layout can be controlled by housing the controls inside other containers or locking them to the side of the form. Controls that display data (like textbox, list box and grid view) can be bound to data sources like databases or queries. Data-bound controls can be created by dragging items from the Data Sources window onto a design surface. The UI is linked with code using an event-driven programming model. The designer generates either C# or VB.NET code for the application. WPF Designer The WPF designer, codenamed Cider, was introduced with Visual Studio 2008. Like the Windows Forms designer, it supports the drag-and-drop metaphor. It is used to author user interfaces targeting Windows Presentation Foundation. It supports all WPF functionality including data binding and automatic layout management. It generates XAML code for the UI. 
The generated XAML file is compatible with Microsoft Expression Design, the designer-oriented product. The XAML code is linked with code using a code-behind model. Web designer/development Visual Studio also includes a web-site editor and designer that allows web pages to be authored by dragging and dropping widgets. It is used for developing ASP.NET applications and supports HTML, CSS and JavaScript. It uses a code-behind model to link with ASP.NET code. From Visual Studio 2008 onwards, the layout engine used by the web designer is shared with the discontinued Expression Web. There is also ASP.NET MVC support for MVC technology as a separate download and ASP.NET Dynamic Data project available from Microsoft. Class designer The Class Designer is used to author and edit the classes (including its members and their access) using UML modeling. The Class Designer can generate C# and VB.NET code outlines for the classes and methods. It can also generate class diagrams from hand-written classes. Data designer The data designer can be used to graphically edit database schemas, including typed tables, primary and foreign keys and constraints. It can also be used to design queries from the graphical view. Mapping designer From Visual Studio 2008 onwards, the mapping designer is used by LINQ to SQL to design the mapping between database schemas and the classes that encapsulate the data. The new solution from ORM approach, ADO.NET Entity Framework, replaces and improves the old technology. Other tools Properties Editor The Properties Editor tool is used to edit properties in a GUI pane inside Visual Studio. It lists all available properties (both read-only and those which can be set) for all objects including classes, forms, web pages and other items. Object Browser The Object Browser is a namespace and class library browser for Microsoft .NET. It can be used to browse the namespaces (which are arranged hierarchically) in managed assemblies. The hierarchy may or may not reflect the organization in the file system. Solution Explorer In Visual Studio parlance, a solution is a set of code files and other resources that are used to build an application. The files in a solution are arranged hierarchically, which might or might not reflect the organization in the file system. The Solution Explorer is used to manage and browse the files in a solution. Team Explorer Team Explorer is used to integrate the capabilities of Azure DevOps (either Azure DevOps Services or Azure DevOps Server) into the IDE . In addition to version control integration it provides the ability to view and manage individual work items (including user stories, bugs, tasks and other documents). It is included as part of a Visual Studio installation and is also available as a standalone download. Data Explorer Data Explorer is used to manage databases on Microsoft SQL Server instances. It allows creation and alteration of database tables (either by issuing T-SQL commands or by using the Data designer). It can also be used to create queries and stored procedures, with the latter in either T-SQL or in managed code via SQL CLR. Debugging and IntelliSense support is available as well. Server Explorer The Server Explorer tool is used to manage database connections on an accessible computer. It is also used to browse running Windows Services, performance counters, Windows Event Log and message queues and use them as a datasource. 
Dotfuscator Community Edition Visual Studio includes a free 'light' version of Dotfuscator Text Generation Framework Visual Studio includes a full text generation framework called T4 which enables Visual Studio to generate text files from templates either in the IDE or via code. ASP.NET Web Site Administration Tool The ASP.NET Web Site Administration Tool allows for the configuration of ASP.NET websites. Visual Studio Tools for Office Visual Studio Tools for Office is a SDK and an add-in for Visual Studio that includes tools for developing for the Microsoft Office suite. Previously (for Visual Studio .NET 2003 and Visual Studio 2005) it was a separate SKU that supported only Visual C# and Visual Basic languages or was included in the Team Suite. With Visual Studio 2008, it is no longer a separate SKU but is included with Professional and higher editions. A separate runtime is required when deploying VSTO solutions. Testing tools Microsoft Visual Studio can write high-quality code with comprehensive testing tools to aid in the development of applications. These tools include: Unit testing, IntelliTest, Live Unit Testing, Test Explorer, CodeLens test indicators, code coverage analysis, Fakes. Extensibility Visual Studio allows developers to write extensions for Visual Studio to extend its capabilities. These extensions "plug into" Visual Studio and extend its functionality. Extensions come in the form of macros, add-ins, and packages. Macros represent repeatable tasks and actions that developers can record programmatically for saving, replaying, and distributing. Macros, however, cannot implement new commands or create tool windows. They are written using Visual Basic and are not compiled. Add-Ins provide access to the Visual Studio object model and can interact with the IDE tools. Add-Ins can be used to implement new functionality and can add new tool windows. Add-Ins are plugged into the IDE via COM and can be created in any COM-compliant languages. Packages are created using the Visual Studio SDK and provide the highest level of extensibility. They can create designers and other tools, as well as integrate other programming languages. The Visual Studio SDK provides unmanaged APIs as well as a managed API to accomplish these tasks. However, the managed API isn't as comprehensive as the unmanaged one. Extensions are supported in the Standard (and higher) versions of Visual Studio 2005. Express Editions do not support hosting extensions. Visual Studio 2008 introduced the Visual Studio Shell that allows for development of a customized version of the IDE. The Visual Studio Shell defines a set of VSPackages that provide the functionality required in any IDE. On top of that, other packages can be added to customize the installation. The Isolated mode of the shell creates a new AppId where the packages are installed. These are to be started with a different executable. It is aimed for development of custom development environments, either for a specific language or a specific scenario. The Integrated mode installs the packages into the AppId of the Professional/Standard/Team System editions, so that the tools integrate into these editions. The Visual Studio Shell is available as a free download. After the release of Visual Studio 2008, Microsoft created the Visual Studio Gallery. It serves as the central location for posting information about extensions to Visual Studio. 
Community developers as well as commercial developers can upload information about their extensions to Visual Studio .NET 2002 through Visual Studio 2010. Users of the site can rate and review the extensions to help assess the quality of extensions being posted. An extension is stored in a VSIX file. Internally a VSIX file is a ZIP file that contains some XML files, and possibly one or more DLL's. One of the main advantages of these extensions is that they do not require Administrator rights to be installed. RSS feeds to notify users on updates to the site and tagging features are also planned. Limitations Does not support x64 inline assembly Supported products Microsoft Visual C++Microsoft Visual C++ is Microsoft's partial implementation of the C and full implementation C++ compiler and associated languages-services and specific tools for integration with the Visual Studio IDE. It can compile either in C mode or C++ mode. For C++, as of version 15.7 it conforms to C++17. The C implementation of Visual Studio 2015 still doesn't support the full standard; in particular, the complex number header complex.h introduced in C99 is unsupported. Visual C++ supports the C++/CLI specification to write managed code, as well as mixed-mode code (a mix of native and managed code). Microsoft positions Visual C++ for development in native code or in code that contains both native as well as managed components. Visual C++ supports COM as well as the MFC library. For MFC development, it provides a set of wizards for creating and customizing MFC boilerplate code, and creating GUI applications using MFC. Visual C++ can also use the Visual Studio forms designer to design UI graphically. Visual C++ can also be used with the Windows API. It also supports the use of intrinsic functions, which are functions recognized by the compiler itself and not implemented as a library. Intrinsic functions are used to expose the SSE instruction set of modern CPUs. Visual C++ also includes the OpenMP (version 2.0) specification. Microsoft Visual C# Microsoft Visual C#, Microsoft's implementation of the C# language, targets the .NET Framework, along with the language services that lets the Visual Studio IDE support C# projects. While the language services are a part of Visual Studio, the compiler is available separately as a part of the .NET Framework. The Visual C# 2008, 2010 and 2012 compilers support versions 3.0, 4.0 and 5.0 of the C# language specifications, respectively. Visual C# supports the Visual Studio Class designer, Forms designer, and Data designer among others. Microsoft Visual Basic Microsoft Visual Basic is Microsoft's implementation of the VB.NET language and associated tools and language services. It was introduced with Visual Studio .NET (2002). Microsoft has positioned Visual Basic for Rapid Application Development. Visual Basic can be used to author both console applications as well as GUI applications. Like Visual C#, Visual Basic also supports the Visual Studio Class designer, Forms designer, and Data designer among others. Like C#, the VB.NET compiler is also available as a part of .NET Framework, but the language services that let VB.NET projects be developed with Visual Studio, are available as a part of the latter. Microsoft Visual Web Developer Microsoft Visual Web Developer is used to create web sites, web applications and web services using ASP.NET. Either C# or VB.NET languages can be used. Visual Web Developer can use the Visual Studio Web Designer to graphically design web page layouts. 
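The intrinsic functions and OpenMP support mentioned in the Visual C++ description above can be illustrated with a small, generic sketch. The function name, data, and loop structure below are invented for the example; it assumes any C++ compiler that provides SSE intrinsics and OpenMP 2.0 (Visual C++ among them), and OpenMP must be enabled (for example with the /openmp switch in Visual C++) for the pragma to take effect.

#include <immintrin.h>   // SSE intrinsics exposed by the compiler, not a normal library
#include <vector>
#include <cstdio>

// Adds two float arrays element-wise, four floats per SSE instruction.
// The outer loop is split across threads by the OpenMP pragma.
void add_arrays(const float* a, const float* b, float* out, int n) {
    #pragma omp parallel for
    for (int i = 0; i <= n - 4; i += 4) {
        __m128 va = _mm_loadu_ps(a + i);            // load 4 unaligned floats
        __m128 vb = _mm_loadu_ps(b + i);
        _mm_storeu_ps(out + i, _mm_add_ps(va, vb)); // vector add and store
    }
    for (int i = n - (n % 4); i < n; ++i)           // scalar tail for the remainder
        out[i] = a[i] + b[i];
}

int main() {
    std::vector<float> a(10, 1.0f), b(10, 2.0f), out(10);
    add_arrays(a.data(), b.data(), out.data(), static_cast<int>(a.size()));
    std::printf("%f\n", out[9]);                    // prints 3.000000
    return 0;
}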
Azure DevOpsAzure DevOps is intended for collaborative software development projects and provides version control, work planning and tracking, data collection, and reporting. It also includes the Team Explorer which is integrated inside Visual Studio. On September 10, 2018, Microsoft announced a rebranding of Visual Studio Team Services (VSTS) to Azure DevOps Services and Team Foundation Server (TFS) to Azure DevOps Server. Previous products Visual FoxPro Visual FoxPro is a data-centric object-oriented and procedural programming language produced by Microsoft. It derives from FoxPro (originally known as FoxBASE) which was developed by Fox Software beginning in 1984. Visual FoxPro is tightly integrated with its own relational database engine, which extends FoxPro's xBase capabilities to support SQL queries and data manipulation. Visual FoxPro is a full-featured, dynamic programming language that does not require the use of an additional general-purpose programming environment. In 2007, Visual FoxPro was discontinued after version 9 Service Pack 2. It was supported until 2015. Visual SourceSafe Microsoft Visual SourceSafe is a source control software package oriented towards small software-development projects. The SourceSafe database is a multi-user, multi-process file-system database, using the Windows file system database primitives to provide locking and sharing support. All versions are multi-user, using SMB (file server) networking. However, with Visual SourceSafe 2005, other client–server modes were added, Lan Booster and VSS Internet (which used HTTP/HTTPS). Visual SourceSafe 6.0 was available as a stand-alone product and was included with Visual Studio 6.0, and other products such as Office Developer Edition. Visual SourceSafe 2005 was available as a stand-alone product and included with the 2005 Team Suite. Azure DevOps has superseded VSS as Microsoft's recommended platform for source control. Microsoft Visual J++/Microsoft Visual J# Microsoft Visual J++ was Microsoft's implementation of the Java language (with Microsoft-specific extensions) and associated language services. It was discontinued as a result of litigation from Sun Microsystems, and the technology was recycled into Visual J#, Microsoft's Java compiler for .NET Framework. J# was available with Visual Studio 2005 (supported until 2015) but was discontinued in Visual Studio 2008. Visual InterDev Visual InterDev was used to create web applications using Microsoft Active Server Pages (ASP) technologies. It supports code completion and includes database server management tools. It has been replaced with Microsoft Visual Web Developer. Editions Microsoft Visual Studio is available in the following editions or SKUs: Community The Community edition was announced on November 12, 2014, as a new free version, with similar functionality to Visual Studio Professional. Prior to this date, the only free editions of Visual Studio were the feature-limited Express variants. Unlike the Express variants, Visual Studio Community supports multiple languages, and provides support for extensions. Individual developers have no restrictions on their use of the Community edition. The following uses also allow unlimited usage: contributing to Open Source projects, academic research, in a classroom learning environment and for developing and testing device drivers for the Windows operating system. 
All other use by an organization depends on its classification as an Enterprise (more than 250 employees or more than 1 million USD in annual revenue, per Microsoft). Non-Enterprises may use up to 5 copies without restriction, user number 6 and higher require a commercial license; Enterprise organizations require a commercial license for use outside of the noted exceptions. Visual Studio Community is oriented towards individual developers and small teams. Professional As of Visual Studio 2010, the Professional edition is the entry level commercial edition of Visual Studio. (Previously, a more feature restricted Standard edition was available.) It provides an IDE for all supported development languages. MSDN support is available as MSDN Essentials or the full MSDN library depending on licensing. It supports XML and XSLT editing, and can create deployment packages that only use ClickOnce and MSI. It includes tools like Server Explorer and integration with Microsoft SQL Server also. Windows Mobile development support was included in Visual Studio 2005 Standard, however, with Visual Studio 2008, it is only available in Professional and higher editions. Windows Phone 7 development support was added to all editions in Visual Studio 2010. Development for Windows Mobile is no longer supported in Visual Studio 2010. It is superseded by Windows Phone 7. Enterprise In addition to the features provided by the Professional edition, the Enterprise edition provides a new set of software development, database development, collaboration, metrics, architecture, testing and reporting tools. History The first version of Visual Studio was Visual Studio 97. Before that, Visual Basic, Visual C++, Visual FoxPro and Visual SourceSafe were sold as separate products. 97 Microsoft first released Visual Studio (codenamed Boston, for the city of the same name, thus beginning the VS codenames related to places) in 1997, bundling many of its programming tools together for the first time. Visual Studio 97 came in two editions: Visual Studio Professional and Visual Studio Enterprise, the professional edition has three CDs, and the enterprise four CDs. It included Visual J++ 1.1 for Java programming and introduced Visual InterDev for creating dynamically generated web sites using Active Server Pages. There was a single companion CD that contained the Microsoft Developer Network library. Visual Studio 97 is only compatible with Windows 95 and Windows NT 4.0. It is the last version to support Windows NT 4.0 before SP3. Visual Studio 97 was an attempt at using the same development environment for multiple languages. Visual J++, InterDev, and the MSDN Library had all been using the same 'environment', called Developer Studio. Visual Studio was also sold as a bundle with the separate IDEs used for Visual C++, Visual Basic and Visual FoxPro. 6.0 (1998) The next version, version 6.0 (codenamed Aspen, after the ski resort in Colorado), was released in June 1998 and is the last version to support the Windows 9x platform, as well as Windows NT 4.0 before SP6, but after SP2. Each version of each language in part also settled to v6.0, including Visual J++ which was prior v1.1, and Visual InterDev at the first release. The v6 edition of Microsoft was the core environment for the next four releases to provide programmers with an integrated look-alike platform. This led Microsoft to transition the development on the platform independent .NET Framework. 
Visual Studio 6.0 was the last version to include Visual J++, which Microsoft removed as part of a settlement with Sun Microsystems that required Microsoft Internet Explorer not to provide support for the Java Virtual Machine. Visual Studio 6.0 came in two editions: Professional and Enterprise. The Enterprise edition contained extra features not found in Professional edition, including: Application Performance Explorer Automation Manager Microsoft Visual Modeler RemAuto Connection Manager Visual Studio Analyzer Visual Studio was also sold as a bundle with the separate IDEs used for Visual C++, Visual Basic and Visual FoxPro. .NET 2002 Microsoft released Visual Studio .NET (VS.NET), codenamed Rainier (for Washington's Mount Rainier), in February 2002 (the beta version was released via Microsoft Developer Network in 2001). The biggest change was the introduction of a managed code development environment using the .NET Framework. Programs developed using .NET are not compiled to machine language (like C++ is, for example) but instead to a format called Microsoft Intermediate Language (MSIL) or Common Intermediate Language (CIL). When a CIL application executes, it is compiled while being executed into the appropriate machine language for the platform it is being executed on, thereby making code portable across several platforms. Programs compiled into CIL can be executed only on platforms which have an implementation of Common Language Infrastructure. It is possible to run CIL programs in Linux or Mac OS X using non-Microsoft .NET implementations like Mono and DotGNU. This was the first version of Visual Studio to require an NT-based Windows platform. The installer enforces this requirement, and is the last version to support Windows NT 4.0 SP6 or later and Windows 2000 before SP3. Visual Studio .NET 2002 shipped in four editions: Academic, Professional, Enterprise Developer, and Enterprise Architect. Microsoft introduced C# (C-sharp), a new programming language, that targets .NET. It also introduced the successor to Visual J++ called Visual J#. Visual J# programs use Java's language-syntax. However, unlike Visual J++ programs, Visual J# programs can only target the .NET Framework, not the Java Virtual Machine that all other Java tools target. Visual Basic changed drastically to fit the new framework, and the new version was called Visual Basic .NET. Microsoft also added extensions to C++, called Managed Extensions for C++, so .NET programs could be created in C++. Visual Studio .NET can produce applications targeting Windows (using the Windows Forms part of the .NET Framework), the Web (using ASP.NET and Web Services) and, with an add-in, portable devices (using the .NET Compact Framework). The internal version number of Visual Studio .NET 2002 is version 7.0. Microsoft released Service Pack 1 for Visual Studio .NET 2002 in March 2005. .NET 2003 In April 2003, Microsoft introduced a minor upgrade to Visual Studio .NET called Visual Studio .NET 2003, codenamed Everett (for the city of the same name). It includes an upgrade to the .NET Framework, version 1.1, and is the first release to support developing programs for mobile devices, using ASP.NET or the .NET Compact Framework. The Visual C++ compiler's standards-compliance improved, especially in the area of partial template specialization. Visual C++ Toolkit 2003 is a version of the same C++ compiler shipped with Visual Studio .NET 2003 without the IDE that Microsoft made freely available. 
it is no longer available and the Express Editions have superseded it. Visual Studio .NET 2003 also supports Managed C++, which is the predecessor of C++/CLI. The internal version number of Visual Studio .NET 2003 is version 7.1 while the file format version is 8.0. Visual Studio .NET 2003 drops support for Windows NT 4.0, and is the last version to support Windows 2000 SP3 and Windows XP before SP2 and the only version to support Windows Server 2003 before SP1. Visual Studio .NET 2003 shipped in five editions: Academic, Standard, Professional, Enterprise Developer, and Enterprise Architect. The Visual Studio .NET 2003 Enterprise Architect edition includes an implementation of Microsoft Visio 2002's modeling technologies, including tools for creating Unified Modeling Language-based visual representations of an application's architecture, and an object-role modeling (ORM) and logical database-modeling solution. "Enterprise Templates" were also introduced, to help larger development teams standardize coding styles and enforce policies around component usage and property settings. Service Pack 1 was released September 13, 2006. 2005 Visual Studio 2005, codenamed Whidbey (a reference to Whidbey Island in Puget Sound region), was released online in October 2005 and to retail stores a few weeks later. Microsoft removed the ".NET" moniker from Visual Studio 2005 (as well as every other product with .NET in its name), but it still primarily targets the .NET Framework, which was upgraded to version 2.0. It requires Windows 2000 with Service Pack 4, Windows XP with at least Service Pack 2 or Windows Server 2003 with at least Service Pack 1. It is the last version to run on Windows 2000 and also the last version able to target Windows 98 and Windows Me for C++ applications. Visual Studio 2005's internal version number is 8.0 while the file format version is 9.0. Microsoft released Service Pack 1 for Visual Studio 2005 on December 14, 2006. An additional update for Service Pack 1 that offers Windows Vista compatibility was made available on June 3, 2007. Visual Studio 2005 was upgraded to support all the new features introduced in .NET Framework 2.0, including generics and ASP.NET 2.0. The IntelliSense feature in Visual Studio was upgraded for generics and new project types were added to support ASP.NET web services. Visual Studio 2005 additionally introduces support for a new task-based build platform called Microsoft Build Engine (MSBuild) which employs a new XML-based project file format. Visual Studio 2005 also includes a local web server, separate from IIS, that can host ASP.NET applications during development and testing. It also supports all SQL Server 2005 databases. Database designers were upgraded to support the ADO.NET 2.0, which is included with .NET Framework 2.0. C++ also got a similar upgrade with the addition of C++/CLI which is slated to replace the use of Managed C++. Other new features of Visual Studio 2005 include the "Deployment Designer" which allows application designs to be validated before deployments, an improved environment for web publishing when combined with ASP.NET 2.0 and load testing to see application performance under various sorts of user loads. Starting with the 2005 edition, Visual Studio also added extensive 64-bit support. While the host development environment itself is only available as a 32-bit application, Visual C++ 2005 supports compiling for x86-64 (AMD64 and Intel 64) as well as IA-64 (Itanium). 
The Platform SDK included 64-bit compilers and 64-bit versions of the libraries. Microsoft also announced Visual Studio Tools for Applications as the successor to Visual Basic for Applications (VBA) and VSA (Visual Studio for Applications). VSTA 1.0 was released to manufacturing along with Office 2007. It is included with Office 2007 and is also part of the Visual Studio 2005 SDK. VSTA consists of a customized IDE, based on the Visual Studio 2005 IDE, and a runtime that can be embedded in applications to expose its features via the .NET object model. Office 2007 applications continue to integrate with VBA, except for InfoPath 2007 which integrates with VSTA. Version 2.0 of VSTA (based on Visual Studio 2008) was released in April 2008. It is significantly different from the first version, including features such as dynamic programming and support for WPF, WCF, WF, LINQ, and .NET 3.5 Framework. 2008 Visual Studio 2008, and Visual Studio Team System 2008 codenamed Orcas (a reference to Orcas Island, also an island in Puget Sound region, like Whidbey for the previous 2005 release), were released to MSDN subscribers on November 19, 2007, alongside .NET Framework 3.5. The source code for the Visual Studio 2008 IDE is available under a shared source license to some of Microsoft's partners and ISVs. Microsoft released Service Pack 1 for Visual Studio 2008 on August 11, 2008. The internal version number of Visual Studio 2008 is version 9.0 while the file format version is 10.0. Visual Studio 2008 requires Windows XP Service Pack 2 plus Windows Installer 3.1, Windows Server 2003 Service Pack 1 or later. It is the last version available for Windows XP SP2, Windows Server 2003 SP1, as well as the only version to support Windows Vista before SP2 and Windows Server 2008 before SP2 and the last version to support targeting Windows 2000 for C++ applications. Visual Studio 2008 is focused on development of Windows Vista, 2007 Office system, and Web applications. For visual design, a new Windows Presentation Foundation visual designer and a new HTML/CSS editor influenced by Microsoft Expression Web are included. J# is not included. Visual Studio 2008 requires .NET 3.5 Framework and by default configures compiled assemblies to run on .NET Framework 3.5, but it also supports multi-targeting which lets the developers choose which version of the .NET Framework (out of 2.0, 3.0, 3.5, Silverlight CoreCLR or .NET Compact Framework) the assembly runs on. Visual Studio 2008 also includes new code analysis tools, including the new Code Metrics tool (only in Team Edition and Team Suite Edition). For Visual C++, Visual Studio adds a new version of Microsoft Foundation Classes (MFC 9.0) that adds support for the visual styles and UI controls introduced with Windows Vista. For native and managed code interoperability, Visual C++ introduces the STL/CLR, which is a port of the C++ Standard Template Library (STL) containers and algorithms to managed code. STL/CLR defines STL-like containers, iterators and algorithms that work on C++/CLI managed objects. Visual Studio 2008 features include an XAML-based designer (codenamed Cider), workflow designer, LINQ to SQL designer (for defining the type mappings and object encapsulation for SQL Server data), XSLT debugger, JavaScript Intellisense support, JavaScript Debugging support, support for UAC manifests, a concurrent build system, among others. It ships with an enhanced set of UI widgets, both for Windows Forms and WPF. 
It also includes a multithreaded build engine (MSBuild) to compile multiple source files (and build the executable file) in a project across multiple threads simultaneously. It also includes support for compiling icon resources in PNG format, introduced in Windows Vista. An updated XML Schema designer was released separately some time after the release of Visual Studio 2008. Visual Studio Debugger includes features targeting easier debugging of multi-threaded applications. In debugging mode, in the Threads window, which lists all the threads, hovering over a thread displays the stack trace of that thread in tooltips. The threads can directly be named and flagged for easier identification from that window itself. In addition, in the code window, along with indicating the location of the currently executing instruction in the current thread, the currently executing instructions in other threads are also pointed out. The Visual Studio debugger supports integrated debugging of the .NET 3.5 Framework Base Class Library (BCL) which can dynamically download the BCL source code and debug symbols and allow stepping into the BCL source during debugging. a limited subset of the BCL source is available, with more library support planned for later. 2010 On April 12, 2010, Microsoft released Visual Studio 2010, codenamed Dev10, and .NET Framework 4. It is available for Windows Server 2003 SP2, Windows XP SP3, Windows Vista SP2 and Windows Server 2008 SP2 and has support for Windows Server 2008 R2, as well as for Windows 7. It is the last version to support Windows XP SP3, Windows Server 2003 SP2, Windows Server 2003 R2, Windows Vista SP2 and Windows Server 2008 SP2, and the only version to support Windows 7 before SP1 and Windows Server 2008 R2 before SP1. The Visual Studio 2010 IDE was redesigned which, according to Microsoft, clears the UI organization and "reduces clutter and complexity." The new IDE better supports multiple document windows and floating tool windows, while offering better multi-monitor support. The IDE shell has been rewritten using the Windows Presentation Foundation (WPF), whereas the internals have been redesigned using Managed Extensibility Framework (MEF) that offers more extensibility points than previous versions of the IDE that enabled add-ins to modify the behavior of the IDE. The new multi-paradigm ML-variant F# forms part of Visual Studio 2010. Visual Studio 2010 comes with .NET Framework 4 and supports developing applications targeting Windows 7. It supports IBM Db2 and Oracle databases, in addition to Microsoft SQL Server. It has integrated support for developing Microsoft Silverlight applications, including an interactive designer. Visual Studio 2010 offers several tools to make parallel programming simpler: in addition to the Parallel Extensions for the .NET Framework and the Parallel Patterns Library for native code, Visual Studio 2010 includes tools for debugging parallel applications. The new tools allow the visualization of parallel Tasks and their runtime stacks. Tools for profiling parallel applications can be used for visualization of thread wait-times and thread migrations across processor cores. Intel and Microsoft have jointly pledged support for a new Concurrency Runtime in Visual Studio 2010 and Intel has launched parallelism support in Parallel Studio as an add-on for Visual Studio. The Visual Studio 2010 code editor now highlights references; whenever a symbol is selected, all other usages of the symbol are highlighted. 
It also offers a Quick Search feature to incrementally search across all symbols in C++, C# and VB.NET projects. Quick Search supports substring matches and camelCase searches. The Call Hierarchy feature allows the developer to see all the methods that are called from a current method as well as the methods that call the current one. IntelliSense in Visual Studio supports a consume-first mode which developers can opt into. In this mode, IntelliSense does not auto-complete identifiers; this allows the developer to use undefined identifiers (like variable or method names) and define those later. Visual Studio 2010 can also help in this by automatically defining them, if it can infer their types from usage. Current versions of Visual Studio have a known bug which makes IntelliSense unusable for projects using pure C (not C++). Visual Studio 2010 features a new Help System replacing the MSDN Library viewer. The Help System is no longer based on Microsoft Help 2 and does not use Microsoft Document Explorer. Dynamic help containing links to help items based on what the developer was doing at the time was removed in the final release, but can be added back using a download from Microsoft. Visual Studio 2010 no longer supports development for Windows Mobile prior to Windows Phone 7. Visual Studio 2010 Service Pack 1 was released in March 2011. Ultimate 2010 Visual Studio Ultimate 2010 replaces Visual Studio 2008 Team Suite. It includes new modeling tools, such as the Architecture Explorer, which graphically displays projects and classes and the relationships between them. It supports UML activity diagram, component diagram, (logical) class diagram, sequence diagram, and use case diagram. Visual Studio Ultimate 2010 also includes Test Impact Analysis which provides hints on which test cases are impacted by modifications to the source code, without actually running the test cases. This speeds up testing by avoiding running unnecessary test cases. Visual Studio Ultimate 2010 also includes a historical debugger for managed code called IntelliTrace. Unlike a traditional debugger that records only the currently active stack, IntelliTrace records all events, such as prior function calls, method parameters, events and exceptions. This allows the code execution to be rewound in case a breakpoint was not set where the error occurred. Debugging with IntelliTrace causes the application to run more slowly than debugging without it, and uses more memory as additional data needs to be recorded. Microsoft allows configuration of how much data should be recorded, in effect, allowing developers to balance the speed of execution and resource usage. The Lab Management component of Visual Studio Ultimate 2010 uses virtualization to create a similar execution environment for testers and developers. The virtual machines are tagged with checkpoints which can later be investigated for issues, as well as to reproduce the issue. Visual Studio Ultimate 2010 also includes the capability to record test runs that capture the specific state of the operating environment as well as the precise steps used to run the test. These steps can then be played back to reproduce issues. 2012 The final build of Visual Studio 2012 was announced on August 1, 2012, and the official launch event was held on September 12, 2012. Unlike prior versions, Visual Studio 2012 cannot record and play macros and the macro editor has been removed. Also unlike prior versions, Visual Studio 2012 require Windows 7 SP1 and Windows Server 2008 R2 SP1. 
New features include support for WinRT and C++/CX (Component Extensions) and C++ AMP (GPGPU programming) Semantic Colorization. Cross-compiling to ARM32 is supported from an x86 command prompt. On September 16, 2011, a complete 'Developer Preview' of Visual Studio 11 was published on Microsoft's website. Visual Studio 11 Developer Preview requires Windows 7, Windows Server 2008 R2, Windows 8, or later operating systems. Versions of Microsoft Foundation Class Library (MFC) and C runtime (CRT) included with this release cannot produce software that is compatible with Windows XP or Windows Server 2003 except by using native multi-targeting and foregoing the newest libraries, compilers, and headers. However, on June 15, 2012, a blog post on the VC++ Team blog announced that based on customer feedback, Microsoft would re-introduce native support for Windows XP targets (though not for XP as a development platform) in a version of Visual C++ to be released later in the fall of 2012. "Visual Studio 2012 Update 1" (Visual Studio 2012.1) was released in November 2012. This update added support for Windows XP targets and also added other new tools and features (e.g. improved diagnostics and testing support for Windows Store apps). On August 24, 2011, a blog post by Sumit Kumar, a Program Manager on the Visual C++ team, listed some of the features of the upcoming version of the Visual Studio C++ IDE: Semantic colorization: Improved syntax coloring, various user-defined or default colors for C++ syntax such as macros, enumerations, typenames and functions. Reference highlighting: Selection of a symbol highlights all of the references to that symbol within scope. New Solution Explorer: The new Solution Explorer allows for visualization of class and file hierarchies within a solution/project. It can search for calls to functions and uses of classes. Automatic display of IntelliSense list: IntelliSense is automatically displayed whilst typing code, as opposed to previous versions where it had to be explicitly invoked through use of certain operators (i.e. the scope operator (::)) or shortcut keys (Ctrl-Space or Ctrl-J). Member list filtering: IntelliSense uses fuzzy logic to determine which functions/variables/types to display in the list. Code snippets: Code snippets are included in IntelliSense to automatically generate relevant code based on the user's parameters, custom code snippets can be created. The source code of Visual Studio 2012 consists of approximately 50 million lines of code. Interface backlash During Visual Studio 11 beta, Microsoft eliminated the use of color within tools except in cases where color is used for notification or status change purposes. However, the use of color was returned after feedback demanding more contrast, differentiation, clarity and "energy" in the user interface. In the Visual Studio 2012 release candidate (RC), a major change to the interface is the use of all-caps menu bar, as part of the campaign to keep Visual Studio consistent with the direction of other Microsoft user interfaces, and to provide added structure to the top menu bar area. The redesign was criticized for being hard to read, and going against the trends started by developers to use CamelCase to make words stand out better. Some speculated that the root cause of the redesign was to incorporate the simplistic look and feel of Metro programs. However, there exists a Windows Registry option to allow users to disable the all-caps interface. 
2013 The preview for Visual Studio 2013 was announced at the Build 2013 conference and made available on June 26, 2013. The Visual Studio 2013 RC (Release Candidate) was made available to developers on MSDN on September 9, 2013. The final release of Visual Studio 2013 became available for download on October 17, 2013, along with .NET 4.5.1. Visual Studio 2013 officially launched on November 13, 2013, at a virtual launch event keynoted by S. Somasegar and hosted on . "Visual Studio 2013 Update 1" (Visual Studio 2013.1) was released on January 20, 2014. Visual Studio 2013.1 is a targeted update that addresses some key areas of customer feedback. "Visual Studio 2013 Update 2" (Visual Studio 2013.2) was released on May 12, 2014. Visual Studio 2013 Update 3 was released on August 4, 2014. With this update, Visual Studio provides an option to disable the all-caps menus, which was introduced in VS2012. "Visual Studio 2013 Update 4" (Visual Studio 2013.4) was released on November 12, 2014. "Visual Studio 2013 Update 5" (Visual Studio 2013.5) was released on July 20, 2015. Visual Studio 2013 also adds support for Windows 8.1 and Windows Server 2012 R2. 2015 Initially referred to as Visual Studio "14", the first Community Technology Preview (CTP) was released on June 3, 2014 and the Release Candidate was released on April 29, 2015; Visual Studio 2015 was officially announced as the final name on November 12, 2014. Visual Studio 2015 RTM was released on July 20, 2015. Visual Studio 2015 Update 1 was released on November 30, 2015. Visual Studio 2015 Update 2 was released on March 30, 2016. Visual Studio 2015 Update 3 was released on June 27, 2016. Visual Studio 2015 is the first version to support Windows 10 and the last version to support Windows 8, Windows Server 2008 R2 SP1 and Windows Server 2012; it's also the last version to support targeting Windows XP SP3, Windows Server 2003 SP2, Windows Vista SP2 and Windows Server 2008 SP2 for C++ applications. 2017 Initially referred to as Visual Studio "15", it was released on March 7, 2017. The first Preview was released on March 30, 2016. Visual Studio "15" Preview 2 was released May 10, 2016. Visual Studio "15" Preview 3 was released on July 7, 2016. Visual Studio "15" Preview 4 was released on August 22, 2016. Visual Studio "15" Preview 5 was released on October 5, 2016. On November 14, 2016, for a brief period of time, Microsoft released a blog post revealing Visual Studio 2017 product name version alongside upcoming features. On November 16, 2016, "Visual Studio 2017" was announced as the final name, and Visual Studio 2017 RC was released. On March 7, 2017, Visual Studio 2017 was released for general availability. It requires Windows 7 SP1, Windows 8.1 with KB2919355 or Windows Server 2012 R2 with KB2919355 at the minimum, and also added support for Windows Server 2016. On March 14, 2017, first fix was released for Visual Studio 2017 due to failures during installation or opening solutions in the first release. On April 5, 2017, Visual Studio 2017 15.1 was released and added support for targeting the .NET Framework 4.7. On May 10, 2017, Visual Studio 2017 15.2 was released and added a new workload, "Data Science and Analytical Applications Workload". An update to fix the dark color theme was released on May 12, 2017. On August 14, 2017, Visual Studio 2017 15.3 was released and added support for targeting .NET Core 2.0. An update (15.3.1) was released four days later to address a Git vulnerability with submodules (CVE 2017-1000117). 
On October 10, 2017, Visual Studio 15.4 was released. On December 4, 2017, Visual Studio 15.5 was released. This update contained major performance improvements, new features, as well as bug fixes. On March 6, 2018, Visual Studio 15.6 was released. It includes updates to unit testing and performance. On May 7, 2018, Visual Studio 15.7 was released. It included updates across the board including, the installer, editor, debugger among others. Almost all point releases, the latest of which is 15.7.6 released August 2, 2018, include security updates. With the release of Visual Studio 2017 15.7, Visual C++ now conforms to the C++17 standard. On September 20, 2018, Visual Studio 15.8.5 was released. Tools for Xamarin now supports Xcode 10. On November 15, 2018, Visual Studio 2017 15.9 was released and support for targeting ARM64 for Windows 10 was provided. Previously only ARM32 was supported as a target. Visual Studio 2017 offers new features like support for EditorConfig (a coding style enforcement framework), NGen support, .NET Core and Docker toolset (Preview), and Xamarin 4.3 (Preview). It also has a XAML Editor, improved IntelliSense, live unit testing, debugging enhancement and better IDE experience and productivity. Additionally, it is the last version of Visual Studio to support maintaining Windows 10 Mobile projects. 2019 On June 6, 2018, Microsoft announced Visual Studio 2019 (version 16). On December 4, 2018, Visual Studio 2019 Preview 1 was released. On January 24, 2019, Visual Studio 2019 Preview 2 was released. On February 13, 2019, Visual Studio 2019 Preview 3 was released. On February 27, 2019, Visual Studio 2019 RC was released while setting April 2, 2019 for its general availability. It is generally available (GA) since April 2, 2019 and available for download. On September 23, 2019, Visual Studio 2019 16.3 was released and added support for targeting the .NET Framework 4.8. Visual Studio 2019 is the first version of Visual Studio to support Windows 11, and also requires Windows 7 SP1, Windows 8.1 with KB2919355, Windows Server 2012 R2 with KB2919355 or Windows 10, version 1703 at the minimum. It is the last 32-bit version of Visual Studio as later versions are only 64-bit. It is also the last version to support Windows 7 SP1, Windows 8.1 and Windows Server 2012 R2, with later versions requiring at least Windows 10 and Windows Server 2016. 2022 On April 19, 2021, Microsoft announced Visual Studio 2022 (version 17). It is the first version to run as a 64-bit process allowing Visual Studio main process to access more than 4 GB of memory, preventing out-of-memory exceptions which could occur with large projects. On June 17, 2021, Visual Studio 2022 Preview 1 was released. On July 14, 2021, Visual Studio 2022 Preview 2 was released. On August 10, 2021, Visual Studio 2022 Preview 3 was released. On September 14, 2021, Visual Studio 2022 Preview 4 was released. On October 12, 2021, Visual Studio 2022 RC and Preview 5 was released while setting November 8, 2021 for its general availability. It is generally available (GA) since November 8, 2021 and available for download. It is available only for Windows 10 and Windows Server 2016 or later, and also supports Windows Server 2022. On August 9, 2022, Visual Studio 17.3 was released and added support for targeting the .NET Framework 4.8.1. On November 8, 2022, Visual Studio 17.4 was released and provided an ARM64 native version of the compiler itself, not just the ability to target ARM from x86/x64 (real or emulated on ARM64). 
Related products Azure DevOps Services On November 13, 2013, Microsoft announced the release of a software as a service offering of Visual Studio on Microsoft Azure platform; at the time, Microsoft called it Visual Studio Online. Previously announced as Team Foundation Services, it expanded over the on-premises Team Foundation Server (TFS; now known as Azure DevOps Server) by making it available on the Internet and implementing a rolling release model. Customers could use Azure portal to subscribe to Visual Studio Online. Subscribers receive a hosted Git-compatible version control system, a load-testing service, a telemetry service and an in-browser code editor codenamed "Monaco". During the developer event on November 18, 2015, Microsoft announced that the service was rebranded as "Visual Studio Team Services (VSTS)". On September 10, 2018, Microsoft announced another rebranding of the service, this time to "Azure DevOps Services". Microsoft offers Stakeholder, Basic, and Visual Studio subscriber access levels for Azure DevOps Services. The Basic plan is free of charge for up to five users. Users with a Visual Studio subscription can be added to a plan with no additional charge. Visual Studio Application Lifecycle Management Visual Studio Application Lifecycle Management (ALM) is a collection of integrated software development tools developed by Microsoft. These tools currently consist of the IDE (Visual Studio 2015 Community and greater editions), server (Team Foundation Server), and cloud services (Visual Studio Team Services). Visual Studio ALM supports team-based development and collaboration, Agile project management, DevOps, source control, packaging, continuous development, automated testing, release management, continuous delivery, and reporting tools for apps and services. In Visual Studio 2005 and Visual Studio 2008, the brand was known as Microsoft Visual Studio Team System (VSTS). In October 2009, the Team System brand was renamed Visual Studio ALM with the Visual Studio 2010 (codenamed 'Rosario') release. Visual Studio Team Services debuted as Visual Studio Online in 2013 and was renamed in 2015. Visual Studio Lab Management Visual Studio Lab Management is a software development tool developed by Microsoft for software testers to create and manage virtual environments. Lab Management extends the existing Visual Studio Application Lifecycle Management platform to enable an integrated Hyper-V based test lab. Since Visual Studio 2012, it is already shipped as a part of it; and, can be set up after Azure DevOps and SCVMM are integrated. Visual Studio LightSwitch Microsoft Visual Studio LightSwitch is an extension and framework specifically tailored for creating line-of-business applications built on existing .NET technologies and Microsoft platforms. The applications produced are architecturally 3-tier: the user interface runs on either Microsoft Silverlight or HTML 5 client, or as a SharePoint 2013 app; the logic and data-access tier is built on WCF Data Services and exposed as an OData feed hosted in ASP.NET; and the primary data storage supports Microsoft SQL Server Express, Microsoft SQL Server and Microsoft SQL Azure. LightSwitch also supports other data sources including Microsoft SharePoint, OData and WCF RIA Services. LightSwitch includes graphical designers for designing entities and entity relationships, entity queries, and UI screens. Business logic may be written in either Visual Basic or Visual C#. 
LightSwitch is included with Visual Studio 2012 Professional and higher. Visual Studio 2015 is the last release of Visual Studio that includes the LightSwitch tooling. The user interface layer is now an optional component when deploying a LightSwitch solution, allowing a service-only deployment. The first version of Visual Studio LightSwitch, released July 26, 2011, had many differences from the current release of LightSwitch. Notably the tool was purchased and installed as a stand-alone product. If Visual Studio 2010 Professional or higher was already installed on the machine, LightSwitch would integrate into that. The second major difference was the middle tier was built and exposed using WCF RIA Services. As of October 14, 2016, Microsoft no longer recommends LightSwitch for new application development. Visual Studio Code Visual Studio Code is a freeware source code editor, along with other features, for Linux, Mac OS, and Windows. It also includes support for debugging and embedded Git Control. It is built on open-source, and on April 14, 2016, version 1.0 was released. Visual Studio Team System Profiler Visual Studio Team System Profiler (VSTS Profiler) is a tool to analyze the performance of .NET projects that analyzes the space and time complexity of the program. It analyzes the code and prepares a report that includes CPU sampling, instrumentation, .NET memory allocation and resource contention.
Technology
Development
null
3265197
https://en.wikipedia.org/wiki/Plasma%20recombination
Plasma recombination
Plasma recombination is a process by which positive ions of a plasma capture a free (energetic) electron and combine with electrons or negative ions to form new neutral atoms (gas). The process of recombination can be described as the reverse of ionization, whereby conditions allow the plasma to revert to a gas. Recombination is an exothermic process, meaning that the plasma releases some of its internal energy, usually in the form of heat. Except for plasma composed of pure hydrogen (or its isotopes), there may also be multiply charged ions. Therefore, a single electron capture results in a decrease of the ion charge, but not necessarily in a neutral atom or molecule. Recombination usually takes place in the whole volume of a plasma (volume recombination), although in some cases it is confined to a particular region of the volume. Each kind of reaction is called a recombining mode, and their individual rates are strongly affected by the properties of the plasma, such as its energy (heat), the density of each species, and the pressure and temperature of the surrounding environment.
Examples
An everyday example of rapid plasma recombination occurs when a fluorescent lamp is switched off. The low-density plasma in the lamp (which generates the light by bombardment of the fluorescent coating on the inside of the glass wall) recombines in a fraction of a second after the plasma-generating electric field is removed by switching off the electric power source. Hydrogen recombination modes are of vital importance in the development of divertor regions for tokamak reactors. In fact, they will provide a good way of extracting the energy produced in the core of the plasma. At present, it is believed that the most likely plasma losses observed in the recombining region are due to two different modes: electron–ion recombination (EIR) and molecular activated recombination (MAR).
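For illustration only, two textbook volume-recombination channels for a singly charged ion A+ can be written schematically (a generic sketch; these are not specifically the EIR and MAR modes named above):

$$\mathrm{A^{+} + e^{-} \rightarrow A + h\nu} \qquad \text{(radiative recombination)}$$
$$\mathrm{A^{+} + e^{-} + e^{-} \rightarrow A + e^{-}} \qquad \text{(three-body recombination)}$$

In both cases the released binding energy is carried away, by the emitted photon or by the surviving electron, consistent with the exothermic character of recombination described above.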
Physical sciences
Phase transitions
Physics
3268249
https://en.wikipedia.org/wiki/Quicksort
Quicksort
Quicksort is an efficient, general-purpose sorting algorithm. Quicksort was developed by British computer scientist Tony Hoare in 1959 and published in 1961. It is still a commonly used algorithm for sorting. Overall, it is slightly faster than merge sort and heapsort for randomized data, particularly on larger distributions. Quicksort is a divide-and-conquer algorithm. It works by selecting a 'pivot' element from the array and partitioning the other elements into two sub-arrays, according to whether they are less than or greater than the pivot. For this reason, it is sometimes called partition-exchange sort. The sub-arrays are then sorted recursively. This can be done in-place, requiring only small additional amounts of memory to perform the sorting. Quicksort is a comparison sort, meaning that it can sort items of any type for which a "less-than" relation (formally, a total order) is defined. It is a comparison-based sort, since elements a and b are only swapped once their relative order has been established in the transitive closure of prior comparison outcomes. Most implementations of quicksort are not stable, meaning that the relative order of equal sort items is not preserved. Mathematical analysis of quicksort shows that, on average, the algorithm takes O(n log n) comparisons to sort n items. In the worst case, it makes O(n²) comparisons.
History
The quicksort algorithm was developed in 1959 by Tony Hoare while he was a visiting student at Moscow State University. At that time, Hoare was working on a machine translation project for the National Physical Laboratory. As a part of the translation process, he needed to sort the words in Russian sentences before looking them up in a Russian–English dictionary, which was in alphabetical order on magnetic tape. After recognizing that his first idea, insertion sort, would be slow, he came up with a new idea. He wrote the partition part in Mercury Autocode but had trouble dealing with the list of unsorted segments. On return to England, he was asked to write code for Shellsort. Hoare mentioned to his boss that he knew of a faster algorithm, and his boss bet a sixpence that he did not. His boss ultimately accepted that he had lost the bet. Hoare published a paper about his algorithm in The Computer Journal, Volume 5, Issue 1, 1962, Pages 10–16. Later, Hoare learned about ALGOL and its ability to do recursion, which enabled him to publish an improved version of the algorithm in ALGOL in Communications of the Association for Computing Machinery, the premier computer science journal of the time. The ALGOL code is published in Communications of the ACM (CACM), Volume 4, Issue 7 (July 1961), p. 321, as Algorithm 63: Partition and Algorithm 64: Quicksort. Quicksort gained widespread adoption, appearing, for example, in Unix as the default library sort subroutine. Hence, it lent its name to the C standard library subroutine qsort and is used in the reference implementation of Java. Robert Sedgewick's PhD thesis in 1975 is considered a milestone in the study of Quicksort, in which he resolved many open problems related to the analysis of various pivot selection schemes, including Samplesort and adaptive partitioning by Van Emden, as well as the derivation of the expected number of comparisons and swaps.
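The expected number of comparisons mentioned in Sedgewick's analysis has a standard closed form. As a sketch (assuming a uniformly random pivot, distinct keys, and the common convention that partitioning a range of n elements costs n − 1 comparisons), the expected comparison count C_n satisfies

$$C_0 = C_1 = 0, \qquad C_n = (n - 1) + \frac{2}{n}\sum_{k=0}^{n-1} C_k,$$

whose solution is

$$C_n = 2(n + 1)H_n - 4n \approx 2n\ln n \approx 1.39\, n\log_2 n,$$

where $H_n = \sum_{k=1}^{n} 1/k$ is the n-th harmonic number.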
In 1993, Jon Bentley and Doug McIlroy incorporated various improvements for use in programming libraries, including a technique to deal with equal elements and a pivot scheme known as pseudomedian of nine, where a sample of nine elements is divided into groups of three and then the median of the three medians from the three groups is chosen. Bentley described another, simpler and more compact, partitioning scheme in his book Programming Pearls that he attributed to Nico Lomuto. Bentley later wrote that he had used Hoare's version for years without ever really understanding it, whereas Lomuto's version was simple enough to prove correct. In the same essay, Bentley described Quicksort as the "most beautiful code I had ever written". Lomuto's partition scheme was also popularized by the textbook Introduction to Algorithms, although it is inferior to Hoare's scheme because it does three times more swaps on average and degrades to O(n²) runtime when all elements are equal. McIlroy would further produce an "AntiQuicksort" function in 1998, which consistently drives even his 1993 variant of Quicksort into quadratic behavior by producing adversarial data on-the-fly.
Algorithm
Quicksort is a type of divide-and-conquer algorithm for sorting an array, based on a partitioning routine; the details of this partitioning can vary somewhat, so that quicksort is really a family of closely related algorithms. Applied to a range of at least two elements, partitioning produces a division into two consecutive non-empty sub-ranges, in such a way that no element of the first sub-range is greater than any element of the second sub-range. After applying this partition, quicksort then recursively sorts the sub-ranges, possibly after excluding from them an element at the point of division that is at this point known to be already in its final location. Due to its recursive nature, quicksort (like the partition routine) has to be formulated so as to be callable for a range within a larger array, even if the ultimate goal is to sort a complete array. The steps for in-place quicksort are: If the range has fewer than two elements, return immediately as there is nothing to do. Possibly for other very short lengths a special-purpose sorting method is applied and the remainder of these steps is skipped. Otherwise, pick a value, called a pivot, that occurs in the range (the precise manner of choosing depends on the partition routine, and can involve randomness). Partition the range: reorder its elements, while determining a point of division, so that all elements with values less than the pivot come before the division, while all elements with values greater than the pivot come after it; elements that are equal to the pivot can go either way. Since at least one instance of the pivot is present, most partition routines ensure that the value that ends up at the point of division is equal to the pivot, and is now in its final position (but termination of quicksort does not depend on this, as long as sub-ranges strictly smaller than the original are produced). Finally, recursively apply quicksort to the sub-range up to the point of division and to the sub-range after it, possibly excluding from both ranges the element equal to the pivot at the point of division. (If the partition produces a possibly larger sub-range near the boundary where all elements are known to be equal to the pivot, these can be excluded as well.)
The choice of partition routine (including the pivot selection) and other details not entirely specified above can affect the algorithm's performance, possibly to a great extent for specific input arrays. In discussing the efficiency of quicksort, it is therefore necessary to specify these choices first. Here we mention two specific partition methods. Lomuto partition scheme This scheme is attributed to Nico Lomuto and popularized by Bentley in his book Programming Pearls and Cormen et al. in their book Introduction to Algorithms. In most formulations this scheme chooses as the pivot the last element in the array. The algorithm maintains index i as it scans the array using another index j such that the elements at lo through i - 1 (inclusive) are less than the pivot, and the elements at i through j (inclusive) are equal to or greater than the pivot. As this scheme is more compact and easy to understand, it is frequently used in introductory material, although it is less efficient than Hoare's original scheme, e.g., when all elements are equal. The complexity of Quicksort with this scheme degrades to O(n²) when the array is already in order, due to the partition being the worst possible one. There have been various variants proposed to boost performance including various ways to select the pivot, deal with equal elements, use other sorting algorithms such as insertion sort for small arrays, and so on. In pseudocode, a quicksort that sorts elements at lo through hi (inclusive) of an array A can be expressed as:

    // Sorts (a portion of) an array, divides it into partitions, then sorts those
    algorithm quicksort(A, lo, hi) is
        // Ensure indices are in correct order
        if lo >= hi || lo < 0 then
            return
        // Partition array and get the pivot index
        p := partition(A, lo, hi)
        // Sort the two partitions
        quicksort(A, lo, p - 1)   // Left side of pivot
        quicksort(A, p + 1, hi)   // Right side of pivot

    // Divides array into two partitions
    algorithm partition(A, lo, hi) is
        pivot := A[hi]   // Choose the last element as the pivot
        // Temporary pivot index
        i := lo
        for j := lo to hi - 1 do
            // If the current element is less than or equal to the pivot
            if A[j] <= pivot then
                // Swap the current element with the element at the temporary pivot index
                swap A[i] with A[j]
                // Move the temporary pivot index forward
                i := i + 1
        // Swap the pivot with the last element
        swap A[i] with A[hi]
        return i   // the pivot index

Sorting the entire array is accomplished by quicksort(A, 0, length(A) - 1). Hoare partition scheme The original partition scheme described by Tony Hoare uses two pointers (indices into the range) that start at both ends of the array being partitioned, then move toward each other, until they detect an inversion: a pair of elements, one greater than the pivot at the first pointer, and one less than the pivot at the second pointer; if at this point the first pointer is still before the second, these elements are in the wrong order relative to each other, and they are then exchanged. After this the pointers are moved inwards, and the search for an inversion is repeated; when eventually the pointers cross (the first points after the second), no exchange is performed; a valid partition is found, with the point of division between the crossed pointers (any entries that might be strictly between the crossed pointers are equal to the pivot and can be excluded from both sub-ranges formed). With this formulation it is possible that one sub-range turns out to be the whole original range, which would prevent the algorithm from advancing.
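A direct, runnable Python translation of the Lomuto-scheme pseudocode above might look like this (the function names are illustrative):

    def lomuto_partition(a, lo, hi):
        """Partition a[lo..hi] around a[hi]; return the pivot's final index."""
        pivot = a[hi]
        i = lo
        for j in range(lo, hi):
            if a[j] <= pivot:
                a[i], a[j] = a[j], a[i]
                i += 1
        a[i], a[hi] = a[hi], a[i]      # put the pivot into its final position
        return i

    def quicksort_lomuto(a, lo=0, hi=None):
        """In-place quicksort using the Lomuto partition scheme."""
        if hi is None:
            hi = len(a) - 1
        if lo >= hi or lo < 0:
            return
        p = lomuto_partition(a, lo, hi)
        quicksort_lomuto(a, lo, p - 1)   # left side of the pivot
        quicksort_lomuto(a, p + 1, hi)   # right side of the pivot

    data = [9, 4, 7, 1, 4, 0]
    quicksort_lomuto(data)
    print(data)                          # [0, 1, 4, 4, 7, 9]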
Hoare therefore stipulates that at the end, the sub-range containing the pivot element (which still is at its original position) can be decreased in size by excluding that pivot, after (if necessary) exchanging it with the sub-range element closest to the separation; thus, termination of quicksort is ensured. With respect to this original description, implementations often make minor but important variations. Notably, the scheme as presented below includes elements equal to the pivot among the candidates for an inversion (so "greater than or equal" and "less than or equal" tests are used instead of "greater than" and "less than" respectively; since the formulation uses do...while rather than repeat...until, this is actually reflected by the use of strict comparison operators). While there is no reason to exchange elements equal to the pivot, this change allows tests on the pointers themselves to be omitted, which are otherwise needed to ensure they do not run out of range. Indeed, since at least one instance of the pivot value is present in the range, the first advancement of either pointer cannot pass across this instance if an inclusive test is used; once an exchange is performed, these exchanged elements are now both strictly ahead of the pointer that found them, preventing that pointer from running off. (The latter is true independently of the test used, so it would be possible to use the inclusive test only when looking for the first inversion. However, using an inclusive test throughout also ensures that a division near the middle is found when all elements in the range are equal, which gives an important efficiency gain for sorting arrays with many equal elements.) The risk of producing a non-advancing separation is avoided in a different manner than described by Hoare. Such a separation can only result when no inversions are found, with both pointers advancing to the pivot element at the first iteration (they are then considered to have crossed, and no exchange takes place). In pseudocode,

    // Sorts (a portion of) an array, divides it into partitions, then sorts those
    algorithm quicksort(A, lo, hi) is
        if lo >= 0 && hi >= 0 && lo < hi then
            p := partition(A, lo, hi)
            quicksort(A, lo, p)      // Note: the pivot is now included
            quicksort(A, p + 1, hi)

    // Divides array into two partitions
    algorithm partition(A, lo, hi) is
        // Pivot value
        pivot := A[lo]   // Choose the first element as the pivot

        // Left index
        i := lo - 1

        // Right index
        j := hi + 1

        loop forever
            // Move the left index to the right at least once and while the element at
            // the left index is less than the pivot
            do i := i + 1 while A[i] < pivot

            // Move the right index to the left at least once and while the element at
            // the right index is greater than the pivot
            do j := j - 1 while A[j] > pivot

            // If the indices crossed, return
            if i >= j then return j

            // Swap the elements at the left and right indices
            swap A[i] with A[j]

The entire array is sorted by quicksort(A, 0, length(A) - 1). Hoare's scheme is more efficient than Lomuto's partition scheme because it does three times fewer swaps on average. Also, as mentioned, the implementation given creates a balanced partition even when all values are equal, which Lomuto's scheme does not. Like Lomuto's partition scheme, Hoare's partitioning also would cause Quicksort to degrade to O(n²) for already sorted input, if the pivot was chosen as the first or the last element. With the middle element as the pivot, however, sorted data results with (almost) no swaps in equally sized partitions, leading to best case behavior of Quicksort, i.e. O(n log n).
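Likewise, a runnable Python translation of the Hoare-scheme pseudocode above could be sketched as follows (function names illustrative); note that the recursion bounds are (lo, p) and (p + 1, hi), not (lo, p - 1) and (p + 1, hi) as in the Lomuto scheme:

    def hoare_partition(a, lo, hi):
        """Hoare partition of a[lo..hi] with a[lo] as pivot; return the split index j."""
        pivot = a[lo]
        i, j = lo - 1, hi + 1
        while True:
            i += 1
            while a[i] < pivot:
                i += 1
            j -= 1
            while a[j] > pivot:
                j -= 1
            if i >= j:
                return j             # not necessarily the pivot's final position
            a[i], a[j] = a[j], a[i]

    def quicksort_hoare(a, lo=0, hi=None):
        """In-place quicksort using the Hoare partition scheme."""
        if hi is None:
            hi = len(a) - 1
        if lo >= 0 and hi >= 0 and lo < hi:
            p = hoare_partition(a, lo, hi)
            quicksort_hoare(a, lo, p)      # the returned index stays in the left range
            quicksort_hoare(a, p + 1, hi)

    data = [3, 3, 1, 9, 2, 3]
    quicksort_hoare(data)
    print(data)                            # [1, 2, 3, 3, 3, 9]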
Like others, Hoare's partitioning doesn't produce a stable sort. In this scheme, the pivot's final location is not necessarily at the index that is returned, as the pivot and elements equal to the pivot can end up anywhere within the partition after a partition step, and may not be sorted until the base case of a partition with a single element is reached via recursion. Therefore, the next two segments that the main algorithm recurs on are (elements ≤ pivot) and (elements ≥ pivot) as opposed to and as in Lomuto's scheme. Subsequent recursions (expansion on previous paragraph) Let's expand a little bit on the next two segments that the main algorithm recurs on. Because we are using strict comparators (>, <) in the loops to prevent ourselves from running out of range, there's a chance that the pivot itself gets swapped with other elements in the partition function. Therefore, the index returned in the partition function isn't necessarily where the actual pivot is. Consider the example of , following the scheme, after the first partition the array becomes , the "index" returned is 2, which is the number 1, when the real pivot, the one we chose to start the partition with was the number 3. With this example, we see how it is necessary to include the returned index of the partition function in our subsequent recursions. As a result, we are presented with the choices of either recursing on and , or and . Which of the two options we choose depends on which index (i or j) we return in the partition function when the indices cross, and how we choose our pivot in the partition function (floor v.s. ceiling). Let's first examine the choice of recursing on and , with the example of sorting an array where multiple identical elements exist . If index i (the "latter" index) is returned after indices cross in the partition function, the index 1 would be returned after the first partition. The subsequent recursion on would be on (0, 1), which corresponds to the exact same array . A non-advancing separation that causes infinite recursion is produced. It is therefore obvious that when recursing on and , because the left half of the recursion includes the returned index, it is the partition function's job to exclude the "tail" in non-advancing scenarios. Which is to say, index j (the "former" index when indices cross) should be returned instead of i. Going with a similar logic, when considering the example of an already sorted array , the choice of pivot needs to be "floor" to ensure that the pointers stop on the "former" instead of the "latter" (with "ceiling" as the pivot, the index 1 would be returned and included in causing infinite recursion). It is for the exact same reason why choice of the last element as pivot must be avoided. The choice of recursing on and follows the exact same logic as above. Because the right half of the recursion includes the returned index, it is the partition function's job to exclude the "head" in non-advancing scenarios. The index i (the "latter" index after the indices cross) in the partition function needs to be returned, and "ceiling" needs to be chosen as the pivot. The two nuances are clear, again, when considering the examples of sorting an array where multiple identical elements exist (), and an already sorted array respectively. It is noteworthy that with version of recursion, for the same reason, choice of the first element as pivot must be avoided. 
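To make the point about the returned index concrete, the short self-contained sketch below runs a single Hoare partition step on an illustrative array and prints the returned index next to the pivot's actual position, showing that they need not coincide:

    def hoare_partition(a, lo, hi):
        """One Hoare partition step over a[lo..hi] with a[lo] as pivot; returns the split index j."""
        pivot = a[lo]
        i, j = lo - 1, hi + 1
        while True:
            i += 1
            while a[i] < pivot:
                i += 1
            j -= 1
            while a[j] > pivot:
                j -= 1
            if i >= j:
                return j
            a[i], a[j] = a[j], a[i]

    a = [3, 5, 8, 9, 1, 2]                   # illustrative input; the pivot is a[0] = 3
    split = hoare_partition(a, 0, len(a) - 1)
    print(a)                                 # [2, 1, 8, 9, 5, 3]
    print(split)                             # 1  -- the index returned by the partition
    print(a.index(3))                        # 5  -- where the pivot value actually ended up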
Implementation issues Choice of pivot In the very early versions of quicksort, the leftmost element of the partition would often be chosen as the pivot element. Unfortunately, this causes worst-case behavior on already sorted arrays, which is a rather common use-case. The problem was easily solved by choosing either a random index for the pivot, choosing the middle index of the partition or (especially for longer partitions) choosing the median of the first, middle and last element of the partition for the pivot (as recommended by Sedgewick). This "median-of-three" rule counters the case of sorted (or reverse-sorted) input, and gives a better estimate of the optimal pivot (the true median) than selecting any single element, when no information about the ordering of the input is known. Median-of-three code snippet for Lomuto partition:

    mid := ⌊(lo + hi) / 2⌋
    if A[mid] < A[lo]
        swap A[lo] with A[mid]
    if A[hi] < A[lo]
        swap A[lo] with A[hi]
    if A[mid] < A[hi]
        swap A[mid] with A[hi]
    pivot := A[hi]

It puts a median into A[hi] first, then that new value of A[hi] is used for a pivot, as in a basic algorithm presented above. Specifically, the expected number of comparisons needed to sort n elements (see the average-case analysis below) with random pivot selection is about 1.386 n log₂ n. Median-of-three pivoting brings this down to about 1.188 n log₂ n, at the expense of a three-percent increase in the expected number of swaps. An even stronger pivoting rule, for larger arrays, is to pick the ninther, a recursive median-of-three (Mo3), defined as the median of the medians of three samples of three: ninther(a) = median(Mo3(first third of a), Mo3(middle third of a), Mo3(final third of a)). Selecting a pivot element is also complicated by the existence of integer overflow. If the boundary indices of the subarray being sorted are sufficiently large, the naïve expression for the middle index, (lo + hi)/2, will cause overflow and provide an invalid pivot index. This can be overcome by using, for example, lo + (hi − lo)/2 to index the middle element, at the cost of more complex arithmetic. Similar issues arise in some other methods of selecting the pivot element. Repeated elements With a partitioning algorithm such as the Lomuto partition scheme described above (even one that chooses good pivot values), quicksort exhibits poor performance for inputs that contain many repeated elements. The problem is clearly apparent when all the input elements are equal: at each recursion, the left partition is empty (no input values are less than the pivot), and the right partition has only decreased by one element (the pivot is removed). Consequently, the Lomuto partition scheme takes quadratic time to sort an array of equal values. However, with a partitioning algorithm such as the Hoare partition scheme, repeated elements generally result in better partitioning, and although needless swaps of elements equal to the pivot may occur, the running time generally decreases as the number of repeated elements increases (with memory cache reducing the swap overhead). In the case where all elements are equal, the Hoare partition scheme needlessly swaps elements, but the partitioning itself is best case, as noted in the Hoare partition section above. To solve the Lomuto partition scheme problem (sometimes called the Dutch national flag problem), an alternative linear-time partition routine can be used that separates the values into three groups: values less than the pivot, values equal to the pivot, and values greater than the pivot. (Bentley and McIlroy call this a "fat partition" and it was already implemented in the qsort of Version 7 Unix.) The values equal to the pivot are already sorted, so only the less-than and greater-than partitions need to be recursively sorted.
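A small Python sketch of the median-of-three selection above (the function name is illustrative); the midpoint is computed in the overflow-safe form discussed in the text, which matters in languages with fixed-width integers even though Python integers cannot overflow:

    def median_of_three_pivot(a, lo, hi):
        """Reorder a[lo], a[mid], a[hi] so that a[hi] holds their median; return that pivot."""
        # lo + (hi - lo) // 2 avoids the overflow that (lo + hi) // 2 can cause
        # in languages with fixed-width integers; in Python it is just good habit.
        mid = lo + (hi - lo) // 2
        if a[mid] < a[lo]:
            a[lo], a[mid] = a[mid], a[lo]
        if a[hi] < a[lo]:
            a[lo], a[hi] = a[hi], a[lo]
        if a[mid] < a[hi]:
            a[mid], a[hi] = a[hi], a[mid]
        return a[hi]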
In pseudocode, the quicksort algorithm becomes:

    // Sorts (a portion of) an array, divides it into partitions, then sorts those
    algorithm quicksort(A, lo, hi) is
        if lo >= 0 && lo < hi then
            lt, gt := partition(A, lo, hi)   // Multiple return values
            quicksort(A, lo, lt - 1)
            quicksort(A, gt + 1, hi)

    // Divides array into three partitions
    algorithm partition(A, lo, hi) is
        // Pivot value
        pivot := A[(lo + hi) / 2]   // Choose the middle element as the pivot (integer division)

        // Lesser, equal and greater index
        lt := lo
        eq := lo
        gt := hi

        // Iterate and compare all elements with the pivot
        while eq <= gt do
            if A[eq] < pivot then
                // Swap the elements at the equal and lesser indices
                swap A[eq] with A[lt]
                // Increase lesser index
                lt := lt + 1
                // Increase equal index
                eq := eq + 1
            else if A[eq] > pivot then
                // Swap the elements at the equal and greater indices
                swap A[eq] with A[gt]
                // Decrease greater index
                gt := gt - 1
            else   // if A[eq] = pivot then
                // Increase equal index
                eq := eq + 1

        // Return lesser and greater indices
        return lt, gt

The partition algorithm returns indices to the first ('leftmost') and to the last ('rightmost') item of the middle partition. Every item of the middle partition is equal to the pivot and is therefore sorted. Consequently, the items of the middle partition need not be included in the recursive calls to quicksort. The best case for the algorithm now occurs when all elements are equal (or are chosen from a small set of elements). In the case of all equal elements, the modified quicksort will perform only two recursive calls on empty subarrays and thus finish in linear time (assuming the partition subroutine takes no longer than linear time). Optimizations Other important optimizations, also suggested by Sedgewick and widely used in practice, are: To make sure at most O(log n) space is used, recur first into the smaller side of the partition, then use a tail call to recur into the other, or update the parameters to no longer include the now sorted smaller side, and iterate to sort the larger side. When the number of elements is below some threshold (perhaps ten elements), switch to a non-recursive sorting algorithm such as insertion sort that performs fewer swaps, comparisons or other operations on such small arrays. The ideal 'threshold' will vary based on the details of the specific implementation. An older variant of the previous optimization: when the number of elements is less than the threshold k, simply stop; then after the whole array has been processed, perform insertion sort on it. Stopping the recursion early leaves the array k-sorted, meaning that each element is at most k positions away from its final sorted position. In this case, insertion sort takes O(k × n) time to finish the sort, which is linear if k is a constant. Compared to the "many small sorts" optimization, this version may execute fewer instructions, but it makes suboptimal use of the cache memories in modern computers. Parallelization Quicksort's divide-and-conquer formulation makes it amenable to parallelization using task parallelism. The partitioning step is accomplished through the use of a parallel prefix sum algorithm to compute an index for each array element in its section of the partitioned array. Given an array of size n, the partitioning step performs O(n) work in O(log n) time and requires O(n) additional scratch space. After the array has been partitioned, the two partitions can be sorted recursively in parallel. Assuming an ideal choice of pivots, parallel quicksort sorts an array of size n in O(n log n) work in O(log² n) time using O(n) additional space.
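The small-array cutoff and the "recurse into the smaller side, iterate on the larger" optimizations described above can be combined in a Python sketch such as the following; the threshold of 10 and the Lomuto-style partition are illustrative choices:

    def insertion_sort(a, lo, hi):
        """Sort a[lo..hi] in place by insertion sort (used for small ranges)."""
        for i in range(lo + 1, hi + 1):
            key, j = a[i], i - 1
            while j >= lo and a[j] > key:
                a[j + 1] = a[j]
                j -= 1
            a[j + 1] = key

    def partition(a, lo, hi):
        """Lomuto partition with a[hi] as pivot; returns the pivot's final index."""
        pivot, i = a[hi], lo
        for j in range(lo, hi):
            if a[j] <= pivot:
                a[i], a[j] = a[j], a[i]
                i += 1
        a[i], a[hi] = a[hi], a[i]
        return i

    THRESHOLD = 10   # illustrative cutoff

    def quicksort_optimized(a, lo=0, hi=None):
        if hi is None:
            hi = len(a) - 1
        while lo < hi:
            if hi - lo + 1 < THRESHOLD:
                insertion_sort(a, lo, hi)   # small range: switch algorithms
                return
            p = partition(a, lo, hi)
            # Recurse into the smaller side, iterate on the larger one,
            # keeping the stack depth O(log n).
            if p - lo < hi - p:
                quicksort_optimized(a, lo, p - 1)
                lo = p + 1
            else:
                quicksort_optimized(a, p + 1, hi)
                hi = p - 1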
Quicksort has some disadvantages when compared to alternative sorting algorithms, like merge sort, which complicate its efficient parallelization. The depth of quicksort's divide-and-conquer tree directly impacts the algorithm's scalability, and this depth is highly dependent on the algorithm's choice of pivot. Additionally, it is difficult to parallelize the partitioning step efficiently in-place. The use of scratch space simplifies the partitioning step, but increases the algorithm's memory footprint and constant overheads. Other more sophisticated parallel sorting algorithms can achieve even better time bounds. For example, in 1991 David M W Powers described a parallelized quicksort (and a related radix sort) that can operate in time on a CRCW (concurrent read and concurrent write) PRAM (parallel random-access machine) with processors by performing partitioning implicitly. Formal analysis Worst-case analysis The most unbalanced partition occurs when one of the sublists returned by the partitioning routine is of size . This may occur if the pivot happens to be the smallest or largest element in the list, or in some implementations (e.g., the Lomuto partition scheme as described above) when all the elements are equal. If this happens repeatedly in every partition, then each recursive call processes a list of size one less than the previous list. Consequently, we can make nested calls before we reach a list of size 1. This means that the call tree is a linear chain of nested calls. The th call does work to do the partition, and , so in that case quicksort takes time. Best-case analysis In the most balanced case, each time we perform a partition we divide the list into two nearly equal pieces. This means each recursive call processes a list of half the size. Consequently, we can make only nested calls before we reach a list of size 1. This means that the depth of the call tree is . But no two calls at the same level of the call tree process the same part of the original list; thus, each level of calls needs only time all together (each call has some constant overhead, but since there are only calls at each level, this is subsumed in the factor). The result is that the algorithm uses only time. Average-case analysis To sort an array of distinct elements, quicksort takes time in expectation, averaged over all permutations of elements with equal probability. Alternatively, if the algorithm selects the pivot uniformly at random from the input array, the same analysis can be used to bound the expected running time for any input sequence; the expectation is then taken over the random choices made by the algorithm (Cormen et al., Introduction to Algorithms, Section 7.3). We list here three common proofs to this claim providing different insights into quicksort's workings. Using percentiles If each pivot has rank somewhere in the middle 50 percent, that is, between the 25th percentile and the 75th percentile, then it splits the elements with at least 25% and at most 75% on each side. If we could consistently choose such pivots, we would only have to split the list at most times before reaching lists of size 1, yielding an algorithm. When the input is a random permutation, the pivot has a random rank, and so it is not guaranteed to be in the middle 50 percent. However, when we start from a random permutation, in each recursive call the pivot has a random rank in its list, and so it is in the middle 50 percent about half the time. That is good enough. 
Imagine that a coin is flipped: heads means that the rank of the pivot is in the middle 50 percent, tail means that it isn't. Now imagine that the coin is flipped over and over until it gets heads. Although this could take a long time, on average only flips are required, and the chance that the coin won't get heads after flips is highly improbable (this can be made rigorous using Chernoff bounds). By the same argument, Quicksort's recursion will terminate on average at a call depth of only . But if its average call depth is , and each level of the call tree processes at most elements, the total amount of work done on average is the product, . The algorithm does not have to verify that the pivot is in the middle half—if we hit it any constant fraction of the times, that is enough for the desired complexity. Using recurrences An alternative approach is to set up a recurrence relation for the factor, the time needed to sort a list of size . In the most unbalanced case, a single quicksort call involves work plus two recursive calls on lists of size and , so the recurrence relation is This is the same relation as for insertion sort and selection sort, and it solves to worst case . In the most balanced case, a single quicksort call involves work plus two recursive calls on lists of size , so the recurrence relation is The master theorem for divide-and-conquer recurrences tells us that . The outline of a formal proof of the expected time complexity follows. Assume that there are no duplicates as duplicates could be handled with linear time pre- and post-processing, or considered cases easier than the analyzed. When the input is a random permutation, the rank of the pivot is uniform random from 0 to . Then the resulting parts of the partition have sizes and , and i is uniform random from 0 to . So, averaging over all possible splits and noting that the number of comparisons for the partition is , the average number of comparisons over all permutations of the input sequence can be estimated accurately by solving the recurrence relation: Solving the recurrence gives . This means that, on average, quicksort performs only about 39% worse than in its best case. In this sense, it is closer to the best case than the worst case. A comparison sort cannot use less than comparisons on average to sort items (as explained in the article Comparison sort) and in case of large , Stirling's approximation yields , so quicksort is not much worse than an ideal comparison sort. This fast average runtime is another reason for quicksort's practical dominance over other sorting algorithms. Using a binary search tree The following binary search tree (BST) corresponds to each execution of quicksort: the initial pivot is the root node; the pivot of the left half is the root of the left subtree, the pivot of the right half is the root of the right subtree, and so on. The number of comparisons of the execution of quicksort equals the number of comparisons during the construction of the BST by a sequence of insertions. So, the average number of comparisons for randomized quicksort equals the average cost of constructing a BST when the values inserted form a random permutation. Consider a BST created by insertion of a sequence of values forming a random permutation. Let denote the cost of creation of the BST. We have , where is a binary random variable expressing whether during the insertion of there was a comparison to . By linearity of expectation, the expected value of is . Fix and . 
The values , once sorted, define intervals. The core structural observation is that is compared to in the algorithm if and only if falls inside one of the two intervals adjacent to . Observe that since is a random permutation, is also a random permutation, so the probability that is adjacent to is exactly . We end with a short calculation: Space complexity The space used by quicksort depends on the version used. The in-place version of quicksort has a space complexity of , even in the worst case, when it is carefully implemented using the following strategies. In-place partitioning is used. This unstable partition requires space. After partitioning, the partition with the fewest elements is (recursively) sorted first, requiring at most space. Then the other partition is sorted using tail recursion or iteration, which doesn't add to the call stack. This idea, as discussed above, was described by R. Sedgewick, and keeps the stack depth bounded by . Quicksort with in-place and unstable partitioning uses only constant additional space before making any recursive call. Quicksort must store a constant amount of information for each nested recursive call. Since the best case makes at most nested recursive calls, it uses space. However, without Sedgewick's trick to limit the recursive calls, in the worst case quicksort could make nested recursive calls and need auxiliary space. From a bit complexity viewpoint, variables such as lo and hi do not use constant space; it takes bits to index into a list of items. Because there are such variables in every stack frame, quicksort using Sedgewick's trick requires bits of space. This space requirement isn't too terrible, though, since if the list contained distinct elements, it would need at least bits of space. Another, less common, not-in-place, version of quicksort uses space for working storage and can implement a stable sort. The working storage allows the input array to be easily partitioned in a stable manner and then copied back to the input array for successive recursive calls. Sedgewick's optimization is still appropriate. Relation to other algorithms Quicksort is a space-optimized version of the binary tree sort. Instead of inserting items sequentially into an explicit tree, quicksort organizes them concurrently into a tree that is implied by the recursive calls. The algorithms make exactly the same comparisons, but in a different order. An often desirable property of a sorting algorithm is stability – that is the order of elements that compare equal is not changed, allowing controlling order of multikey tables (e.g. directory or folder listings) in a natural way. This property is hard to maintain for in-place quicksort (that uses only constant additional space for pointers and buffers, and additional space for the management of explicit or implicit recursion). For variant quicksorts involving extra memory due to representations using pointers (e.g. lists or trees) or files (effectively lists), it is trivial to maintain stability. The more complex, or disk-bound, data structures tend to increase time cost, in general making increasing use of virtual memory or disk. The most direct competitor of quicksort is heapsort. Heapsort has the advantages of simplicity, and a worst case run time of , but heapsort's average running time is usually considered slower than in-place quicksort, primarily due to its worse locality of reference. This result is debatable; some publications indicate the opposite. 
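As a sketch of the recurrence and binary-search-tree arguments above (assuming distinct keys and uniformly random pivots; these are the standard textbook formulas rather than a quotation of the original analysis):

    T_{\text{worst}}(n) = T(n-1) + \Theta(n) \;\Rightarrow\; T_{\text{worst}}(n) = \Theta(n^2)

    T_{\text{best}}(n) = 2\,T(n/2) + \Theta(n) \;\Rightarrow\; T_{\text{best}}(n) = \Theta(n \log n)

    C(n) = n - 1 + \frac{2}{n}\sum_{i=0}^{n-1} C(i) \;\Rightarrow\; C(n) = 2n \ln n + O(n) \approx 1.39\, n \log_2 n

    \mathbb{E}[\text{comparisons}] = \sum_{i<j} \Pr[x_i \text{ compared with } x_j] = \sum_{i<j} \frac{2}{j-i+1} \le 2 n H_n = O(n \log n)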
The main disadvantage of quicksort is the implementation complexity required to avoid bad pivot choices and the resultant performance. Introsort is a variant of quicksort which solves this problem by switching to heapsort when a bad case is detected. Major programming languages, such as C++ (in the GNU and LLVM implementations), use introsort. Quicksort also competes with merge sort, another sorting algorithm. Merge sort's main advantages are that it is a stable sort and has excellent worst-case performance. The main disadvantage of merge sort is that it is an out-of-place algorithm, so when operating on arrays, efficient implementations require auxiliary space (vs. for quicksort with in-place partitioning and tail recursion, or for heapsort). Merge sort works very well on linked lists, requiring only a small, constant amount of auxiliary storage. Although quicksort can be implemented as a stable sort using linked lists, there is no reason to; it will often suffer from poor pivot choices without random access, and is essentially always inferior to merge sort. Merge sort is also the algorithm of choice for external sorting of very large data sets stored on slow-to-access media such as disk storage or network-attached storage. Bucket sort with two buckets is very similar to quicksort; the pivot in this case is effectively the value in the middle of the value range, which does well on average for uniformly distributed inputs. Selection-based pivoting A selection algorithm chooses the th smallest of a list of numbers; this is an easier problem in general than sorting. One simple but effective selection algorithm works nearly in the same manner as quicksort, and is accordingly known as quickselect. The difference is that instead of making recursive calls on both sublists, it only makes a single tail-recursive call on the sublist that contains the desired element. This change lowers the average complexity to linear or time, which is optimal for selection, but the selection algorithm is still in the worst case. A variant of quickselect, the median of medians algorithm, chooses pivots more carefully, ensuring that the pivots are near the middle of the data (between the 30th and 70th percentiles), and thus has guaranteed linear time – . This same pivot strategy can be used to construct a variant of quicksort (median of medians quicksort) with time. However, the overhead of choosing the pivot is significant, so this is generally not used in practice. More abstractly, given an selection algorithm, one can use it to find the ideal pivot (the median) at every step of quicksort and thus produce a sorting algorithm with running time. Practical implementations of this variant are considerably slower on average, but they are of theoretical interest because they show an optimal selection algorithm can yield an optimal sorting algorithm. Variants Multi-pivot quicksort Instead of partitioning into two subarrays using a single pivot, multi-pivot quicksort (also multiquicksort) partitions its input into some number of subarrays using pivots. While the dual-pivot case () was considered by Sedgewick and others already in the mid-1970s, the resulting algorithms were not faster in practice than the "classical" quicksort. A 1999 assessment of a multiquicksort with a variable number of pivots, tuned to make efficient use of processor caches, found it to increase the instruction count by some 20%, but simulation results suggested that it would be more efficient on very large inputs. 
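The quickselect idea described above — partition once, then recurse only into the side that contains the sought element — can be sketched in Python as follows (the Lomuto-style partition and the function names are illustrative):

    def partition(a, lo, hi):
        """Lomuto partition with a[hi] as pivot; returns the pivot's final index."""
        pivot, i = a[hi], lo
        for j in range(lo, hi):
            if a[j] <= pivot:
                a[i], a[j] = a[j], a[i]
                i += 1
        a[i], a[hi] = a[hi], a[i]
        return i

    def quickselect(a, k):
        """Return the k-th smallest element (0 <= k < len(a)); modifies a in place."""
        lo, hi = 0, len(a) - 1
        while True:
            p = partition(a, lo, hi)
            if p == k:
                return a[p]
            elif p < k:
                lo = p + 1      # the answer lies in the right part
            else:
                hi = p - 1      # the answer lies in the left part

    print(quickselect([7, 2, 9, 4, 1], 2))   # 4 (the third smallest)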
A version of dual-pivot quicksort developed by Yaroslavskiy in 2009 turned out to be fast enough to warrant implementation in Java 7, as the standard algorithm to sort arrays of primitives (sorting arrays of objects is done using Timsort). The performance benefit of this algorithm was subsequently found to be mostly related to cache performance, and experimental results indicate that the three-pivot variant may perform even better on modern machines. External quicksort For disk files, an external sort based on partitioning similar to quicksort is possible. It is slower than external merge sort, but doesn't require extra disk space. 4 buffers are used, 2 for input, 2 for output. Let number of records in the file, the number of records per buffer, and the number of buffer segments in the file. Data is read (and written) from both ends of the file inwards. Let represent the segments that start at the beginning of the file and represent segments that start at the end of the file. Data is read into the and read buffers. A pivot record is chosen and the records in the and buffers other than the pivot record are copied to the write buffer in ascending order and write buffer in descending order based comparison with the pivot record. Once either or buffer is filled, it is written to the file and the next or buffer is read from the file. The process continues until all segments are read and one write buffer remains. If that buffer is an write buffer, the pivot record is appended to it and the buffer written. If that buffer is a write buffer, the pivot record is prepended to the buffer and the buffer written. This constitutes one partition step of the file, and the file is now composed of two subfiles. The start and end positions of each subfile are pushed/popped to a stand-alone stack or the main stack via recursion. To limit stack space to , the smaller subfile is processed first. For a stand-alone stack, push the larger subfile parameters onto the stack, iterate on the smaller subfile. For recursion, recurse on the smaller subfile first, then iterate to handle the larger subfile. Once a sub-file is less than or equal to 4 B records, the subfile is sorted in-place via quicksort and written. That subfile is now sorted and in place in the file. The process is continued until all sub-files are sorted and in place. The average number of passes on the file is approximately , but worst case pattern is passes (equivalent to for worst case internal sort). Three-way radix quicksort This algorithm is a combination of radix sort and quicksort. Pick an element from the array (the pivot) and consider the first character (key) of the string (multikey). Partition the remaining elements into three sets: those whose corresponding character is less than, equal to, and greater than the pivot's character. Recursively sort the "less than" and "greater than" partitions on the same character. Recursively sort the "equal to" partition by the next character (key). Given we sort using bytes or words of length bits, the best case is and the worst case or at least as for standard quicksort, given for unique keys , and is a hidden constant in all standard comparison sort algorithms including quicksort. This is a kind of three-way quicksort in which the middle partition represents a (trivially) sorted subarray of elements that are exactly equal to the pivot. Quick radix sort Also developed by Powers as an parallel PRAM algorithm. 
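A hedged Python sketch of the three-way radix ("multikey") quicksort described above, partitioning a list of strings on one character position at a time (the end-of-string sentinel and the function name are illustrative assumptions, not Sedgewick and Bentley's code):

    def multikey_quicksort(strings, d=0):
        """Three-way radix quicksort on a list of strings, keying on character position d."""
        if len(strings) <= 1:
            return strings
        def key(s):
            # Strings shorter than d+1 characters sort before everything at this position.
            return ord(s[d]) if d < len(s) else -1
        pivot = key(strings[len(strings) // 2])
        less    = [s for s in strings if key(s) < pivot]
        equal   = [s for s in strings if key(s) == pivot]
        greater = [s for s in strings if key(s) > pivot]
        # "less" and "greater" are sorted again on the same character position;
        # "equal" moves on to the next character unless the end of string was reached.
        sorted_equal = multikey_quicksort(equal, d + 1) if pivot != -1 else equal
        return multikey_quicksort(less, d) + sorted_equal + multikey_quicksort(greater, d)

    print(multikey_quicksort(["she", "sells", "sea", "shells", "by", "the", "sea"]))
    # ['by', 'sea', 'sea', 'sells', 'she', 'shells', 'the']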
This is again a combination of radix sort and quicksort but the quicksort left/right partition decision is made on successive bits of the key, and is thus for -bit keys. All comparison sort algorithms implicitly assume the transdichotomous model with in , as if is smaller we can sort in time using a hash table or integer sorting. If but elements are unique within bits, the remaining bits will not be looked at by either quicksort or quick radix sort. Failing that, all comparison sorting algorithms will also have the same overhead of looking through relatively useless bits but quick radix sort will avoid the worst case behaviours of standard quicksort and radix quicksort, and will be faster even in the best case of those comparison algorithms under these conditions of . See Powers for further discussion of the hidden overheads in comparison, radix and parallel sorting. BlockQuicksort In any comparison-based sorting algorithm, minimizing the number of comparisons requires maximizing the amount of information gained from each comparison, meaning that the comparison results are unpredictable. This causes frequent branch mispredictions, limiting performance. BlockQuicksort rearranges the computations of quicksort to convert unpredictable branches to data dependencies. When partitioning, the input is divided into moderate-sized blocks (which fit easily into the data cache), and two arrays are filled with the positions of elements to swap. (To avoid conditional branches, the position is unconditionally stored at the end of the array, and the index of the end is incremented if a swap is needed.) A second pass exchanges the elements at the positions indicated in the arrays. Both loops have only one conditional branch, a test for termination, which is usually taken. The BlockQuicksort technique is incorporated into LLVM's C++ STL implementation, libcxx, providing a 50% improvement on random integer sequences. Pattern-defeating quicksort (pdqsort), a version of introsort, also incorporates this technique. Partial and incremental quicksort Several variants of quicksort exist that separate the smallest or largest elements from the rest of the input. Generalization Richard Cole and David C. Kandathil, in 2004, discovered a one-parameter family of sorting algorithms, called partition sorts, which on average (with all input orderings equally likely) perform at most comparisons (close to the information theoretic lower bound) and operations; at worst they perform comparisons (and also operations); these are in-place, requiring only additional space. Practical efficiency and smaller variance in performance were demonstrated against optimised quicksorts (of Sedgewick and Bentley-McIlroy).
Mathematics
Algorithms
null
3268926
https://en.wikipedia.org/wiki/Great%20Oxidation%20Event
Great Oxidation Event
The Great Oxidation Event (GOE) or Great Oxygenation Event, also called the Oxygen Catastrophe, Oxygen Revolution, Oxygen Crisis or Oxygen Holocaust, was a time interval during the Earth's Paleoproterozoic era when the Earth's atmosphere and shallow seas first experienced a rise in the concentration of free oxygen. This began approximately 2.460–2.426 Ga (billion years) ago during the Siderian period and ended approximately 2.060 Ga ago during the Rhyacian. Geological, isotopic and chemical evidence suggests that biologically produced molecular oxygen (dioxygen or O2) started to accumulate in the Archean prebiotic atmosphere due to microbial photosynthesis, and eventually changed it from a weakly reducing atmosphere practically devoid of oxygen into an oxidizing one containing abundant free oxygen, with oxygen levels being as high as 10% of modern atmospheric level by the end of the GOE. The appearance of highly reactive free oxygen, which can oxidize organic compounds (especially genetic materials) and thus is toxic to the then-mostly anaerobic biosphere, may have caused the extinction/extirpation of many early organisms on Earth – mostly archaeal colonies that used retinal to use green-spectrum light energy and power a form of anoxygenic photosynthesis (see Purple Earth hypothesis). Although the event is inferred to have constituted a mass extinction, due in part to the great difficulty in surveying microscopic organisms' abundances, and in part to the extreme age of fossil remains from that time, the Great Oxidation Event is typically not counted among conventional lists of "great extinctions", which are implicitly limited to the Phanerozoic eon. In any case, isotope geochemistry data from sulfate minerals have been interpreted to indicate a decrease in the size of the biosphere of >80% associated with changes in nutrient supplies at the end of the GOE. The GOE is inferred to have been caused by cyanobacteria, which evolved chlorophyll-based photosynthesis that releases dioxygen as a byproduct of water photolysis. The continually produced oxygen eventually depleted all the surface reducing capacity from ferrous iron, sulfur, hydrogen sulfide and atmospheric methane over nearly a billion years. The oxidative environmental change, compounded by a global glaciation, devastated the microbial mats around the Earth's surface. The subsequent adaptation of surviving archaea via symbiogenesis with aerobic proteobacteria (which went endosymbiont and became mitochondria) may have led to the rise of eukaryotic organisms and the subsequent evolution of multicellular life-forms. Early atmosphere The composition of the Earth's earliest atmosphere is not known with certainty. However, the bulk was likely nitrogen , and carbon dioxide , which are also the predominant nitrogen- and carbon-bearing gases produced by volcanism today. These are relatively inert gases. Oxygen, , meanwhile, was present in the atmosphere at just 0.001% of its present atmospheric level. The Sun shone at about 70% of its current brightness 4 billion years ago, but there is strong evidence that liquid water existed on Earth at the time. A warm Earth, in spite of a faint Sun, is known as the faint young Sun paradox. Either levels were much higher at the time, providing enough of a greenhouse effect to warm the Earth, or other greenhouse gases were present. The most likely such gas is methane, , which is a powerful greenhouse gas and was produced by early forms of life known as methanogens. 
Scientists continue to research how the Earth was warmed before life arose. An atmosphere of and with trace amounts of , , carbon monoxide (), and hydrogen () is described as a weakly reducing atmosphere. Such an atmosphere contains practically no oxygen. The modern atmosphere contains abundant oxygen (nearly 21%), making it an oxidizing atmosphere. The rise in oxygen is attributed to photosynthesis by cyanobacteria, which are thought to have evolved as early as 3.5 billion years ago. The current scientific understanding of when and how the Earth's atmosphere changed from a weakly reducing to a strongly oxidizing atmosphere largely began with the work of the American geologist Preston Cloud in the 1970s. Cloud observed that detrital sediments older than about 2 billion years contained grains of pyrite, uraninite, and siderite, all minerals containing reduced forms of iron or uranium that are not found in younger sediments because they are rapidly oxidized in an oxidizing atmosphere. He further observed that continental red beds, which get their color from the oxidized (ferric) mineral hematite, began to appear in the geological record at about this time. Banded iron formation largely disappears from the geological record at 1.85 Ga, after peaking at about 2.5 Ga. Banded iron formation can form only when abundant dissolved ferrous iron is transported into depositional basins, and an oxygenated ocean blocks such transport by oxidizing the iron to form insoluble ferric iron compounds. The end of the deposition of banded iron formation at 1.85 Ga is therefore interpreted as marking the oxygenation of the deep ocean. Heinrich Holland further elaborated these ideas through the 1980s, placing the main time interval of oxygenation between 2.2 and 1.9 Ga. Constraining the onset of atmospheric oxygenation has proven particularly challenging for geologists and geochemists. While there is a widespread consensus that initial oxygenation of the atmosphere happened sometime during the first half of the Paleoproterozoic, there is disagreement on the exact timing of this event. Scientific publications between 2016–2022 have differed in the inferred timing of the onset of atmospheric oxygenation by approximately 500 million years; estimates of 2.7 Ga, 2.501–2.434 Ga 2.501–2.225 Ga, 2.460–2.426 Ga, 2.430 Ga, 2.33 Ga, and 2.3 Ga have been given. Factors limiting calculations include an incomplete sedimentary record for the Paleoproterozoic (e.g., because of subduction and metamorphism), uncertainties in depositional ages for many ancient sedimentary units, and uncertainties related to the interpretation of different geological/geochemical proxies. While the effects of an incomplete geological record have been discussed and quantified in the field of paleontology for several decades, particularly with respect to the evolution and extinction of organisms (the Signor–Lipps effect), this is rarely quantified when considering geochemical records and may therefore lead to uncertainties for scientists studying the timing of atmospheric oxygenation. Geological evidence Evidence for the Great Oxidation Event is provided by a variety of petrological and geochemical markers that define this geological event. Continental indicators Paleosols, detrital grains, and red beds are evidence of low oxygen levels. Paleosols (fossil soils) older than 2.4 billion years old have low iron concentrations that suggest anoxic weathering. 
Detrital grains composed of pyrite, siderite, and uraninite (redox-sensitive detrital minerals) are found in sediments older than ca. 2.4 Ga. These minerals are only stable under low oxygen conditions, and so their occurrence as detrital minerals in fluvial and deltaic sediments are widely interpreted as evidence of an anoxic atmosphere. In contrast to redox-sensitive detrital minerals are red beds, red-colored sandstones that are coated with hematite. The occurrence of red beds indicates that there was sufficient oxygen to oxidize iron to its ferric state, and these represent a marked contrast to sandstones deposited under anoxic conditions which are often beige, white, grey, or green. Banded iron formation Banded iron formations are composed of thin alternating layers of chert (a fine-grained form of silica) and iron oxides (magnetite and hematite). Extensive deposits of this rock type are found around the world, almost all of which are more than 1.85 billion years old and most of which were deposited around 2.5 Ga. The iron in banded iron formations is partially oxidized, with roughly equal amounts of ferrous and ferric iron. Deposition of a banded iron formation requires both an anoxic deep ocean capable of transporting iron in soluble ferrous form, and an oxidized shallow ocean where the ferrous iron is oxidized to insoluble ferric iron and precipitates onto the ocean floor. The deposition of banded iron formations before 1.8 Ga suggests the ocean was in a persistent ferruginous state, but deposition was episodic and there may have been significant intervals of euxinia. The transition from deposition of banded iron formations to manganese oxides in some strata has been considered a key tipping point in the timing of the GOE because it is believed to indicate the escape of significant molecular oxygen into the atmosphere in the absence of ferrous iron as a reducing agent. Iron speciation Black laminated shales, rich in organic matter, are often regarded as a marker for anoxic conditions. However, the deposition of abundant organic matter is not a sure indication of anoxia, and burrowing organisms that destroy lamination had not yet evolved during the time frame of the Great Oxygenation Event. Thus laminated black shale by itself is a poor indicator of oxygen levels. Scientists must look instead for geochemical evidence of anoxic conditions. These include ferruginous anoxia, in which dissolved ferrous iron is abundant, and euxinia, in which hydrogen sulfide is present in the water. Examples of such indicators of anoxic conditions include the degree of pyritization (DOP), which is the ratio of iron present as pyrite to the total reactive iron. Reactive iron, in turn, is defined as iron found in oxides and oxyhydroxides, carbonates, and reduced sulfur minerals such as pyrites, in contrast with iron tightly bound in silicate minerals. A DOP near zero indicates oxidizing conditions, while a DOP near 1 indicates euxinic conditions. Values of 0.3 to 0.5 are transitional, suggesting anoxic bottom mud under an oxygenated ocean. Studies of the Black Sea, which is considered a modern model for ancient anoxic ocean basins, indicate that high DOP, a high ratio of reactive iron to total iron, and a high ratio of total iron to aluminum are all indicators of transport of iron into a euxinic environment. Ferruginous anoxic conditions can be distinguished from euxenic conditions by a DOP less than about 0.7. 
The currently available evidence suggests that the deep ocean remained anoxic and ferruginous as late as 580 Ma, well after the Great Oxygenation Event, remaining just short of euxenic during much of this interval of time. Deposition of banded iron formation ceased when conditions of local euxenia on continental platforms and shelves began precipitating iron out of upwelling ferruginous water as pyrite. Isotopes Some of the most persuasive evidence for the Great Oxidation Event is provided by the mass-independent fractionation (MIF) of sulfur. The chemical signature of the MIF of sulfur is found prior to 2.4–2.3 Ga but disappears thereafter. The presence of this signature all but eliminates the possibility of an oxygenated atmosphere. Different isotopes of a chemical element have slightly different atomic masses. Most of the differences in geochemistry between isotopes of the same element scale with this mass difference. These include small differences in molecular velocities and diffusion rates, which are described as mass-dependent fractionation processes. By contrast, MIF describes processes that are not proportional to the difference in mass between isotopes. The only such process likely to be significant in the geochemistry of sulfur is photodissociation. This is the process in which a molecule containing sulfur is broken up by solar ultraviolet (UV) radiation. The presence of a clear MIF signature for sulfur prior to 2.4 Ga shows that UV radiation was penetrating deep into the Earth's atmosphere. This in turn rules out an atmosphere containing more than traces of oxygen, which would have produced an ozone layer that would have shielded the lower atmosphere from UV radiation. The disappearance of the MIF signature for sulfur indicates the formation of such an ozone shield as oxygen began to accumulate in the atmosphere. MIF of sulphur also indicates the presence of oxygen in that oxygen is required to facilitate repeated redox cycling of sulphur. MIF provides clues to the Great Oxygenation Event. For example, oxidation of manganese in surface rocks by atmospheric oxygen leads to further reactions that oxidize chromium. The heavier Cr is oxidized preferentially over the lighter Cr, and the soluble oxidized chromium carried into the ocean shows this enhancement of the heavier isotope. The chromium isotope ratio in banded iron formation suggests small but significant quantities of oxygen in the atmosphere before the Great Oxidation Event, and a brief return to low oxygen abundance 500 Ma after the GOE. However, the chromium data may conflict with the sulfur isotope data, which calls the reliability of the chromium data into question. It is also possible that oxygen was present earlier only in localized "oxygen oases". Since chromium is not easily dissolved, its release from rocks requires the presence of a powerful acid such as sulfuric acid (H2SO4) which may have formed through bacterial oxidation of pyrite. This could provide some of the earliest evidence of oxygen-breathing life on land surfaces. Other elements whose MIF may provide clues to the GOE include carbon, nitrogen, transitional metals such as molybdenum and iron, and non-metal elements such as selenium. Fossils and biomarkers While the GOE is generally thought to be a result of oxygenic photosynthesis by ancestral cyanobacteria, the presence of cyanobacteria in the Archaean before the GOE is a highly controversial topic. Structures that are claimed to be fossils of cyanobacteria exist in rock formed 3.5 Ga. 
These include microfossils of supposedly cyanobacterial cells and macrofossils called stromatolites, which are interpreted as colonies of microbes, including cyanobacteria, with characteristic layered structures. Modern stromatolites, which can only be seen in harsh environments such as Shark Bay in Western Australia, are associated with cyanobacteria, and thus fossil stromatolites had long been interpreted as the evidence for cyanobacteria. However, it has increasingly been inferred that at least some of these Archaean fossils were generated abiotically or produced by non-cyanobacterial phototrophic bacteria. Additionally, Archaean sedimentary rocks were once found to contain biomarkers, also known as chemical fossils, interpreted as fossilized membrane lipids from cyanobacteria and eukaryotes. For example, traces of 2α-methylhopanes and steranes that are thought to be derived from cyanobacteria and eukaryotes, respectively, were found in the Pilbara of Western Australia. Steranes are diagenetic products of sterols, which are biosynthesized using molecular oxygen. Thus, steranes can additionally serve as an indicator of oxygen in the atmosphere. However, these biomarker samples have since been shown to have been contaminated, and so the results are no longer accepted. Carbonaceous microfossils from the Turee Creek Group of Western Australia, which date back to ~2.45–2.21 Ga, have been interpreted as iron-oxidising bacteria. Their presence suggests a minimum threshold of seawater oxygen content had been reached by this interval of time. Other indicators Some elements in marine sediments are sensitive to different levels of oxygen in the environment such as the transition metals molybdenum and rhenium. Non-metal elements such as selenium and iodine are also indicators of oxygen levels. Hypotheses The ability to generate oxygen via photosynthesis likely first appeared in the ancestors of cyanobacteria. These organisms evolved at least 2.45–2.32 Ga and probably as early as 2.7 Ga or earlier. However, oxygen remained scarce in the atmosphere until around 2.0 Ga, and banded iron formation continued to be deposited until around 1.85 Ga. Given the rapid multiplication rate of cyanobacteria under ideal conditions, an explanation is needed for the delay of at least 400 million years between the evolution of oxygen-producing photosynthesis and the appearance of significant oxygen in the atmosphere. Hypotheses to explain this gap must take into consideration the balance between oxygen sources and oxygen sinks. Oxygenic photosynthesis produces organic carbon that must be segregated from oxygen to allow oxygen accumulation in the surface environment, otherwise the oxygen back-reacts with the organic carbon and does not accumulate. The burial of organic carbon, sulfide, and minerals containing ferrous iron (Fe) is a primary factor in oxygen accumulation. When organic carbon is buried without being oxidized, the oxygen is left in the atmosphere. In total, the burial of organic carbon and pyrite today creates of O per year. This creates a net O flux from the global oxygen sources. The rate of change of oxygen can be calculated from the difference between global sources and sinks. The oxygen sinks include reduced gases and minerals from volcanoes, metamorphism and weathering. The GOE started after these oxygen-sink fluxes and reduced-gas fluxes were exceeded by the flux of O2 associated with the burial of reductants, such as organic carbon. 
About of O per year today goes to the sinks composed of reduced minerals and gases from volcanoes, metamorphism, percolating seawater and heat vents from the seafloor. On the other hand, of O per year today oxidizes reduced gases in the atmosphere through photochemical reaction. On the early Earth, there was visibly very little oxidative weathering of continents (e.g., a lack of red beds), and so the weathering sink on oxygen would have been negligible compared to that from reduced gases and dissolved iron in oceans. Dissolved iron in oceans exemplifies O2 sinks. Free oxygen produced during this time was chemically captured by dissolved iron, converting iron Fe and Fe2+ to magnetite () that is insoluble in water, and sank to the bottom of the shallow seas to create banded iron formations. It took 50 million years or longer to deplete the oxygen sinks. The rate of photosynthesis and associated rate of organic burial also affect the rate of oxygen accumulation. When land plants spread over the continents in the Devonian, more organic carbon was buried and likely allowed higher O2 levels to occur. Today, the average time that an O molecule spends in the air before it is consumed by geological sinks is about 2 million years. That residence time is relatively short in geologic time; so in the Phanerozoic, there must have been feedback processes that kept the atmospheric O level within bounds suitable for animal life. Evolution by stages Preston Cloud originally proposed that the first cyanobacteria had evolved the capacity to carry out oxygen-producing photosynthesis but had not yet evolved enzymes (such as superoxide dismutase) for living in an oxygenated environment. These cyanobacteria would have been protected from their own poisonous oxygen waste through its rapid removal via the high levels of reduced ferrous iron, Fe(II), in the early ocean. He suggested that the oxygen released by photosynthesis oxidized the Fe(II) to ferric iron, Fe(III), which precipitated out of the sea water to form banded iron formation. He interpreted the great peak in deposition of banded iron formation at the end of the Archean as the signature for the evolution of mechanisms for living with oxygen. This ended self-poisoning and produced a population explosion in the cyanobacteria that rapidly oxygenated the ocean and ended banded iron formation deposition. However, improved dating of Precambrian strata showed that the late Archean peak of deposition was spread out over tens of millions of years, rather than taking place in a very short interval of time following the evolution of oxygen-coping mechanisms. This made Cloud's hypothesis untenable. Most modern interpretations describe the GOE as a long, protracted process that took place over hundreds of millions of years rather than a single abrupt event, with the quantity of atmospheric oxygen fluctuating in relation to the capacity of oxygen sinks and the productivity of oxygenic photosynthesisers over the course of the GOE. More recently, families of bacteria have been discovered that closely resemble cyanobacteria but show no indication of ever having possessed photosynthetic capability. These may be descended from the earliest ancestors of cyanobacteria, which only later acquired photosynthetic ability by lateral gene transfer. Based on molecular clock data, the evolution of oxygen-producing photosynthesis may have occurred much later than previously thought, at around 2.5 Ga. 
This reduces the gap between the evolution of oxygen photosynthesis and the appearance of significant atmospheric oxygen. Nutrient famines Another possibility is that early cyanobacteria were starved for vital nutrients, and this checked their growth. However, a lack of the scarcest nutrients, iron, nitrogen, and phosphorus, could have slowed but not prevented a cyanobacteria population explosion and rapid oxygenation. The explanation for the delay in the oxygenation of the atmosphere following the evolution of oxygen-producing photosynthesis likely lies in the presence of various oxygen sinks on the young Earth. Nickel famine Early chemosynthetic organisms likely produced methane, an important trap for molecular oxygen, since methane readily oxidizes to carbon dioxide (CO2) and water in the presence of UV radiation. Modern methanogens require nickel as an enzyme cofactor. As the Earth's crust cooled and the supply of volcanic nickel dwindled, oxygen-producing algae began to outperform methane producers, and the oxygen percentage of the atmosphere steadily increased. From 2.7 to 2.4 Ga the rate of deposition of nickel declined steadily from a level 400 times that of today. This nickel famine was somewhat buffered by an uptick in sulfide weathering at the start of the GOE that brought some nickel to the oceans, without which methanogenic organisms would have declined in abundance more precipitously, plunging Earth into even more severe and long-lasting icehouse conditions than those seen during the Huronian glaciation. Large igneous provinces Another hypothesis posits that a number of large igneous provinces (LIPs) were emplaced during the GOE and fertilised the oceans with limiting nutrients, facilitating and sustaining cyanobacterial blooms. Increasing flux One hypothesis argues that the GOE was the immediate result of photosynthesis, although the majority of scientists suggest that a long-term increase of oxygen is more likely. Several model results show possibilities of long-term increase of carbon burial, but the conclusions are indeterminate. Decreasing sink In contrast to the increasing flux hypothesis, there are several hypotheses that attempt to use decrease of sinks to explain the GOE. One theory suggests increasing lacustrine organic carbon burial as a cause; with more reduced carbon being buried, there was less of it for free oxygen to react with in the atmosphere and oceans, enabling its buildup. A different theory suggests that the composition of the volatiles from volcanic gases was more oxidized. Another theory suggests that the decrease of metamorphic gases and serpentinization is the main key of GOE. Hydrogen and methane released from metamorphic processes are also lost from Earth's atmosphere over time and leave the crust oxidized. Scientists realized that hydrogen would escape into space through a process called methane photolysis, in which methane decomposes under the action of ultraviolet light in the upper atmosphere and releases its hydrogen. The escape of hydrogen from the Earth into space must have oxidized the Earth because the process of hydrogen loss is chemical oxidation. This process of hydrogen escape required the generation of methane by methanogens, so that methanogens actually helped create the conditions necessary for the oxidation of the atmosphere. 
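The hydrogen-escape argument at the end of the previous paragraph is often summarised with an idealised reaction bookkeeping. The scheme below is a common simplification introduced here for clarity, not a set of equations taken from the source.

\begin{align*}
&\mathrm{CO_2 + 2\,H_2O \rightarrow CH_4 + 2\,O_2} &&\text{(oxygenic photosynthesis followed by methanogenesis)}\\
&\mathrm{CH_4 \rightarrow C + 4\,H\uparrow} &&\text{(UV photolysis; hydrogen escapes to space)}\\
&\mathrm{C + O_2 \rightarrow CO_2} &&\text{(carbon reoxidised at the surface)}\\
&\text{Net: } \mathrm{2\,H_2O \rightarrow O_2 + 4\,H\uparrow} &&\text{(irreversible oxidation as hydrogen is lost)}
\end{align*}

Because the escaped hydrogen cannot recombine with the leftover oxygen, the planet's surface is left permanently more oxidized, which is the sense in which methanogens "helped create the conditions" for oxygenation.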
Tectonic trigger One hypothesis suggests that the oxygen increase had to await tectonically driven changes in the Earth, including the appearance of shelf seas, where reduced organic carbon could reach the sediments and be buried. The burial of reduced carbon as graphite or diamond around subduction zones released molecular oxygen into the atmosphere. The appearance of oxidised magmas enriched in sulphur formed around subduction zones confirms changes in tectonic regime played an important role in the oxygenation of Earth's atmosphere. The newly produced oxygen was first consumed in various chemical reactions in the oceans, primarily with iron. Evidence is found in older rocks that contain massive banded iron formations apparently laid down as this iron and oxygen first combined; most present-day iron ore lies in these deposits. It was assumed oxygen released from cyanobacteria resulted in the chemical reactions that created rust, but it appears the iron formations were caused by anoxygenic phototrophic iron-oxidizing bacteria, which does not require oxygen. Evidence suggests oxygen levels spiked each time smaller land masses collided to form a super-continent. Tectonic pressure thrust up mountain chains, which eroded releasing nutrients into the ocean that fed photosynthetic cyanobacteria. Bistability Another hypothesis posits a model of the atmosphere that exhibits bistability: two steady states of oxygen concentration. The state of stable low oxygen concentration (0.02%) experiences a high rate of methane oxidation. If some event raises oxygen levels beyond a moderate threshold, the formation of an ozone layer shields UV rays and decreases methane oxidation, raising oxygen further to a stable state of 21% or more. The Great Oxygenation Event can then be understood as a transition from the lower to the upper steady states. Increasing photoperiod Cyanobacteria tend to consume nearly as much oxygen at night as they produce during the day. However, experiments demonstrate that cyanobacterial mats produce a greater excess of oxygen with longer photoperiods. The rotational period of the Earth was only about six hours shortly after its formation 4.5 Ga but increased to 21 hours by 2.4 Ga in the Paleoproterozoic. The rotational period increased again, starting 700 million years ago, to its present value of 24 hours. The total amount of oxygen produced by the cyanobacteria remained the same with longer days, but the longer the day, the more time oxygen has to diffuse into the water. Consequences of oxygenation Eventually, oxygen started to accumulate in the atmosphere, with two major consequences. Oxygen likely oxidized atmospheric methane (a strong greenhouse gas) to carbon dioxide (a weaker one) and water. This weakened the greenhouse effect of the Earth's atmosphere, causing planetary cooling, which has been proposed to have triggered a series of ice ages known as the Huronian glaciation, bracketing an age range of 2.45–2.22 Ga. The increased oxygen concentrations provided a new opportunity for biological diversification, as well as tremendous changes in the nature of chemical interactions between rocks, sand, clay, and other geological substrates and the Earth's air, oceans, and other surface waters. Despite the natural recycling of organic matter, life had remained energetically limited until the widespread availability of oxygen. The availability of oxygen greatly increased the free energy available to living organisms, with global environmental impacts. 
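The two-steady-state (bistability) hypothesis described a little earlier in this section can be caricatured with a one-variable toy model in which the oxygen sink weakens abruptly once an ozone-forming threshold is crossed. The parameter values below are arbitrary and chosen only so that the two stable states sit near the 0.02% and 21% levels quoted above; this is an illustration, not the published model.

# Toy bistable oxygen model (illustrative only; arbitrary units and parameters).
SOURCE = 1.0        # constant oxygen source from photosynthesis and burial
K_HIGH = 50.0       # strong sink below the threshold (efficient methane oxidation)
K_LOW = 1.0 / 21.0  # weak sink above the threshold (ozone layer shields UV)
THRESHOLD = 2.0     # oxygen level at which ozone shielding becomes effective

def d_o2_dt(o2):
    sink_rate = K_HIGH if o2 < THRESHOLD else K_LOW
    return SOURCE - sink_rate * o2

def settle(o2, dt=0.001, steps=200_000):
    """Crude forward-Euler integration until the system reaches a steady state."""
    for _ in range(steps):
        o2 = max(o2 + d_o2_dt(o2) * dt, 0.0)
    return o2

print(settle(0.001))  # converges to SOURCE / K_HIGH = 0.02 (low-oxygen state)
print(settle(5.0))    # converges to SOURCE / K_LOW = 21.0 (high-oxygen state)

Starting below the threshold the system settles into the low-oxygen state, while any event that pushes it past the threshold lets it run up to the high-oxygen state, which is the sense in which the GOE can be read as a transition between steady states.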
For example, mitochondria evolved after the GOE, giving organisms the energy to exploit new, more complex morphologies interacting in increasingly complex ecosystems, although these did not appear until the late Proterozoic and Cambrian. Mineral diversification The Great Oxygenation Event triggered an explosive growth in the diversity of minerals, with many elements occurring in one or more oxidized forms near the Earth's surface. It is estimated that the GOE was directly responsible for deposition of more than 2,500 of the total of about 4,500 minerals found on Earth today. Most of these new minerals were formed as hydrated and oxidized forms due to dynamic mantle and crust processes. Cyanobacteria evolution In field studies done in Lake Fryxell, Antarctica, scientists found that mats of oxygen-producing cyanobacteria produced a thin layer, one to two millimeters thick, of oxygenated water in an otherwise anoxic environment, even under thick ice. By inference, these organisms could have adapted to oxygen even before oxygen accumulated in the atmosphere. The evolution of such oxygen-dependent organisms eventually established an equilibrium in the availability of oxygen, which became a major constituent of the atmosphere. Origin of eukaryotes It has been proposed that a local rise in oxygen levels due to cyanobacterial photosynthesis in ancient microenvironments was highly toxic to the surrounding biota and that this selective pressure drove the evolutionary transformation of an archaeal lineage into the first eukaryotes. Oxidative stress involving production of reactive oxygen species (ROS) might have acted in synergy with other environmental stresses (such as ultraviolet radiation and desiccation) to drive selection in an early archaeal lineage towards eukaryosis. This archaeal ancestor may already have had DNA repair mechanisms based on DNA pairing and recombination, and possibly some cell fusion mechanism. The detrimental effects of internal ROS (produced by endosymbiont proto-mitochondria) on the archaeal genome could have promoted the evolution of meiotic sex from these humble beginnings. Selective pressure for efficient DNA repair of oxidative DNA damage may have driven the evolution of eukaryotic sex involving such features as cell-cell fusions, cytoskeleton-mediated chromosome movements, and the emergence of the nuclear membrane. Thus, the evolution of eukaryotic sex and eukaryogenesis were likely inseparable processes that largely evolved to facilitate DNA repair. The evolution of mitochondria, which are well suited for oxygenated environments, may have occurred during the GOE. However, other authors express skepticism that the GOE resulted in widespread eukaryotic diversification due to the lack of robust evidence, concluding that the oxygenation of the oceans and atmosphere does not necessarily lead to increases in ecological and physiological diversity. Lomagundi-Jatuli event The rise in oxygen content was not linear: instead, there was a rise in oxygen content around 2.3 Ga, followed by a drop around 2.1 Ga. This rise in oxygen is called the Lomagundi-Jatuli event, Lomagundi event, or Lomagundi-Jatuli excursion (named for a district of Southern Rhodesia) and the time period has been termed Jatulian; it is currently considered to be part of the Rhyacian period. 
During the Lomagundi-Jatuli event, oxygen amounts in the atmosphere reached similar heights to modern levels, before returning to low levels during the following stage, which caused the deposition of black shales (rocks that contain large amounts of organic matter that would otherwise have been burned away by oxygen). This drop in oxygen levels is called the . Evidence for the event has been found globally in places such as Fennoscandia and the Wyoming Craton. Oceans seem to have stayed rich in oxygen for some time even after the event ended. It has been hypothesized that eukaryotes first evolved during the Lomagundi-Jatuli event.
Physical sciences
Geological history
null
3270043
https://en.wikipedia.org/wiki/Electric%20power
Electric power
Electric power is the rate of transfer of electrical energy within a circuit. Its SI unit is the watt, the general unit of power, defined as one joule per second. Standard prefixes apply to watts as with other SI units: thousands, millions and billions of watts are called kilowatts, megawatts and gigawatts respectively. In common parlance, electric power is the production and delivery of electrical energy, an essential public utility in much of the world. Electric power is usually produced by electric generators, but can also be supplied by sources such as electric batteries. It is usually supplied to businesses and homes (as domestic mains electricity) by the electric power industry through an electrical grid. Electric power can be delivered over long distances by transmission lines and used for applications such as motion, light or heat with high efficiency. Definition Electric power, like mechanical power, is the rate of doing work, measured in watts, and represented by the letter P. The term wattage is used colloquially to mean "electric power in watts". The electric power in watts produced by an electric current I consisting of a charge of Q coulombs every t seconds passing through an electric potential (voltage) difference of V is: P = W/t = QV/t = IV, where W is work in joules, t is time in seconds, Q is electric charge in coulombs, V is electric potential or voltage in volts, and I is electric current in amperes. I.e., watts = volts times amps. Explanation Electric power is transformed to other forms of energy when electric charges move through an electric potential difference (voltage), which occurs in electrical components in electric circuits. From the standpoint of electric power, components in an electric circuit can be divided into two categories: Active devices (power sources) If electric current is forced to flow through the device in the direction from the lower electric potential to the higher, so positive charges move from the negative to the positive terminal, work will be done on the charges, and energy is being converted to electric potential energy from some other type of energy, such as mechanical energy or chemical energy. Devices in which this occurs are called active devices or power sources, such as electric generators and batteries. Some devices can be either a source or a load, depending on the voltage and current through them. For example, a rechargeable battery acts as a source when it provides power to a circuit, but as a load when it is connected to a battery charger and is being recharged. Passive devices (loads) If conventional current flows through the device in a direction from higher potential (voltage) to lower potential, so positive charge moves from the positive (+) terminal to the negative (−) terminal, work is done by the charges on the device. The potential energy of the charges due to the voltage between the terminals is converted to kinetic energy in the device. These devices are called passive components or loads; they 'consume' electric power from the circuit, converting it to other forms of energy such as mechanical work, heat, light, etc. Examples are electrical appliances, such as light bulbs, electric motors, and electric heaters. In alternating current (AC) circuits the direction of the voltage periodically reverses, but the current always flows from the higher potential to the lower potential side. Passive sign convention Since electric power can flow either into or out of a component, a convention is needed for which direction represents positive power flow. 
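As a quick numerical illustration of the definition above (with made-up example values), the same power is obtained whether one starts from charge and work or directly from current and voltage; the sign bookkeeping for sources versus loads is formalised by the convention described in the next paragraph.

# Numerical check of P = W/t = QV/t = IV with made-up example values.
V = 230.0  # volts across the device
I = 2.0    # amperes through the device
t = 60.0   # seconds of operation

Q = I * t          # charge transferred, in coulombs
W = Q * V          # work done on the charges, in joules
print(W / t, "W")  # 460.0 W, from energy per unit time
print(I * V, "W")  # 460.0 W, the same value from current times voltage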
Electric power flowing out of a circuit into a component is arbitrarily defined to have a positive sign, while power flowing into a circuit from a component is defined to have a negative sign. Thus passive components have positive power consumption, while power sources have negative power consumption. This is called the passive sign convention. Resistive circuits In the case of resistive (Ohmic, or linear) loads, the power formula (P = I·V) and Joule's first law (P = I^2·R) can be combined with Ohm's law (V = I·R) to produce alternative expressions for the amount of power that is dissipated: P = I·V = I^2·R = V^2/R, where R is the electrical resistance. Alternating current without harmonics In alternating current circuits, energy storage elements such as inductance and capacitance may result in periodic reversals of the direction of energy flow. The portion of energy flow (power) that, averaged over a complete cycle of the AC waveform, results in net transfer of energy in one direction is known as real power (also referred to as active power). The amplitude of that portion of energy flow (power) that results in no net transfer of energy but instead oscillates between the source and load in each cycle due to stored energy, is known as the absolute value of reactive power. The product of the RMS value of the voltage wave and the RMS value of the current wave is known as apparent power. The real power P in watts consumed by a device is given by P = (1/2)·Vp·Ip·cos θ = Vrms·Irms·cos θ, where Vp is the peak voltage in volts, Ip is the peak current in amperes, Vrms is the root-mean-square voltage in volts, Irms is the root-mean-square current in amperes, and θ = θv − θi is the phase angle by which the voltage sine wave leads the current sine wave, or equivalently the phase angle by which the current sine wave lags the voltage sine wave. The relationship between real power, reactive power and apparent power can be expressed by representing the quantities as vectors. Real power is represented as a horizontal vector and reactive power is represented as a vertical vector. The apparent power vector is the hypotenuse of a right triangle formed by connecting the real and reactive power vectors. This representation is often called the power triangle. Using the Pythagorean Theorem, the relationship among real, reactive and apparent power is: (apparent power)^2 = (real power)^2 + (reactive power)^2. Real and reactive powers can also be calculated directly from the apparent power, when the current and voltage are both sinusoids with a known phase angle θ between them: (real power) = (apparent power)·cos θ and (reactive power) = (apparent power)·sin θ. The ratio of real power to apparent power is called power factor and is a number always between −1 and 1. Where the currents and voltages have non-sinusoidal forms, power factor is generalized to include the effects of distortion. Electromagnetic fields Electrical energy flows wherever electric and magnetic fields exist together and fluctuate in the same place. The simplest example of this is in electrical circuits, as the preceding section showed. In the general case, however, the simple equation P = IV may be replaced by a more complex calculation. The closed surface integral of the cross-product of the electric field intensity and magnetic field intensity vectors gives the total instantaneous power (in watts) out of the volume: P = ∮ (E × H) · dA. The result is a scalar since it is the surface integral of the Poynting vector. Production Generation The fundamental principles of much electricity generation were discovered during the 1820s and early 1830s by the British scientist Michael Faraday. 
His basic method is still used today: electric current is generated by the movement of a loop of wire, or disc of copper between the poles of a magnet. For electric utilities, it is the first process in the delivery of electricity to consumers. The other processes, electricity transmission, distribution, and electrical energy storage and recovery using pumped-storage methods are normally carried out by the electric power industry. Electricity is mostly generated at a power station by electromechanical generators, driven by heat engines heated by combustion, geothermal power or nuclear fission. Other generators are driven by the kinetic energy of flowing water and wind. There are many other technologies that are used to generate electricity such as photovoltaic solar panels. A battery is a device consisting of one or more electrochemical cells that convert stored chemical energy into electrical energy. Since the invention of the first battery (or "voltaic pile") in 1800 by Alessandro Volta and especially since the technically improved Daniell cell in 1836, batteries have become a common power source for many household and industrial applications. According to a 2005 estimate, the worldwide battery industry generates US$48 billion in sales each year, with 6% annual growth. There are two types of batteries: primary batteries (disposable batteries), which are designed to be used once and discarded, and secondary batteries (rechargeable batteries), which are designed to be recharged and used multiple times. Batteries are available in many sizes; from miniature button cells used to power hearing aids and wristwatches to battery banks the size of rooms that provide standby power for telephone exchanges and computer data centers. Electric power industry The electric power industry provides the production and delivery of power, in sufficient quantities to areas that need electricity, through a grid connection. The grid distributes electrical energy to customers. Electric power is generated by central power stations or by distributed generation. The electric power industry has gradually been trending towards deregulation – with emerging players offering consumers competition to the traditional public utility companies. Uses Electric power, produced from central generating stations and distributed over an electrical transmission grid, is widely used in industrial, commercial, and consumer applications. A country's per capita electric power consumption correlates with its industrial development. Electric motors power manufacturing machinery and propel subways and railway trains. Electric lighting is the most important form of artificial light. Electrical energy is used directly in processes such as extraction of aluminum from its ores and in production of steel in electric arc furnaces. Reliable electric power is essential to telecommunications and broadcasting. Electric power is used to provide air conditioning in hot climates, and in some places, electric power is an economically competitive energy source for building space heating. The use of electric power for pumping water ranges from individual household wells to irrigation and energy storage projects.
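Returning to the sinusoidal relations given in the "Alternating current without harmonics" section above, the real, reactive and apparent power and the power factor follow directly from the RMS voltage, RMS current and phase angle. The values below are illustrative only, not figures from the source.

import math

# Illustrative values, not taken from the article.
V_rms = 230.0             # RMS voltage in volts
I_rms = 5.0               # RMS current in amperes
theta = math.radians(30)  # phase angle between voltage and current

S = V_rms * I_rms        # apparent power, volt-amperes
P = S * math.cos(theta)  # real power, watts
Q = S * math.sin(theta)  # reactive power, volt-amperes reactive

print(f"S = {S:.1f} VA, P = {P:.1f} W, Q = {Q:.1f} var")
print(f"power factor = {P / S:.3f}")
print(f"power triangle check: sqrt(P^2 + Q^2) = {math.hypot(P, Q):.1f} VA")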
Physical sciences
Electrodynamics
Physics
400199
https://en.wikipedia.org/wiki/Weight%20loss
Weight loss
Weight loss, in the context of medicine, health, or physical fitness, refers to a reduction of the total body mass, by a mean loss of fluid, body fat (adipose tissue), or lean mass (namely bone mineral deposits, muscle, tendon, and other connective tissue). Weight loss can either occur unintentionally because of malnourishment or an underlying disease, or from a conscious effort to improve an actual or perceived overweight or obese state. "Unexplained" weight loss that is not caused by reduction in calorific intake or increase in exercise is called cachexia and may be a symptom of a serious medical condition. Intentional Intentional weight loss is the loss of total body mass as a result of efforts to improve fitness and health, or to change appearance through slimming. Weight loss is the main treatment for obesity, and there is substantial evidence this can prevent progression from prediabetes to type 2 diabetes with a 7–10% weight loss and manage cardiometabolic health for diabetic people with a 5–15% weight loss. Weight loss in individuals who are overweight or obese can reduce health risks, increase fitness, and may delay the onset of diabetes. It could reduce pain and increase movement in people with osteoarthritis of the knee. Weight loss can lead to a reduction in hypertension (high blood pressure), however whether this reduces hypertension-related harm is unclear. Weight loss is achieved by adopting a lifestyle in which fewer calories are consumed than are expended. Depression, stress or boredom may contribute to unwanted weight gain or loss depending on the individual, and in these cases, individuals are advised to seek medical help. A 2010 study found that dieters who got a full night's sleep lost more than twice as much fat as sleep-deprived dieters. Though hypothesized that supplementation of vitamin D may help, studies do not support this. The majority of dieters regain weight over the long term. According to the UK National Health Service and the Dietary Guidelines for Americans, those who achieve and manage a healthy weight do so most successfully by being careful to consume just enough calories to meet their needs, and being physically active. For weight loss to be permanent, changes in diet and lifestyle must be permanent as well. There is evidence that counseling or exercise alone do not result in weight loss, whereas dieting alone results in meaningful long-term weight loss, and a combination of dieting and exercise provides the best results. Meal replacements, orlistat, a very-low-calorie diet, and primary care intensive medical interventions can also support meaningful weight loss. Techniques Diet and exercise The least intrusive weight loss methods, and those most often recommended, are adjustments to eating patterns and increased physical activity, generally in the form of exercise. The World Health Organization recommends that people combine a reduction of processed foods high in saturated fats, sugar and salt, and reduced caloric intake with an increase in physical activity. Both long-term exercise programs and anti-obesity medications reduce abdominal fat volume. Self-monitoring of diet, exercise, and weight are beneficial strategies for weight loss, particularly early in weight loss programs. Research indicates that those who log their foods about three times per day and about 20 times per month are more likely to achieve clinically significant weight loss. 
Permanent weight loss depends on maintaining a negative energy balance and not the type of macronutrients (such as carbohydrate) consumed. High protein diets have shown greater efficacy in the short term (under 12 months) for people eating ad libitum due to increased thermogenesis and satiety, however this effect tends to dissipate over time. Hydration Increasing water intake can reduce weight by increasing thermogenesis, by reducing food intake, and by increasing fat oxidation. Persons dieting for weight loss have demonstrated the weight-reducing effects of increased water consumption. Among adults in the U.S. there is a significant association between inadequate hydration and obesity. Medications Other methods of weight loss include use of anti-obesity drugs that decrease appetite, block fat absorption, or reduce stomach volume. Obesity has been resistant to drug-based therapies, with a 2021 review stating that existing medications are "often delivering insufficient efficacy and dubious safety". Semaglutide has also become popular recently as an aid in weight loss. It is particularly beneficial for those with type 2 diabetes and obesity. Bariatric surgery Bariatric surgery may be indicated in cases of severe obesity. Two common bariatric surgical procedures are gastric bypass and gastric banding. Both can be effective at limiting the intake of food energy by reducing the size of the stomach, but as with any surgical procedure both come with their own risks that should be considered in consultation with a physician. Weight loss industry There is a substantial market for products which claim to make weight loss easier, quicker, cheaper, more reliable, or less painful. These include books, DVDs, CDs, cremes, lotions, pills, rings and earrings, body wraps, body belts and other materials, fitness centers, clinics, personal coaches, weight loss groups, and food products and supplements. Dietary supplements, though widely used, are not considered a healthy option for weight loss, and have no clinical evidence of efficacy. Herbal products have not been shown to be effective. In 2008, between US$33 billion and $55 billion was spent annually in the US on weight-loss products and services, including medical procedures and pharmaceuticals, with weight-loss centers taking between 6 and 12 percent of total annual expenditure. Over $1.6 billion per year was spent on weight-loss supplements. About 70 percent of Americans' dieting attempts are of a self-help nature. In Western Europe, sales of weight-loss products, excluding prescription medications, topped €1,25 billion (£900 million/$1.4 billion) in 2009. The scientific soundness of commercial diets by commercial weight management organizations varies widely, being previously non-evidence-based, so there is only limited evidence supporting their use, because of high attrition rates. Commercial diets result in modest weight loss in the long term, with similar results regardless of the brand, and similarly to non-commercial diets and standard care. Comprehensive diet programs, providing counseling and targets for calorie intake, are more efficient than dieting without guidance ("self-help"), although the evidence is very limited. The National Institute for Health and Care Excellence devised a set of essential criteria to be met by commercial weight management organizations to be approved. Unintentional Characteristics Unintentional weight loss may result from loss of body fats, loss of body fluids, muscle atrophy, or a combination of these. 
It is generally regarded as a medical problem when at least 10% of a person's body weight has been lost in six months or 5% in the last month. Another criterion used for assessing weight that is too low is the body mass index (BMI). However, even lesser amounts of weight loss can be a cause for serious concern in a frail elderly person. Unintentional weight loss can occur because of an inadequately nutritious diet relative to a person's energy needs (generally called malnutrition). Disease processes, changes in metabolism, hormonal changes, medications or other treatments, disease- or treatment-related dietary changes, or reduced appetite associated with a disease or treatment can also cause unintentional weight loss. Poor nutrient utilization can lead to weight loss, and can be caused by fistulae in the gastrointestinal tract, diarrhea, drug-nutrient interaction, enzyme depletion and muscle atrophy. Continuing weight loss may deteriorate into wasting, a vaguely defined condition called cachexia. Cachexia differs from starvation in part because it involves a systemic inflammatory response. It is associated with poorer outcomes. In the advanced stages of progressive disease, metabolism can change so that they lose weight even when they are getting what is normally regarded as adequate nutrition and the body cannot compensate. This leads to a condition called anorexia cachexia syndrome (ACS) and additional nutrition or supplementation is unlikely to help. Symptoms of weight loss from ACS include severe weight loss from muscle rather than body fat, loss of appetite and feeling full after eating small amounts, nausea, anemia, weakness and fatigue. Serious weight loss may reduce quality of life, impair treatment effectiveness or recovery, worsen disease processes and be a risk factor for high mortality rates. Malnutrition can affect every function of the human body, from the cells to the most complex body functions, including: immune response; wound healing; muscle strength (including respiratory muscles); renal capacity and depletion leading to water and electrolyte disturbances; thermoregulation; and menstruation. Malnutrition can lead to vitamin and other deficiencies and to inactivity, which in turn may pre-dispose to other problems, such as pressure sores. Unintentional weight loss can be the characteristic leading to diagnosis of diseases such as cancer and type 1 diabetes. In the UK, up to 5% of the general population is underweight, but more than 10% of those with lung or gastrointestinal diseases and who have recently had surgery. According to data in the UK using the Malnutrition Universal Screening Tool ('MUST'), which incorporates unintentional weight loss, more than 10% of the population over the age of 65 is at risk of malnutrition. A high proportion (10–60%) of hospital patients are also at risk, along with a similar proportion in care homes. Causes Disease-related Disease-related malnutrition can be considered in four categories: Weight loss issues related to specific diseases include: As chronic obstructive pulmonary disease (COPD) advances, about 35% of patients experience severe weight loss called pulmonary cachexia, including diminished muscle mass. Around 25% experience moderate to severe weight loss, and most others have some weight loss. Greater weight loss is associated with poorer prognosis. 
Theories about contributing factors include appetite loss related to reduced activity, additional energy required for breathing, and the difficulty of eating with dyspnea (labored breathing). Cancer, a very common and sometimes fatal cause of unexplained (idiopathic) weight loss. About one-third of unintentional weight loss cases are secondary to malignancy. Cancers to suspect in patients with unexplained weight loss include gastrointestinal, prostate, hepatobiliary (hepatocellular carcinoma, pancreatic cancer), ovarian, hematologic or lung malignancies. People with HIV often experience weight loss, and it is associated with poorer outcomes. Wasting syndrome is an AIDS-defining condition. Gastrointestinal disorders are another common cause of unexplained weight loss – in fact they are the most common non-cancerous cause of idiopathic weight loss. Possible gastrointestinal etiologies of unexplained weight loss include: celiac disease, peptic ulcer disease, inflammatory bowel disease (crohn's disease and ulcerative colitis), pancreatitis, gastritis, diarrhea, chronic mesenteric ischemia and many other GI conditions. Infection. Some infectious diseases can cause weight loss. Fungal illnesses, endocarditis, many parasitic diseases, AIDS, Whipple's disease and some other subacute or occult infections may cause weight loss. Renal disease. Patients who have uremia often have poor or absent appetite, vomiting and nausea. This can cause weight loss. Cardiac disease. Cardiovascular disease, especially congestive heart failure, may cause unexplained weight loss. Connective tissue disease Oral, taste or dental problems (including infections) can reduce nutrient intake leading to weight loss. Therapy-related Medical treatment can directly or indirectly cause weight loss, impairing treatment effectiveness and recovery that can lead to further weight loss in a vicious cycle. Many patients will be in pain and have a loss of appetite after surgery. Part of the body's response to surgery is to direct energy to wound healing, which increases the body's overall energy requirements. Surgery affects nutritional status indirectly, particularly during the recovery period, as it can interfere with wound healing and other aspects of recovery. Surgery directly affects nutritional status if a procedure permanently alters the digestive system. Enteral nutrition (tube feeding) is often needed. However a policy of 'nil by mouth' for all gastrointestinal surgery has not been shown to benefit, with some weak evidence suggesting it might hinder recovery. Early post-operative nutrition is a part of Enhanced Recovery After Surgery protocols. These protocols also include carbohydrate loading in the 24 hours before surgery, but earlier nutritional interventions have not been shown to have a significant impact. Social conditions Social conditions such as poverty, social isolation and inability to get or prepare preferred foods can cause unintentional weight loss, and this may be particularly common in older people. Nutrient intake can also be affected by culture, family and belief systems. Ill-fitting dentures and other dental or oral health problems can also affect adequacy of nutrition. Loss of hope, status or social contact and spiritual distress can cause depression, which may be associated with reduced nutrition, as can fatigue. Myths Some popular beliefs attached to weight loss have been shown to either have less effect on weight loss than commonly believed or are actively unhealthy. 
According to Harvard Health, the idea of metabolic rate being the "key to weight" is "part truth and part myth" as while metabolism does affect weight loss, external forces such as diet and exercise have an equal effect. They also commented that the idea of changing one's rate of metabolism is under debate. Diet plans in fitness magazines are also often believed to be effective but may actually be harmful by limiting the daily intake of important calories and nutrients which can be detrimental depending on the person and are even capable of driving individuals away from weight loss. Health effects Obesity is a risk factor for certain conditions, including diabetes, cancer, cardiovascular disease, high blood pressure, and non-alcoholic fatty liver disease. Reduction of obesity lowers those risks. A loss of body weight has been associated with an approximate drop in blood pressure. Intentional weight loss is associated with cognitive performance improvements in overweight and obese individuals.
Biology and health sciences
Health and fitness
null
400339
https://en.wikipedia.org/wiki/Phenol%20formaldehyde%20resin
Phenol formaldehyde resin
Phenol formaldehyde resins (PF), also called phenolic resins or phenoplasts, are synthetic polymers obtained by the reaction of phenol or substituted phenol with formaldehyde. Used as the basis for Bakelite, PFs were the first commercial synthetic resins. They have been widely used for the production of molded products including billiard balls, laboratory countertops, and as coatings and adhesives. They were at one time the primary material used for the production of circuit boards but have been largely replaced with epoxy resins and fiberglass cloth, as with fire-resistant FR-4 circuit board materials. There are two main production methods. One reacts phenol and formaldehyde directly to produce a thermosetting network polymer, while the other restricts the formaldehyde to produce a prepolymer known as novolac which can be moulded and then cured with the addition of more formaldehyde and heat. There are many variations in both production and input materials that are used to produce a wide variety of resins for special purposes. Formation and structure Phenol-formaldehyde resins, as a group, are formed by a step-growth polymerization reaction that can be either acid- or base-catalysed. Since formaldehyde exists predominantly in solution as a dynamic equilibrium of methylene glycol oligomers, the concentration of the reactive form of formaldehyde depends on temperature and pH. Phenol reacts with formaldehyde at the ortho and para sites (sites 2, 4 and 6) allowing up to 3 units of formaldehyde to attach to the ring. The initial reaction in all cases involves the formation of a hydroxymethyl phenol: HOC6H5 + CH2O → HOC6H4CH2OH The hydroxymethyl group is capable of reacting with either another free ortho or para site, or with another hydroxymethyl group. The first reaction gives a methylene bridge, and the second forms an ether bridge: HOC6H4CH2OH + HOC6H5 → (HOC6H4)2CH2 + H2O 2 HOC6H4CH2OH → (HOC6H4CH2)2O + H2O The diphenol (HOC6H4)2CH2 (sometimes called a "dimer") is called bisphenol F, which is an important monomer in the production of epoxy resins. Bisphenol-F can further link generating tri- and tetra-and higher phenol oligomers. Novolaks Novolaks (or novolacs) are phenol-formaldehyde resins with a formaldehyde to phenol molar ratio of less than one. In place of phenol itself, they are often produced from cresols (methylphenols). The polymerization is brought to completion using acid-catalysis such as sulfuric acid, oxalic acid, hydrochloric acid and rarely, sulfonic acids. The phenolic units are mainly linked by methylene and/or ether groups. The molecular weights are in the low thousands, corresponding to about 10–20 phenol units. Obtained polymer is thermoplastic and require a curing agent or hardener to form a thermoset. Hexamethylenetetramine is a hardener added to crosslink novolac. At a temperature greater than 90 °C, it forms methylene and dimethylene amino bridges. Resoles can also be used as a curing agent (hardener) for novolac resins. In either case, the curing agent is a source of formaldehyde which provides bridges between novolac chains, eventually completely crosslinking the system. Novolacs have multiple uses as tire tackifier, high temperature resin, binder for carbon bonded refractories, carbon brakes, photoresists and as a curing agent for epoxy resins. Resoles Base-catalysed phenol-formaldehyde resins are made with a formaldehyde to phenol ratio of greater than one (usually around 1.5). These resins are called resoles. 
Phenol, formaldehyde, water and catalyst are mixed in the desired amount, depending on the resin to be formed, and are then heated. The first part of the reaction, at around 70 °C, forms a thick reddish-brown tacky material, which is rich in hydroxymethyl and benzylic ether groups. The rate of the base-catalysed reaction initially increases with pH, and reaches a maximum at about pH = 10. The reactive species is the phenoxide anion (C6H5O−) formed by deprotonation of phenol. The negative charge is delocalised over the aromatic ring, activating sites 2, 4 and 6, which then react with the formaldehyde. Being thermosets, hydroxymethyl phenols will crosslink on heating to around 120 °C to form methylene and methyl ether bridges through the elimination of water molecules. At this point the resin is a 3-dimensional network, which is typical of polymerised phenolic resins. The high crosslinking gives this type of phenolic resin its hardness, good thermal stability, and chemical imperviousness. Resoles are referred to as "one step" resins as they cure without a cross linker unlike novolacs, a "two step" resin. Resoles are major polymeric resin materials widely used for gluing and bonding building materials. Exterior plywood, oriented strand boards (OSB), engineered high-pressure laminate are typical applications. Crosslinking and the formaldehyde/phenol ratio When the molar ratio of formaldehyde:phenol reaches one, in theory every phenol is linked together via methylene bridges, generating one single molecule, and the system is entirely crosslinked. This is why novolacs (F:P <1) do not harden without the addition of a crosslinking agents, and why resoles with the formula F:P >1 will. Applications Phenolic resins are found in myriad industrial products. Phenolic laminates are made by impregnating one or more layers of a base material such as paper, fiberglass, or cotton with phenolic resin and laminating the resin-saturated base material under heat and pressure. The resin fully polymerizes (cures) during this process forming the thermoset polymer matrix. The base material choice depends on the intended application of the finished product. Paper phenolics are used in manufacturing electrical components such as punch-through boards, in household laminates, and in paper composite panels. Glass phenolics are particularly well suited for use in the high speed bearing market. Phenolic micro-balloons are used for density control. The binding agent in normal (organic) brake pads, brake shoes, and clutch discs are phenolic resin. Synthetic resin bonded paper, made from phenolic resin and paper, is used to make countertops. Another use of phenolic resins is the making of duroplast, famously used in Trabant automobiles. Phenolic resins are also used for making exterior plywood commonly known as weather and boil proof (WBP) plywood because phenolic resins have no melting point but only a decomposing point in the temperature zone of and above. Phenolic resin is used as a binder in loudspeaker driver suspension components which are made of cloth. Higher end billiard balls are made from phenolic resins, as opposed to the polyesters used in less expensive sets. Sometimes people select fibre reinforced phenolic resin parts because their coefficient of thermal expansion closely matches that of the aluminium used for other parts of a system, as in early computer systems and Duramold. 
The Dutch painting forger Han van Meegeren mixed phenol formaldehyde with his oil paints before baking the finished canvas, in order to fake the drying out of the paint over the centuries. Atmospheric re-entry spacecraft use phenol formaldehyde resin as a key component in ablative heat shields (e.g. AVCOAT on the Apollo modules). As the heat shield skin temperature can reach 1000–2000 °C, the resin pyrolyzes due to aerodynamic heating. This reaction absorbs significant thermal energy, insulating the deeper layers of the heat shield. The outgassing of pyrolysis reaction products and the removal of charred material by friction (ablation) also contribute to vehicle insulation, by mechanically carrying away the heat absorbed in those materials. Trade names Bakelite was originally made from phenolic resin and wood flour. Ebonol is a paper-filled phenolic resin designed as a replacement for ebony wood in stringed and woodwind instruments. Novotext is cotton fibre-reinforced phenolic, using randomly oriented fibres. Tufnol is a laminated plastic available as sheets and rods, which is made from layers of paper or cloth which have been soaked with phenolic resin and pressed under heat. Its high resistance to oils and solvents has made it suitable for many engineering applications. Oasis Floral Foam is "an open-celled phenolic foam that readily absorbs water and is used as a base for flower arrangements." Paxolin is a resin bonded paper product long used as a base material for printed circuit boards, although it is being replaced by fiberglass composites in many applications. Richlite is a paper-filled phenolic resin with many uses, from tabletops and cutting-boards to guitar fingerboards. Biodegradation Phenol-formaldehyde is degraded by the white rot fungus Phanerochaete chrysosporium.
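As a small postscript to the "Crosslinking and the formaldehyde/phenol ratio" discussion above, the classification of a batch as a novolac (F:P < 1) or a resole (F:P > 1) can be computed directly from the monomer charges. The sketch below uses standard molar masses; the charge weights are made-up example values, not a recipe from the source.

# Classify a phenol-formaldehyde batch by its formaldehyde:phenol molar ratio.
M_PHENOL = 94.11        # g/mol, phenol (C6H5OH)
M_FORMALDEHYDE = 30.03  # g/mol, formaldehyde (CH2O)

def fp_ratio(formaldehyde_g, phenol_g):
    return (formaldehyde_g / M_FORMALDEHYDE) / (phenol_g / M_PHENOL)

# Made-up example charges, in grams of each monomer.
for ch2o_g, phenol_g in [(250.0, 1000.0), (480.0, 1000.0)]:
    ratio = fp_ratio(ch2o_g, phenol_g)
    kind = "novolac (F:P < 1, needs a hardener)" if ratio < 1 else "resole (F:P > 1, self-curing)"
    print(f"F:P = {ratio:.2f} -> {kind}")

With these example charges the first batch comes out near F:P = 0.78 (a novolac) and the second near 1.50, matching the "usually around 1.5" figure quoted for resoles.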
Physical sciences
Polymers
Chemistry
400342
https://en.wikipedia.org/wiki/Asiatic%20salamander
Asiatic salamander
The Asiatic salamanders (family Hynobiidae) are primitive salamanders found all over Asia, and in European Russia. They are closely related to the giant salamanders (family Cryptobranchidae), with which they form the suborder Cryptobranchoidea. About half of hynobiids currently described are endemic to Japan, but their range also covers parts of China, Russia, Afghanistan and Iran. Hynobiid salamanders practice external fertilization, or spawning. And, unlike other salamander families which reproduce internally, male hynobiids focus on egg sacs rather than females during breeding. The female lays two egg sacs at a time, each containing up to 70 eggs. Parental care is common. A few species have very reduced lungs, or no lungs at all. Larvae can sometimes have reduced external gills if they live in cold and very oxygen-rich water. Fossils of hynobiids are known from the Miocene to the present in Asia and Eastern Europe, though fossils of Cryptobranchoids more closely related to hynobiids than to giant salamanders extend back to the Middle Jurassic. Phylogeny Cladograms based on the work of Pyron and Wiens (2011) and modified using Mikko Haaramo Classification Currently, 81 species are known. These genera make up the Hynobiidae: Subfamily Hynobiinae Genus Afghanodon Afghanodon mustersi (Smith, 1940) Genus Batrachuperus (Chinese stream salamanders) Batrachuperus karlschmidti Liu, 1950 Batrachuperus londongensis Liu and Tian, 1978 Batrachuperus pinchonii (David, 1872) Batrachuperus tibetanus Schmidt, 1925 Batrachuperus yenyuanensis Liu, 1950 Genus Hynobius - (Asian salamanders) Hynobius abei Sato, 1934 Hynobius abuensis Matsui, Okawa, Nishikawa, and Tominaga, 2019 Hynobius akiensis Matsui, Okawa, and Nishikawa, 2019 Hynobius amakusaensis Nishikawa and Matsui, 2014 Hynobius amjiensis Gu, 1992 Hynobius arisanensis Maki, 1922 Hynobius bakan Matsui, Okawa, and Nishikawa, 2019 Hynobius boulengeri (Thompson, 1912) Hynobius chinensis Günther, 1889 Hynobius dunni Tago, 1931 Hynobius formosanus Maki, 1922 Hynobius fossigenus Okamiya, Sugawara, Nagano, and Poyarkov, 2018 Hynobius fucus Lai and Lue, 2008 Hynobius glacialis Lai and Lue, 2008 Hynobius guabangshanensis Shen, 2004 Hynobius guttatus Tominaga, Matsui, Tanabe, and Nishikawa, 2019 Hynobius hidamontanus Matsui, 1987 Hynobius hirosei Lantz, 1931 Hynobius ikioi Matsui, Nishikawa, and Tominaga, 2017 Hynobius iwami Matsui, Okawa, Nishikawa, and Tominaga, 2019 Hynobius katoi Matsui, Kokuryo, Misawa, and Nishikawa, 2004 Hynobius kimurae Dunn, 1923 Hynobius kuishiensis Tominaga, Matsui, Tanabe, and Nishikawa, 2019 Hynobius leechii Boulenger, 1887 Hynobius lichenatus Boulenger, 1883 Hynobius maoershanensis Zhou, Jiang, and Jiang, 2006 Hynobius mikawaensis Matsui, Misawa, Nishikawa, and Shimada, 2017 Hynobius naevius (Temminck and Schlegel, 1838) Hynobius nebulosus (Temminck and Schlegel, 1838) Hynobius nigrescens Stejneger, 1907 Hynobius okiensis Sato, 1940 Hynobius osumiensis Nishikawa and Matsui, 2014 Hynobius oyamai Tominaga, Matsui, and Nishikawa, 2019 Hynobius quelpaertensis Mori, 1928 Hynobius retardatus Dunn, 1923 Hynobius sematonotos Tominaga, Matsui, and Nishikawa, 2019 Hynobius setoi Matsui, Tanabe, and Misawa, 2019 Hynobius setouchi Matsui, Okawa, Tanabe, and Misawa, 2019 Hynobius shinichisatoi Nishikawa and Matsui, 2014 Hynobius sonani (Maki, 1922) Hynobius stejnegeri Dunn, 1923 Hynobius takedai Matsui and Miyazaki, 1984 Hynobius tokyoensis Tago, 1931 Hynobius tosashimizuensis Sugawara, Watabe, Yoshikawa, and Nagano, 2018 Hynobius tsuensis 
Abé, 1922 Hynobius tsurugiensis Tominaga, Matsui, Tanabe, and Nishikawa, 2019 Hynobius turkestanicus Nikolskii, 1910 Hynobius unisacculus Min, Baek, Song, Chang, and Poyarkov, 2016 Hynobius utsunomiyaorum Matsui and Okawa, 2019 Hynobius vandenburghi Dunn, 1923 Hynobius yangi Kim, Min, and Matsui, 2003 Hynobius yiwuensis Cai, 1985 Genus Liua (Wushan salamanders) Liua shihi (Liu, 1950) Liua tsinpaensis (Liu and Hu, 1966) Genus Pachyhynobius (stout salamanders) Pachyhynobius shangchengensis Fei, Qu, and Wu, 1983 Genus Paradactylodon (Middle Eastern stream salamanders) Paradactylodon persicus (Eiselt and Steiner, 1970) Genus Pseudohynobius Pseudohynobius flavomaculatus (Hu and Fei, 1978) Pseudohynobius guizhouensis Li, Tian, and Gu, 2010 Pseudohynobius jinfo Wei, Xiong, and Zeng, 2009 Pseudohynobius kuankuoshuiensis Xu and Zeng, 2007 Pseudohynobius puxiongensis (Fei and Ye, 2000) Pseudohynobius shuichengensis Tian, Gu, Li, Sun, and Li, 1998 Genus Ranodon (Semirichensk salamanders) Ranodon sibiricus Kessler, 1866 Genus Salamandrella (Siberian salamanders) Salamandrella keyserlingii Dybowski, 1870 Salamandrella tridactyla Nikolskii, 1905 Subfamily Onychodactylinae Genus Onychodactylus (clawed salamanders) Onychodactylus fischeri (Boulenger, 1886) Onychodactylus fuscus Yoshikawa and Matsui, 2014 Onychodactylus intermedius Nishikawa and Matsui, 2014 Onychodactylus japonicus (Houttuyn, 1782) Onychodactylus kinneburi Yoshikawa, Matsui, Tanabe, and Okayama, 2013 Onychodactylus koreanus Min, Poyarkov, and Vieites, 2012 Onychodactylus nipponoborealis Kuro-o, Poyarkov, and Vieites, 2012 Onychodactylus tsukubaensis Yoshikawa and Matsui, 2013 Onychodactylus zhangyapingi Che, Poyarkov, and Yan, 2012 Onychodactylus zhaoermii Che, Poyarkov, and Yan, 2012 Onychodactylus sillanus Min, Borzée, and Poyarkov, 2022 Onychodactylus pyrrhonotus Yoshikawa et Matsui, 2022
Biology and health sciences
Salamanders and newts
Animals
400378
https://en.wikipedia.org/wiki/Bat-eared%20fox
Bat-eared fox
The bat-eared fox (Otocyon megalotis) is a species of fox found on the African savanna. It is the only extant species of the genus Otocyon and an ancient (basal) canid species. Fossil records indicate this canid first appeared during the middle Pleistocene. There are two separate populations of the bat-eared fox, each of which makes up a subspecies. The bat referred to in its colloquial name is possibly the Egyptian slit-faced bat (Nycteris thebaica), which is abundant in the region and has very large ears. Other vernacular names include big-eared fox, black-eared fox, long-eared fox, Delalande's fox, cape fox, and motlosi. It is named for its large ears, which have a role in thermoregulation. It is a small canid, being of comparable size to the closely related cape fox and common raccoon dog. Its fur varies in color depending on the subspecies, but is generally tan-colored and has guard hairs of a grey agouti color. The bat-eared fox is found in Southern and East Africa, though the two subspecies are separated by an unpopulated region spanning approximately . In its range, the bat-eared fox digs dens for shelter and to raise its young, and lives in social groups or pairs that hunt and groom together. The bat-eared fox eats mainly insects—a diet unique among canids. It forages in arid and semi-arid environments, preferring regions with bare ground and where ungulates keep grasses short, and locates prey by using its hearing, walking slowly with its nose to the ground and ears tilted forwards. Most of its diet is made up of harvester termites, which also hydrates the bat-eared fox, as it does not drink from free-standing water. By feeding on harvester termites, it acts as a means of population control for these insects, which are considered pests in regions populated by humans. In such regions, it has been hunted for its fur. No major threats to the bat-eared fox exist, and as such it is considered to be a least-concern species. Etymology The bat-eared fox's generic name Otocyon is derived from the Greek words otus () for ear and cyon () for dog, while the specific name megalotis comes from the Greek words megas () for large and otus () for ear. The common name for the bat-eared fox is likely taken from the Egyptian slit-faced bat (Nycteris thebaica), due to the bat's similarly large ears and abundance in the bat-eared fox's geographic range. Other vernacular names for the bat-eared fox include big-eared fox, black-eared fox, long-eared fox, Delalande's fox, cape fox, and motlosi. Taxonomy The bat-eared fox is the only living species of the genus Otocyon. Its scientific name, given by Anselme Gaëtan Desmarest, was initially Canis megalotis (due to its close resemblance to jackals), and later changed by Salomon Müller which placed it in its own genus, Otocyon; its large ears and different dental formula warrant inclusion in a genus distinct from both Canis and true foxes (Vulpes). Due to its different dentition, the bat-eared fox was previously placed in a distinct subfamily of canids, Otocyoninae, as no relationship to any living species of canid could be established. However, this species is regarded as having affinities with the vulpine line, and Otocyon was placed with high confidence as sister to the clade containing both the raccoon dog (Nyctereutes) and true foxes (Vulpes), occupying a basal (closest to the base) position within Canidae. 
The following cladogram is based on figures by Lindblad-Toh et al., 2005: Subspecies Currently, there are two recognized subspecies: Fossils Otocyon is poorly represented in the fossil record. It is suggested the genus forms a clade with Prototocyon, an extinct genus of canid. In the Olduvai Gorge, Tanzania, fossils of the related extinct fox species first considered Otocyon recki have been found that date back to the late Pliocene or early Pleistocene. O. recki is now often placed in Prototocyon; fossil records specifically of Otocyon megalotis have been identified in sediments only as old as the middle Pleistocene. Characteristics Bat-eared foxes range in weight from . Their head and body length is , tail length is , shoulder height is , and the notably large ears are long. Generally, the pelage is tan-colored, with gray guard hairs of an agouti coloration. The undersides and throat are pale. The limbs are dark, shading to dark brown or black at their extremities. The muzzle, the tip and upperside of the tail and the facial mask are black. The insides of the ears are white. Individuals of the East African subspecies, O. m. virgatus, tend toward a buff pelage with dark brown markings, as opposed to the black of O. m. megalotis. The proportionally large ears of bat-eared foxes, a characteristic shared by many other inhabitants of hot, arid climates, help to distribute heat. They also help in locating prey. Dentition and jaw adaptations The teeth of the bat-eared fox are much smaller and reduced in shearing surface formation than teeth of other canid species, excepting the bush dog (Spetothos venaticus) and dhole (Cuon alpinus). This is an adaptation to its insectivorous diet. The teeth are not the bat-eared fox's only morphological adaptation for its diet. On the lower jaw, a step-like protrusion is present called the subangular process, which is present in only a few canid species and both increases the bite force of the masseter muscle and anchors the large digastric muscle to allow for rapid chewing. The digastric muscle is also modified to allow for opening and closing the jaw five times per second. Distribution and habitat The bat-eared fox has a disjunct distribution across the arid and semi-arid regions of Eastern and Southern Africa, in two allopatric populations (representing each of the recognized subspecies) separated by approximately . Subspecies O. m. virgatus extends from southern Sudan, Ethiopia and Somalia, through Uganda and Kenya to southwestern Tanzania; O. m. megalotis occurs in the southern part of Africa, ranging from Angola through Namibia and Botswana to South Africa, and extends as far east as Mozambique and Zimbabwe, spreading into the Cape Peninsula and toward Cape Agulhas. Home ranges vary in size from . The two disjunct ranges of O. megalotis were likely connected to each other during the Pleistocene epoch. Bat-eared foxes are adapted to arid or semi-arid environments. They are commonly found in short grasslands, as well as the more arid regions of the savannas, along woodland edges, and in open acacia woodlands. They prefer bare ground and areas where grass is kept short by grazing ungulates and tend to hunt in these short grass and low shrub habitats. However, they do venture into areas with tall grasses and thick shrubs to hide when threatened. In addition to raising their young in dens, bat-eared foxes use self-dug dens for shelter from extreme temperatures and winds. They also lie under acacia trees in South Africa to seek shade during the day. 
Behavior and ecology Bat-eared foxes are highly social animals. They often live in pairs or groups, and home ranges of groups either overlap substantially or very little. In southern Africa, bat-eared foxes live in monogamous pairs with pups, while those in eastern Africa may live in pairs, or in stable family groups consisting of a male and up to three closely related females with pups. Individuals forage, play, and rest together in a group, which helps in protection against predators. They engage in frequent and extended allogrooming sessions, which serve to strengthen group cohesion, mostly between mature adults, but also between young adults and mature adults. Visual displays are very important in communication among bat-eared foxes. When they are looking intently at something, the head is held high, eyes are open, ears are erect and facing forward, and the mouth is closed. When an individual is in threat or showing submission, the ears are pulled back and lying against the head and the head is low. The tail also plays a role in communication. When an individual is asserting dominance or aggression, feeling threatened, playing, or being sexually aroused, the tail is arched in an inverted U shape. Individuals can also use piloerection, which occurs when individual hairs are standing straight, to make it appear larger when faced with extreme threat. When running, chasing, or fleeing, the tail is straight and horizontal. The bat-eared fox can recognize individuals up to away. The recognition process has three steps: First they ignore the individual, then they stare intently, and finally they either approach or attack without displays. When greeting another, the approaching individual shows symbolic submission which is received by the other individual with a high head and tail down. Few vocalizations are used for communication, but contact calls and warning calls are used, mostly during the winter. Glandular secretions and scratching, other than for digging, are absent in communication, although they appear to establish pair bonds by scent marking. In the more northern areas of its range (around Serengeti), they are nocturnal 85% of the time. However, around South Africa, they are nocturnal only in the summer and diurnal during the winter. Hunting and diet The bat-eared fox is the only truly insectivorous canid, with a marked preference for harvester termites (Hodotermes mossambicus), which can constitute 80–90% of its diet. When this particular species of termite is not available, their opportunistic diet allows a wide variety of food items to be taken: they can consume other species of termites, other arthropods such as ants, beetles (especially scarab beetles), crickets, grasshoppers, millipedes, moths, scorpions, spiders, and rarely birds, birds' eggs and chicks, small mammals, reptiles, and fungi (the desert truffle Kalaharituber pfeilii). Berries, seeds, and wild fruit also are consumed. The bat-eared fox refuses to feed on snouted harvester termites, likely because it is not adapted to tolerate the termites' chemical defense. Bat-eared foxes require water for lactation, but have not been observed drinking from free-standing water. They meet their water requirements through the high water content of their diet. Bat-eared foxes usually hunt in groups, often splitting up in pairs, with separated subgroups moving through the same general area. When termites are plentiful, feeding aggregations of up to 15 individuals from different families occur. 
Individuals forage alone after family groups break up in June or July, and during the months after the pups' birth. Prey is located primarily by auditory means, rather than by smell or sight. Foraging patterns vary between seasons and populations, and coincide with termite availability. In eastern Africa, nocturnal foraging is the rule, while in southern Africa, nocturnal foraging during summer slowly changes to an almost solely diurnal pattern during the winter. Foraging techniques depend on prey type, but food is often located by walking slowly, nose close to the ground and ears tilted forward. Foraging usually occurs in patches, which match the clumped distribution of prey resources such as termite colonies. Groups are able to forage on the same patches of clumped prey because, owing to their sociality and lack of territoriality, they do not fight each other for food. As the bat-eared fox's range overlaps with that of the aardvark, it will take advantage of termite mounds opened up by the latter animal, as will aardwolves. Reproduction and life cycle The bat-eared fox is predominantly socially monogamous, although it has been observed in polygynous groups. In contrast to other canids, the bat-eared fox has a reversal in parental roles, with the male taking on the majority of the parental care behavior. Gestation lasts for 60–70 days and females give birth to litters consisting of one to six pups. Beyond lactation, which lasts 14 to 15 weeks, males take over grooming, defending, huddling, chaperoning, and carrying the young between den sites. Additionally, male care and den attendance rates have been shown to have a direct correlation with pup survival rates. The female forages for food, which she uses to maintain milk production, on which the pups heavily depend. Food foraged by the female is neither brought back to the pups nor regurgitated to feed them. Pups in the Kalahari region are born September–November and those in the Botswana region are born October–December. Young bat-eared foxes disperse and leave their family groups at 5–6 months old and reach sexual maturity at 8–9 months. Bat-eared foxes have been recorded reaching maximum lifespans of 14 to 17 years in captivity, and up to 9 years in the wild. Threats No major threats to bat-eared fox populations exist, though hunting, disease and drought can threaten individuals and lower population numbers in the short term. Diseases that affect the bat-eared fox include canine distemper, canine parvovirus, and rabies. Conservation O. megalotis is considered to be a least-concern species by both the International Union for Conservation of Nature and the South African National Biodiversity Institute. Parts of its range incidentally fall within protected areas. Human use and captivity The bat-eared fox has some commercial value for humans. It is important for harvester termite population control, as the termites are considered pests, and it has also been hunted for its fur by Botswana natives. Captive bat-eared foxes are present in zoos in North America, South Africa, Europe, and Asia.
Biology and health sciences
Canines
Animals
400414
https://en.wikipedia.org/wiki/USB%20flash%20drive
USB flash drive
A flash drive (also thumb drive, memory stick, and pen drive/pendrive) is a data storage device that includes flash memory with an integrated USB interface. A typical USB drive is removable, rewritable, and smaller than an optical disc, and usually weighs less than . Since first offered for sale in late 2000, the storage capacities of USB drives range from 8 megabytes to 256 gigabytes (GB), 512 GB and 1 terabyte (TB). As of 2024, 4 TB flash drives were the largest currently in production. Some allow up to 100,000 write/erase cycles, depending on the exact type of memory chip used, and are thought to physically last between 10 and 100 years under normal circumstances (shelf storage time). Common uses of USB flash drives are for storage, supplementary back-ups, and transferring of computer files. Compared with floppy disks or CDs, they are smaller, faster, have significantly more capacity, and are more durable due to a lack of moving parts. Additionally, they are less vulnerable to electromagnetic interference than floppy disks, and are unharmed by surface scratches (unlike CDs). However, as with any flash storage, data loss from bit leaking due to prolonged lack of electrical power and the possibility of spontaneous controller failure due to poor manufacturing could make it unsuitable for long-term archiving of data. The ability to retain data is affected by the controller's firmware, internal data redundancy, and error correction algorithms. Until about 2005, most desktop and laptop computers were supplied with floppy disk drives in addition to USB ports, but floppy disk drives became obsolete after widespread adoption of USB ports and the larger USB drive capacity compared to the "1.44 megabyte" 3.5-inch floppy disk. USB flash drives use the USB mass storage device class standard, supported natively by modern operating systems such as Windows, Linux, and other Unix-like systems, as well as many BIOS boot ROMs. USB drives with USB 2.0 support can store more data and transfer faster than much larger optical disc drives like CD-RW or DVD-RW drives and can be read by many other systems such as the Xbox One, PlayStation 4, DVD players, automobile entertainment systems, and in a number of handheld devices such as smartphones and tablet computers, though the electronically similar SD card is better suited for those devices, due to their standardized form factor, which allows the card to be housed inside a device without protruding. A flash drive consists of a small printed circuit board carrying the circuit elements and a USB connector, insulated electrically and protected inside a plastic, metal, or rubberized case, which can be carried in a pocket or on a key chain, for example. Some are equipped with an I/O indication LED that lights up or blinks upon access. The USB connector may be protected by a removable cap or by retracting into the body of the drive, although it is not likely to be damaged if unprotected. Most flash drives use a standard type-A USB connection allowing connection with a port on a personal computer, but drives for other interfaces also exist (e.g. micro-USB and USB-C ports). USB flash drives draw power from the computer via the USB connection. Some devices combine the functionality of a portable media player with USB flash storage; they require a battery only when used to play music on the go. History The basis for USB flash drives is flash memory, a type of floating-gate semiconductor memory invented by Fujio Masuoka in the early 1980s. 
Flash memory uses floating-gate MOSFET transistors as memory cells. Multiple individuals have staked a claim to having invented the USB flash drive. On April 5, 1999, Amir Ban, Dov Moran, and Oron Ogdan of M-Systems, an Israeli company, filed a patent application entitled "Architecture for a Universal Serial Bus-Based PC Flash Disk". The patent was subsequently granted on November 14, 2000, and these individuals have often been recognized as the inventors of the USB flash drive. Also in 1999, Shimon Shmueli, an engineer at IBM, submitted an invention disclosure asserting that he had invented the USB flash drive. A Singaporean company named Trek 2000 International is the first company known to have sold a USB flash drive, and has also maintained that it is the original inventor of the device. Finally, Pua Khein-Seng, a Malaysian engineer, has also been recognized by some as a possible inventor of the device. Given these competing inventor claims, patent disputes involving the USB flash drive have arisen over the years. Both Trek 2000 International and Netac Technology have accused others of infringing their patents on the USB flash drive. Although the competing claims over who first invented the device persist, Netac Technology was granted a United States patent on December 7, 2004, and PNY later paid Netac a settlement in a related patent lawsuit. Technology improvements Flash drives are often measured by the rate at which they transfer data. Transfer rates may be given in megabytes per second (MB/s), megabits per second (Mbit/s), or in optical drive multipliers such as "180X" (180 times 150 KiB/s). File transfer rates vary considerably among devices. Second-generation flash drives were claimed to read at up to 30 MB/s and write at about half that rate, which was about 20 times faster than the theoretical transfer rate achievable by the previous standard, USB 1.1, which is limited to 12 Mbit/s (1.5 MB/s) before protocol overhead is taken into account. The effective transfer rate of a device is significantly affected by the data access pattern. By 2002, USB flash drives had USB 2.0 connectivity, which has 480 Mbit/s as the transfer rate upper bound; the effective throughput is lower once protocol overhead is accounted for. That same year, Intel sparked widespread use of second-generation USB by including USB 2.0 ports in its laptops. By 2010, the maximum available storage capacity for the devices had reached upwards of 128 GB. USB 3.0 was slow to appear in laptops. Through 2010, the majority of laptop models still contained only USB 2.0. In January 2013, tech company Kingston released a flash drive with 1 TB of storage. The first USB 3.1 type-C flash drives, with read/write speeds of around 530 MB/s, were announced in March 2015. By July 2016, flash drives with 8 to 256 GB capacity were sold more frequently than those with capacities between 512 GB and 1 TB. In 2017, Kingston Technology announced the release of a 2-TB flash drive. In 2018, SanDisk announced a 1 TB USB-C flash drive, the smallest of its kind. Technology On a USB flash drive, one end of the device is fitted with a single Standard-A USB plug; some flash drives additionally offer a micro USB or USB-C plug, facilitating data transfers between different devices. 
Inside the casing is a small printed circuit board, which has some power circuitry and a small number of surface-mounted integrated circuits (ICs). Typically, one of these ICs provides an interface between the USB connector and the onboard memory, while the other is the flash memory. Drives typically use the USB mass storage device class to communicate with the host. Flash memory Flash memory combines a number of older technologies, with lower cost, lower power consumption and small size made possible by advances in semiconductor device fabrication technology. The memory storage is based on earlier EPROM and EEPROM technologies. These had limited capacity, were slow for both reading and writing, required complex high-voltage drive circuitry, and could be re-written only after erasing the entire contents of the chip. Hardware designers later developed EEPROMs with the erasure region broken up into smaller "fields" that could be erased individually without affecting the others. Altering the contents of a particular memory location involved copying the entire field into an off-chip buffer memory, erasing the field, modifying the data as required in the buffer, and re-writing it into the same field. This required considerable computer support, and PC-based EEPROM flash memory systems often carried their own dedicated microprocessor system. Flash drives are more or less a miniaturized version of this. The development of high-speed serial data interfaces such as USB made semiconductor memory systems with serially accessed storage viable, and the simultaneous development of small, high-speed, low-power microprocessor systems allowed this to be incorporated into extremely compact systems. Serial access requires far fewer electrical connections for the memory chips than parallel access, simplifying the manufacture of multi-gigabyte drives. Computers access flash memory systems very much like hard disk drives, where the controller system has full control over where information is actually stored. The actual EEPROM writing and erasure processes are, however, still very similar to the earlier systems described above. Many low-cost MP3 players simply add extra software and a battery to a standard flash memory control microprocessor so it can also serve as a music playback decoder. Most of these players can also be used as a conventional flash drive, for storing files of any type. Essential components There are typically five parts to a flash drive: USB plug provides a physical interface to the host computer. Some USB flash drives use USB plug that does not protect the contacts, with the possibility of plugging it into the USB port in the wrong orientation, if the connector type is not symmetrical. USB mass storage controller a small microcontroller with a small amount of on-chip ROM and RAM. NAND flash memory chip(s) stores data (NAND flash is typically also used in digital cameras). Crystal oscillator produces the device's main clock signal and controls the device's data output through a phase-locked loop. Cover typically made of plastic or metal, protecting the electronics against mechanical stress and even possible short circuits. Additional components The typical device may also include: Jumpers and test pins – for testing during the flash drive's manufacturing or loading code into its microcontroller. LEDs – indicate data transfers or data reads and writes. Write-protect switches – Enable or disable writing of data into memory. Unpopulated space – provides space to include a second memory chip. 
Having this second space allows the manufacturer to use a single printed circuit board for more than one storage size device. USB connector cover or cap – reduces the risk of damage, prevents the entry of dirt or other contaminants, and improves overall device appearance. Some flash drives use retractable USB connectors instead. Others have a swivel arrangement so that the connector can be protected without removing anything. Transport aid – the cap or the body often contains a hole suitable for connection to a key chain or lanyard. Connecting the cap, rather than the body, can allow the drive itself to be lost. Some drives offer expandable storage via an internal memory card slot, much like a memory card reader. Size and style of packaging Most USB flash drives weigh less than . While some manufacturers are competing for the smallest size, with the biggest memory, offering drives only a few millimeters larger than the USB plug itself, some manufacturers differentiate their products by using elaborate housings, which are often bulky and make the drive difficult to connect to the USB port. Because the USB port connectors on a computer housing are often closely spaced, plugging a flash drive into a USB port may block an adjacent port. Such devices may carry the USB logo only if sold with a separate extension cable. Such cables are USB-compatible but do not conform to the USB standard. USB flash drives have been integrated into other commonly carried items, such as watches, pens, laser pointers, and even the Swiss Army Knife; others have been fitted with novelty cases such as toy cars or Lego bricks. USB flash drives with images of dragons, cats or aliens are very popular in Asia. The small size, robustness and cheapness of USB flash drives make them an increasingly popular peripheral for case modding. File system Most flash drives ship preformatted with the FAT32, or exFAT file systems. The ubiquity of the FAT32 file system allows the drive to be accessed on virtually any host device with USB support. Also, standard FAT maintenance utilities (e.g., ScanDisk) can be used to repair or retrieve corrupted data. However, because a flash drive appears as a USB-connected hard drive to the host system, the drive can be reformatted to any file system supported by the host operating system. Defragmenting Flash drives can be defragmented. There is a widespread opinion that defragmenting brings little advantage (as there is no mechanical head that moves from fragment to fragment), and that defragmenting shortens the life of the drive by making many unnecessary writes. However, some sources claim that defragmenting a flash drive can improve performance (mostly due to improved caching of the clustered data), and the additional wear on flash drives may not be significant. Even distribution Some file systems are designed to distribute usage over an entire memory device without concentrating usage on any part (e.g., for a directory) to prolong the life of simple flash memory devices. Some USB flash drives have this 'wear leveling' feature built into the software controller to prolong device life, while others do not, so it is not necessarily helpful to install one of these file systems. Hard disk drive Sectors are 512 bytes long, for compatibility with hard disk drives, and the first sector can contain a master boot record and a partition table. Therefore, USB flash units can be partitioned just like hard disk drives. 
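The master boot record layout just described is simple enough to decode by hand. The following Python sketch is illustrative only and is not part of the original article; the function name and device path are hypothetical. It reads the first 512-byte sector of a drive (or a disk image of one) and lists the four MBR partition entries, assuming the standard layout of a 64-byte partition table at offset 446 and the 0x55AA boot signature at offset 510.

```python
# Illustrative sketch: decode the classic MBR partition table found in the
# first 512-byte sector of a USB flash drive (or a disk image of one).
import struct

SECTOR_SIZE = 512      # sectors are 512 bytes, for hard-disk compatibility
TABLE_OFFSET = 446     # the four 16-byte partition entries start here
ENTRY_SIZE = 16

def read_mbr_partitions(device_path):
    """Return a list of non-empty partition entries from the first sector."""
    with open(device_path, "rb") as dev:
        sector = dev.read(SECTOR_SIZE)
    if len(sector) < SECTOR_SIZE or sector[510:512] != b"\x55\xaa":
        raise ValueError("no valid MBR boot signature found")
    partitions = []
    for i in range(4):
        entry = sector[TABLE_OFFSET + i * ENTRY_SIZE:
                       TABLE_OFFSET + (i + 1) * ENTRY_SIZE]
        boot_flag, ptype, lba_start, num_sectors = struct.unpack(
            "<B3xB3xII", entry)          # skip the legacy CHS address fields
        if ptype != 0:                   # partition type 0x00 marks an unused slot
            partitions.append({
                "bootable": boot_flag == 0x80,
                "type": hex(ptype),      # e.g. 0x0c is commonly FAT32 (LBA)
                "start_lba": lba_start,
                "size_mib": num_sectors * SECTOR_SIZE / 2**20,
            })
    return partitions

# Hypothetical usage: print(read_mbr_partitions("usb_drive.img"))
```

Against a typical pre-partitioned flash drive, such a listing usually shows a single FAT32 or exFAT partition spanning almost the whole device; drives formatted without a partition table will simply fail the signature check.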
Longevity The memory in flash drives was commonly engineered with multi-level cell (MLC) based memory that is good for around 3,000–5,000 program-erase cycles. Nowadays triple-level cell (TLC) memory is also often used, which has up to 500 write cycles per physical sector, while some high-end flash drives have single-level cell (SLC) based memory that is good for around 30,000 writes. There is virtually no limit to the number of reads from such flash memory, so a well-worn USB drive may be write-protected to help ensure the life of individual cells. Estimation of flash memory endurance is a challenging subject that depends on the SLC/MLC/TLC memory type, size of the flash memory chips, and actual usage pattern. As a result, a USB flash drive can last from a few days to several hundred years. Regardless of the endurance of the memory itself, the USB connector hardware is specified to withstand only around 1,500 insert-removal cycles. Counterfeit products Counterfeit USB flash drives are sometimes sold with claims of having higher capacities than they actually possess. These are typically low-capacity USB drives with modified flash memory controller firmware that emulates larger capacity drives (for example, a 2 GB drive being marketed as a 64 GB drive). When plugged into a computer, they report being the larger capacity they were sold as, but when data is written to them, either the write fails, the drive freezes up, or it overwrites existing data. Software tools exist to check and detect fake USB drives, and in some cases it is possible to repair these devices to remove the false capacity information and use their real storage capacity. File transfer speeds Transfer speeds are technically determined by the slowest of three factors: the USB version used, the speed at which the USB controller device can read and write data onto the flash memory, and the speed of the hardware bus, especially in the case of add-on USB ports. USB flash drives usually specify their read and write speeds in megabytes per second (MB/s); read speed is usually faster. These speeds are for optimal conditions; real-world speeds are usually slower. In particular, circumstances that often lead to speeds much lower than advertised are transfer (particularly writing) of many small files rather than a few very large ones, and mixed reading and writing to the same device. In a typical well-conducted review of a number of high-performance USB 3.0 drives, a drive that could read large files at 68 MB/s and write them at 46 MB/s could only manage 14 MB/s and 0.3 MB/s, respectively, with many small files. Another drive, which could read at 92 MB/s and write at 70 MB/s, managed only 8 MB/s when streaming reads and writes were combined. These differences vary radically from one drive to another; some could write small files 10% faster than large ones. The examples given are chosen to illustrate extremes. Uses Personal data transport The most common use of flash drives is to transport and store personal files, such as documents, pictures and videos. Individuals also store medical information on flash drives for emergencies and disaster preparation. Secure storage of data, application and software files With wide deployment of flash drives in various environments (secured or otherwise), data and information security remain critical issues. Biometrics and encryption are becoming the norm as data security needs increase; on-the-fly encryption systems are particularly useful in this regard, as they can transparently encrypt large amounts of data. 
In some cases, a secure USB drive may use a hardware-based encryption mechanism that uses a hardware module instead of software for strongly encrypting data. IEEE 1667 is an attempt to create a generic authentication platform for USB drives. It is supported in Windows 7 and Windows Vista (Service Pack 2 with a hotfix). Computer forensics and law enforcement A recent development for the use of a USB Flash Drive as an application carrier is to carry the Computer Online Forensic Evidence Extractor (COFEE) application developed by Microsoft. COFEE is a set of applications designed to search for and extract digital evidence on computers confiscated from suspects. Forensic software is required not to alter in any way the information stored on the computer being examined. Other forensic suites run from CD-ROM or DVD-ROM, but cannot store data on the media they are run from (although they can write to other attached devices, such as external drives or memory sticks). Updating motherboard firmware Motherboard firmware (including BIOS and UEFI) can be updated using USB flash drives. Usually, new firmware is downloaded and placed onto a FAT16- or FAT32-formatted USB flash drive connected to a system which is to be updated, and the path to the new firmware image is selected within the update component of system's firmware. Some motherboard manufacturers also allow such updates without the need to enter the system's firmware update component, making it possible to easily recover systems with corrupted firmware. In addition, HP has introduced a USB floppy drive key, an ordinary USB flash drive with the capacity to emulate floppy drives, allowing it to be used for updating system firmware where direct use of USB flash drives is not supported. The desired mode of operation, regular USB mass storage device or floppy drive emulation, is selected via sliding a switch on the device's housing. Booting operating systems Most current PC firmware permits booting from a USB drive, allowing the launch of an operating system from a bootable flash drive. Such a configuration is known as a Live USB. Original flash memory designs had very limited estimated lifetimes. The failure mechanism for flash memory cells is analogous to a metal fatigue mode; the device fails by refusing to write new data to specific cells that have been subject to many read-write cycles over the device's lifetime. Premature failure of a "live USB" could be circumvented by using a flash drive with a write-lock switch as a WORM device, identical to a live CD. Originally, this potential failure mode limited the use of "live USB" system to special-purpose applications or temporary tasks, such as: Loading a minimal, hardened kernel for embedded applications (e.g., network router, firewall). Bootstrapping an operating system install or disk cloning operation, often across a network. Maintenance tasks, such as virus scanning or low-level data repair, without the primary host operating system loaded. , newer flash memory designs have much higher estimated lifetimes. Several manufacturers are now offering warranties of 5 years or more. Such warranties should make the device more attractive for more applications. By reducing the probability of the device's premature failure, flash memory devices can now be considered for use where a magnetic disk would normally have been required. Flash drives have also experienced an exponential growth in their storage capacity over time (following the Moore's Law growth curve). 
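As a rough back-of-the-envelope illustration of that exponential growth, using only capacities already quoted in this article (8 MB drives when first sold in late 2000, and a 1 TB drive announced in January 2013) rather than any additional data, the number of capacity doublings over the period can be computed directly:

```python
# Back-of-the-envelope sketch: capacity doublings between the first 8 MB
# drives (late 2000) and the 1 TB drive announced in January 2013.
import math

first_capacity = 8 * 2**20      # 8 MB, circa 2000
later_capacity = 1 * 2**40      # 1 TB, 2013
years = 2013 - 2000

doublings = math.log2(later_capacity / first_capacity)
print(f"{doublings:.0f} doublings in {years} years, "
      f"i.e. one roughly every {12 * years / doublings:.0f} months")
# -> 17 doublings in 13 years, i.e. one roughly every 9 months
```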
As of 2013, single-packaged devices with capacities of 1 TB are readily available, and devices with 16 GB capacity are very economical. Storage capacities in this range have traditionally been considered to offer adequate space, because they allow enough space for both the operating system software and some free space for the user's data. Operating system installation media Installers of some operating systems can be stored to a flash drive instead of a CD or DVD, including various Linux distributions, Windows 7 and newer versions, and macOS. In particular, Mac OS X 10.7 is distributed only online, through the Mac App Store, or on flash drives; for a MacBook Air with Boot Camp and no external optical drive, a flash drive can be used to run installation of Windows or Linux from USB, a process that can be automated via the use of tools like the Universal USB Installer or Rufus. However, for installation of Windows 7 and later versions, using a USB flash drive that is detected by the PC's firmware with hard disk drive emulation is recommended in order to boot from it. Transcend is the only manufacturer of USB flash drives containing such a feature. Furthermore, for installation of Windows XP, using a USB flash drive with a storage limit of at most 2 GB is recommended in order to boot from it. Windows ReadyBoost In Windows Vista and later versions, the ReadyBoost feature allows flash drives (up to 4 GB in the case of Windows Vista) to augment operating system memory. Application carriers Flash drives are used to carry applications that run on the host computer without requiring installation. While any standalone application can in principle be used this way, many programs store data, configuration information, etc. on the hard drive and registry of the host computer. The U3 company works with drive makers (parent company SanDisk as well as others) to deliver custom versions of applications designed for Microsoft Windows from a special flash drive; U3-compatible devices are designed to autoload a menu when plugged into a computer running Windows. Applications must be modified for the U3 platform not to leave any data on the host machine. U3 also provides a software framework for independent software vendors interested in their platform. Ceedo is an alternative product that does not require Windows applications to be modified in order for them to be carried and run on the drive. Similarly, other application virtualization solutions and portable application creators, such as VMware ThinApp (for Windows) or RUNZ (for Linux), can be used to run software from a flash drive without installation. In October 2010, Apple Inc. released their newest iteration of the MacBook Air, which had the system's restore files contained on a USB flash drive rather than the traditional install CDs, because the Air did not include an optical drive. A wide range of portable applications, which are all free of charge and able to run off a computer running Windows without storing anything on the host computer's drives or registry, can be found in the list of portable software. Backup Some value-added resellers are now using a flash drive as part of small-business turnkey solutions (e.g., point-of-sale systems). The drive is used as a backup medium: at the close of business each night, the drive is inserted, and a database backup is saved to the drive. Alternatively, the drive can be left inserted through the business day, and data regularly updated. In either case, the drive is removed at night and taken offsite. 
This is simple for the end-user, and more likely to be done. The drive is small and convenient, and more likely to be carried off-site for safety. The drives are less fragile mechanically and magnetically than tapes. The capacity is often large enough for several backup images of critical data. Flash drives are cheaper than many other backup systems. Flash drives also have disadvantages. They are easy to lose and facilitate unauthorized backups. A lesser setback for flash drives is that they have only one tenth the capacity of hard drives manufactured around their time of distribution. Password Reset Disk Password Reset Disk is a feature of the Windows operating system. If a user sets up a Password Reset Disk, it can be used to reset the password on the computer it was set up on. Audio players Many companies make small solid-state digital audio players, essentially producing flash drives with sound output and a simple user interface. Examples include the Creative MuVo, Philips GoGear and the first generation iPod shuffle. Some of these players are true USB flash drives as well as music players; others do not support general-purpose data storage. Other applications requiring storage, such as digital voice or sound recording, can also be combined with flash drive functionality. Many of the smallest players are powered by a permanently fitted rechargeable battery, charged from the USB interface. Fancier devices that function as a digital audio player have a USB host port (type A female typically). Media storage and marketing Digital audio files can be transported from one computer to another like any other file, and played on a compatible media player (with caveats for DRM-locked files). In addition, many home Hi-Fi and car stereo head units are now equipped with a USB port. This allows a USB flash drive containing media files in a variety of formats to be played directly on devices which support the format. Some LCD monitors for consumer HDTV viewing have a dedicated USB port through which music and video files can also be played without use of a personal computer. Artists have sold or given away USB flash drives, with the first instance believed to be in 2004 when the German punk band Wizo released the Stick EP, only as a USB drive. In addition to five high-bitrate MP3s, it also included a video, pictures, lyrics, and guitar tablature. Subsequently, artists including Nine Inch Nails and Kylie Minogue have released music and promotional material on USB flash drives. The first USB album to be released in the UK was Kiss Does... Rave, a compilation album released by the Kiss Network in April 2007. Brand and product promotion The availability of inexpensive flash drives has enabled them to be used for promotional and marketing purposes, particularly within technical and computer-industry circles (e.g., technology trade shows). They may be given away for free, sold at less than wholesale price, or included as a bonus with another purchased product. Usually, such drives will be custom-stamped with a company's logo, as a form of advertising. The drive may be blank, or preloaded with graphics, documentation, web links, Flash animation or other multimedia, and free or demonstration software. Some preloaded drives are read-only, while others are configured with both read-only and user-writable segments. Such dual-partition drives are more expensive. 
Flash drives can be set up to automatically launch stored presentations, websites, articles, and any other software immediately on insertion of the drive using the Microsoft Windows AutoRun feature. Autorunning software this way does not work on all computers, and it is normally disabled by security-conscious users. Arcades In the arcade game In the Groove and more commonly In The Groove 2, flash drives are used to transfer high scores, screenshots, dance edits, and combos throughout sessions. As of software revision 21 (R21), players can also store custom songs and play them on any machine on which this feature is enabled. While use of flash drives is common, the drive must be Linux compatible. In the arcade games Pump it Up NX2 and Pump it Up NXA, a specially produced flash drive is used as a "save file" for unlocked songs, as well as for progressing in the WorldMax and Brain Shower sections of the game. In the arcade game Dance Dance Revolution X, an exclusive USB flash drive was made by Konami for the purpose of the link feature from its Sony PlayStation 2 counterpart. However, any USB flash drive can be used in this arcade game. Conveniences Flash drives use little power, have no fragile moving parts, and for most capacities are small and light. Data stored on flash drives is impervious to mechanical shock, magnetic fields, scratches and dust. These properties make them suitable for transporting data from place to place and keeping the data readily at hand. Flash drives also store data densely compared to many removable media. In mid-2009, 256 GB drives became available, with the ability to hold many times more data than a DVD (54 DVDs) or even a Blu-ray (10 BDs). Flash drives implement the USB mass storage device class so that most modern operating systems can read and write to them without installing device drivers. The flash drives present a simple block-structured logical unit to the host operating system, hiding the individual complex implementation details of the various underlying flash memory devices. The operating system can use any file system or block addressing scheme. Some computers can boot up from flash drives. Specially manufactured flash drives are available that have a tough rubber or metal casing designed to be waterproof and virtually "unbreakable". These flash drives retain their memory after being submerged in water, and even through a machine wash. Leaving such a flash drive out to dry completely before allowing current to run through it has been known to result in a working drive with no future problems. Channel Five's Gadget Show cooked one of these flash drives with propane, froze it with dry ice, submerged it in various acidic liquids, ran over it with a jeep and fired it against a wall with a mortar. A company specializing in recovering lost data from computer drives managed to recover all the data on the drive. All data on the other removable storage devices tested, using optical or magnetic technologies, were destroyed. Comparison with other portable storage Tape The applications of current data tape cartridges hardly overlap those of flash drives: on tape, cost per gigabyte is very low for large volumes, but the individual drives and media are expensive. Media have a very high capacity and very fast transfer speeds, but store data sequentially and are very slow for random access of data. 
While disk-based backup is now the primary medium of choice for most companies, tape backup is still popular for taking data off-site for worst-case scenarios and for very large volumes (more than a few hundred terabytes). See LTO tapes. Floppy disk Floppy disk drives are rarely fitted to modern computers and are obsolete for normal purposes, although internal and external drives can be fitted if required. Floppy disks may be the method of choice for transferring data to and from very old computers without USB support, or for booting such computers from floppy disks, and so they are sometimes used to change the firmware on, for example, BIOS chips. Devices with removable storage, such as older Yamaha music keyboards, also depend on floppy disks, which require a computer to process them. Newer devices are built with USB flash drive support. Floppy disk hardware emulators exist that use the internal connections and physical attributes of a floppy disk drive so that a USB flash drive can emulate the storage space of a floppy disk in solid-state form; the flash drive can be divided into a number of individual virtual floppy disk images using individual data channels. Optical media The various writable and re-writable forms of CD and DVD are portable storage media supported by the vast majority of computers as of 2008. CD-R, DVD-R, and DVD+R can be written to only once, RW varieties up to about 1,000 erase/write cycles, while modern NAND-based flash drives often last for 500,000 or more erase/write cycles. DVD-RAM discs are the most suitable optical discs for data storage involving much rewriting. Optical storage devices are among the cheapest methods of mass data storage after the hard drive. They are slower than their flash-based counterparts. Standard 120 mm optical discs are larger than flash drives and more subject to damage. Smaller optical media do exist, such as business card CD-Rs which have the same dimensions as a credit card, and the slightly less convenient but higher capacity 80 mm recordable MiniCD and Mini DVD. The small discs are more expensive than the standard size, and do not work in all drives. Universal Disk Format (UDF) version 1.50 and above has facilities to support rewritable discs, such as sparing tables and virtual allocation tables, spreading usage over the entire surface of a disc and maximising life, but many older operating systems do not support this format. Packet-writing utilities such as DirectCD and InCD are available but produce discs that are not universally readable (although based on the UDF standard). The Mount Rainier standard addresses this shortcoming in CD-RW media by running the older file systems on top of it and performing defect management for those standards, but it requires support from both the CD/DVD burner and the operating system. Many drives made today do not support Mount Rainier, and many older operating systems such as Windows XP and below, and Linux kernels older than 2.6.2, do not support it (later versions do). Essentially CDs/DVDs are a good way to record a great deal of information cheaply and have the advantage of being readable by most standalone players, but they are poor at making ongoing small changes to a large collection of information. Flash drives' ability to do this is their major advantage over optical media. Flash memory cards Flash memory cards, e.g., Secure Digital cards, are available in various formats and capacities, and are used by many consumer devices. 
However, while virtually all PCs have USB ports, allowing the use of USB flash drives, memory card readers are not commonly supplied as standard equipment (particularly with desktop computers). Although inexpensive card readers are available that read many common formats, this results in two pieces of portable equipment (card plus reader) rather than one. Some manufacturers, aiming at a "best of both worlds" solution, have produced card readers that approach the size and form of USB flash drives (e.g., Kingston MobileLite, SanDisk MobileMate) These readers are limited to a specific subset of memory card formats (such as SD, microSD, or Memory Stick), and often completely enclose the card, offering durability and portability approaching, if not quite equal to, that of a flash drive. Although the combined cost of a mini-reader and a memory card is usually slightly higher than a USB flash drive of comparable capacity, the reader + card solution offers additional flexibility of use, and virtually "unlimited" capacity. The ubiquity of SD cards is such that, circa 2011, due to economies of scale, their price is now less than an equivalent-capacity USB flash drive, even with the added cost of a USB SD card reader. An additional advantage of memory cards is that many consumer devices (e.g., digital cameras, portable music players) cannot make use of USB flash drives (even if the device has a USB port), whereas the memory cards used by the devices can be read by PCs with a card reader. External hard disk Particularly with the advent of USB, external hard disks have become widely available and inexpensive. External hard disk drives currently cost less per gigabyte than flash drives and are available in larger capacities. Some hard drives support alternative and faster interfaces than USB 2.0 (e.g., Thunderbolt, FireWire and eSATA). For consecutive sector writes and reads (for example, from an unfragmented file), most hard drives can provide a much higher sustained data rate than current NAND flash memory, though mechanical latencies seriously impact hard drive performance. Unlike solid-state memory, hard drives are susceptible to damage by shock (e.g., a short fall) and vibration, have limitations on use at high altitude, and although shielded by their casings, are vulnerable when exposed to strong magnetic fields. In terms of overall mass, hard drives are usually larger and heavier than flash drives; however, hard disks sometimes weigh less per unit of storage. Like flash drives, hard disks also suffer from file fragmentation, which can reduce access speed. External solid-state drive Compared to external solid-state drives, USB flash drives are usually built using lower-cost and lower-performance flash memory, resulting in lower overall performance. Obsolete devices Audio tape cassettes and high-capacity floppy disks (e.g., Imation SuperDisk), and other forms of drives with removable magnetic media, such as the Iomega Zip drive and Jaz drives, are now largely obsolete and rarely used. There are products in today's market that will emulate these legacy drives for both tape and disk (SCSI1/SCSI2, SASI, Magneto optic, Ricoh ZIP, Jaz, IBM3590/ Fujitsu 3490E and Bernoulli for example) in state-of-the-art Compact Flash storage devices – CF2SCSI. Encryption and security As highly portable media, USB flash drives are easily lost or stolen. 
All USB flash drives can have their contents encrypted using third-party disk encryption software, which can often be run directly from the USB drive without installation (for example, FreeOTFE), although some, such as BitLocker, require the user to have administrative rights on every computer it is run on. Archiving software can achieve a similar result by creating encrypted ZIP or RAR files. Some manufacturers have produced USB flash drives which use hardware-based encryption as part of the design, removing the need for third-party encryption software. In limited circumstances these drives have been shown to have security problems, and are typically more expensive than software-based systems, which are available for free. A minority of flash drives support biometric fingerprinting to confirm the user's identity. As of mid-, this was an expensive alternative to standard password protection offered on many new USB flash storage devices. Most fingerprint scanning drives rely upon the host operating system to validate the fingerprint via a software driver, often restricting the drive to Microsoft Windows computers. However, there are USB drives with fingerprint scanners which use controllers that allow access to protected data without any authentication. Some manufacturers deploy physical authentication tokens in the form of a flash drive. These are used to control access to a sensitive system by containing encryption keys or, more commonly, communicating with security software on the target machine. The system is designed so the target machine will not operate except when the flash drive device is plugged into it. Some of these "PC lock" devices also function as normal flash drives when plugged into other machines. Controversies Criticisms Failures Like all flash memory devices, flash drives can sustain only a limited number of write and erase cycles before the drive fails. This should be a consideration when using a flash drive to run application software or an operating system. To address this, as well as space limitations, some developers have produced special versions of operating systems (such as Linux in Live USB) or commonplace applications (such as Mozilla Firefox) designed to run from flash drives. These are typically optimized for size and configured to place temporary or intermediate files in the computer's main RAM rather than store them temporarily on the flash drive. When used in the same manner as external rotating drives (hard drives, optical drives, or floppy drives), i.e. in ignorance of their technology, USB drives' failure is more likely to be sudden: while rotating drives can fail instantaneously, they more frequently give some indication (noises, slowness) that they are about to fail, often with enough advance warning that data can be removed before total failure. USB drives give little or no advance warning of failure. Furthermore, when internal wear-leveling is applied to prolong life of the flash drive, once failure of even part of the memory occurs it can be difficult or impossible to use the remainder of the drive, which differs from magnetic media, where bad sectors can be marked permanently not to be used. Most USB flash drives do not include a write protection mechanism. This feature, which gradually became less common, consists of a switch on the housing of the drive itself, that prevents the host computer from writing or modifying data on the drive. 
For example, write protection makes a device suitable for repairing virus-contaminated host computers without the risk of infecting a USB flash drive itself. In contrast to SD cards, write protection on USB flash drives (when available) is connected to the drive circuitry, and is handled by the drive itself instead of the host (on SD cards handling of the write-protection notch is optional). A drawback to the small physical size of flash drives is that they are easily misplaced or otherwise lost. This is a particular problem if they contain sensitive data (see data security). As a consequence, some manufacturers have added encryption hardware to their drives, although software encryption systems which can be used in conjunction with any mass storage medium will achieve the same result. Most drives can be attached to keychains or lanyards. The USB plug is usually retractable or fitted with a removable protective cap. Security threats USB killer Similar in appearance to a USB flash drive, a USB killer is a circuit which charges its capacitors to a high voltage using the power supply pins of a USB port, then discharges that voltage through the data pins. This standalone device can instantly and permanently damage or destroy any host hardware that it is connected to. "Handmade" USB drives "Handmade" USB drives, containing movies and other related content, have also been reported. Current and future developments Semiconductor corporations have worked to reduce the cost of the components in a flash drive by integrating various flash drive functions in a single chip, thereby reducing the part-count and overall package-cost. Flash drive capacities on the market increase continually. High speed has become a standard for modern flash drives. Capacities exceeding 256 GB were available on the market as early as 2009. Lexar attempted to introduce a USB FlashCard, which would be a compact USB flash drive intended to replace various kinds of flash memory cards. Pretec introduced a similar card, which also plugs into any USB port, but is just one quarter the thickness of the Lexar model. Until 2008, SanDisk manufactured a product called SD Plus, which was a SecureDigital card with a USB connector. SanDisk introduced a digital rights management technology called FlashCP that they had purchased in 2005 to control the storage and usage of copyrighted materials on flash drives, primarily for use by students.
Technology
Non-volatile memory
null
401005
https://en.wikipedia.org/wiki/Marmot
Marmot
Marmots are large ground squirrels in the genus Marmota, with 15 species living in Asia, Europe, and North America. These herbivores are active during the summer, when they can often be found in groups, but are not seen during the winter, when they hibernate underground. They are the heaviest members of the squirrel family. Description Marmots are large rodents with characteristically short but robust legs, enlarged claws which are well adapted to digging, stout bodies, and large heads and incisors to quickly process a variety of vegetation. While most species are various forms of earthen-hued brown, marmots vary in fur coloration based roughly on their surroundings. Species in more open habitat are more likely to have a paler color, while those sometimes found in well-forested regions tend to be darker. Marmots are the heaviest members of the squirrel family. Total length varies typically from about and body mass averages about in spring in the smaller species and in autumn, at times exceeding , in the larger species. The largest and smallest species are not clearly known. In North America, on the basis of mean linear dimensions and body masses through the year, the smallest species appears to be the Alaska marmot and the largest is the Olympic marmot. Some species, such as the Himalayan marmot and Tarbagan marmot in Asia, appear to attain roughly similar body masses to the Olympic marmot, but are not known to reach as high a total length as the Olympic species. In the traditional definition of hibernation, the largest marmots are considered the largest "true hibernators" (since larger "hibernators" such as bears do not have the same physiological characteristics as obligate hibernating animals such as assorted rodents, bats and insectivores). Biology Some species live in mountainous areas, such as the Alps, northern Apennines, Carpathians, Tatras, and Pyrenees in Europe; northwestern Asia; the Rocky Mountains, Black Hills, the Cascade and Pacific Ranges, and the Sierra Nevada in North America; and the Deosai Plateau in Pakistan and Ladakh in India. Other species prefer rough grassland and can be found widely across North America and the Eurasian Steppe. The slightly smaller and more social prairie dog is not classified in the genus Marmota, but in the related genus Cynomys. Marmots typically live in burrows (often within rockpiles, particularly in the case of the yellow-bellied marmot), and hibernate there through the winter. Most marmots are highly social and use loud whistles to communicate with one another, especially when alarmed. Marmots mainly eat greens and many types of grasses, berries, lichens, mosses, roots, and flowers. Subgenera and species The following is a list of all Marmota species recognized by Thorington and Hoffman plus the recently defined M. kastschenkoi. They divide marmots into two subgenera. Some extinct species of marmots are recognized from the fossil record, for example: †Marmota arizonae, Arizona, U.S. †Marmota minor, Nevada, U.S. †Marmota vetus, Nebraska, U.S. History and etymology Marmots have been known since antiquity. Research by the French ethnologist Michel Peissel claimed the story of the "Gold-digging ant" reported by the Ancient Greek historian Herodotus, who lived in the fifth century BCE, was founded on the golden Himalayan marmot of the Deosai Plateau and the habit of local tribes such as the Brokpa to collect the gold dust excavated from their burrows. 
Some historians believe that Strabo's λέων μύρμηξ and Agatharchides's μυρμηκολέων most probably refer to the marmot. An anatomically accurate image of a marmot was printed and distributed as early as 1605 by Jacopo Ligozzi, who was noted for his images of flora and fauna. The etymology of the term "marmot" is uncertain. It may have arisen from the Gallo-Romance prefix marm-, meaning to mumble or murmur (an example of onomatopoeia). Another possible origin is postclassical Latin, mus montanus, meaning "mountain mouse". Since 2010, Alaska has celebrated February 2 as "Marmot Day", a holiday intended to observe the prevalence of marmots in that state and take the place of Groundhog Day. Relationship to the Black Death Some historians and paleogeneticists have postulated that the Yersinia pestis variant that caused the Black Death pandemic, which struck Eurasia in the 14th century, originated from a variant for which marmots in China were the natural reservoir species.
Biology and health sciences
Rodents
null
401150
https://en.wikipedia.org/wiki/Photoreceptor%20cell
Photoreceptor cell
A photoreceptor cell is a specialized type of neuroepithelial cell found in the retina that is capable of visual phototransduction. The great biological importance of photoreceptors is that they convert light (visible electromagnetic radiation) into signals that can stimulate biological processes. To be more specific, photoreceptor proteins in the cell absorb photons, triggering a change in the cell's membrane potential. There are currently three known types of photoreceptor cells in mammalian eyes: rods, cones, and intrinsically photosensitive retinal ganglion cells. The two classic photoreceptor cells are rods and cones, each contributing information used by the visual system to form an image of the environment (sight). Rods primarily mediate scotopic vision (dim conditions) whereas cones primarily mediate photopic vision (bright conditions), but the process that supports phototransduction in each is similar. The intrinsically photosensitive retinal ganglion cells were discovered during the 1990s. These cells are thought not to contribute to sight directly, but have a role in the entrainment of the circadian rhythm and the pupillary reflex. Photosensitivity Each photoreceptor absorbs light according to its spectral sensitivity (absorptance), which is determined by the photoreceptor proteins expressed in that cell. Humans have three classes of cones (L, M, S) that each differ in spectral sensitivity and 'prefer' photons of different wavelengths. For example, the peak wavelength of the S-cone's spectral sensitivity is approximately 420 nm (nanometers, a measure of wavelength), so it is more likely to absorb a photon at 420 nm than at any other wavelength. Light of a longer wavelength can also produce the same response from an S-cone, but it would have to be brighter to do so. In accordance with the principle of univariance, a photoreceptor's output signal is proportional only to the number of photons absorbed. A photoreceptor cannot measure the wavelength of the light it absorbs and therefore does not detect color on its own. Rather, it is the ratios of responses of the three types of cone cells that can estimate wavelength, and therefore enable color vision. Histology Rod and cone photoreceptors are found on the outermost layer of the retina; they both have the same basic structure. Closest to the visual field (and farthest from the brain) is the axon terminal, which releases a neurotransmitter called glutamate to bipolar cells. Farther back is the cell body, which contains the cell's organelles. Farther back still is the inner segment, a specialized part of the cell full of mitochondria. The chief function of the inner segment is to provide ATP (energy) for the sodium-potassium pump. Finally, closest to the brain (and farthest from the field of view) is the outer segment, the part of the photoreceptor that absorbs light. Outer segments are actually modified cilia that contain disks filled with opsin, the molecule that absorbs photons, as well as voltage-gated sodium channels. The membranous photoreceptor protein opsin contains a pigment molecule called retinal. In rod cells, these together are called rhodopsin. In cone cells, there are different types of opsins that combine with retinal to form pigments called photopsins. Three different classes of photopsins in the cones react to different ranges of light frequency, a selectivity that allows the visual system to transduce color. 
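The principle of univariance discussed above lends itself to a small numerical illustration. The sketch below is not from the article: the Gaussian sensitivity curves and photon counts are simplified stand-ins, not measured human cone data. It shows that a single cone class cannot distinguish a dim light at its preferred wavelength from a brighter light at a less preferred one, whereas the pattern of responses across the three cone classes differs between the two lights.

```python
# Illustrative sketch of univariance: toy Gaussian curves stand in for real
# cone spectral sensitivities. One cone's response confounds wavelength with
# intensity; the ratios across L, M and S cone classes do not.
import math

PEAKS = {"S": 420.0, "M": 530.0, "L": 560.0}   # approximate peak wavelengths (nm)
WIDTH = 40.0                                   # arbitrary bandwidth for the toy curves

def sensitivity(cone, wavelength_nm):
    """Toy Gaussian absorptance curve for one cone class."""
    return math.exp(-((wavelength_nm - PEAKS[cone]) ** 2) / (2 * WIDTH ** 2))

def responses(wavelength_nm, photon_count):
    """Expected photons absorbed by each cone class for a monochromatic light."""
    return {cone: photon_count * sensitivity(cone, wavelength_nm) for cone in PEAKS}

dim_at_peak = responses(420, photon_count=1000)       # dim light at the S-cone peak
bright_off_peak = responses(460, photon_count=1649)   # brighter light away from the peak

# The S-cone alone absorbs about the same number of photons from both lights...
print(round(dim_at_peak["S"]), round(bright_off_peak["S"]))   # ~1000 vs ~1000
# ...but the M-cone responses differ markedly, so the ratios disambiguate them.
print(round(dim_at_peak["M"]), round(bright_off_peak["M"]))   # ~23 vs ~357
```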
The function of the photoreceptor cell is to convert the light information of the photon into a form of information communicable to the nervous system and readily usable to the organism: This conversion is called signal transduction. The opsin found in the intrinsically photosensitive ganglion cells of the retina is called melanopsin. These cells are involved in various reflexive responses of the brain and body to the presence of (day)light, such as the regulation of circadian rhythms, pupillary reflex and other non-visual responses to light. Melanopsin functionally resembles invertebrate opsins. Retinal mosaic Most vertebrate photoreceptors are located in the retina. The distribution of rods and cones (and classes thereof) in the retina is called the retinal mosaic. Each human retina has approximately 6 million cones and 120 million rods. At the "center" of the retina (the point directly behind the lens) lies the fovea (or fovea centralis), which contains only cone cells; and is the region capable of producing the highest visual acuity or highest resolution. Across the rest of the retina, rods and cones are intermingled. No photoreceptors are found at the blind spot, the area where ganglion cell fibers are collected into the optic nerve and leave the eye. The distribution of cone classes (L, M, S) are also nonhomogenous, with no S-cones in the fovea, and the ratio of L-cones to M-cones differing between individuals. The number and ratio of rods to cones varies among species, dependent on whether an animal is primarily diurnal or nocturnal. Certain owls, such as the nocturnal tawny owl, have a tremendous number of rods in their retinae. Other vertebrates will also have a different number of cone classes, ranging from monochromats to pentachromats. Signaling The path of a visual signal is described by the phototransduction cascade, the mechanism by which the energy of a photon signals a mechanism in the cell that leads to its electrical polarization. This polarization ultimately leads to either the transmittance or inhibition of a neural signal that will be fed to the brain via the optic nerve. The steps that apply to the phototransduction pathway from vertebrate rod/cone photoreceptors are: The Vertebrate visual opsin in the disc membrane of the outer segment absorbs a photon, changing the configuration of a retinal Schiff base cofactor inside the protein from the cis-form to the trans-form, causing the retinal to change shape. This results in a series of unstable intermediates, the last of which binds stronger to a G protein in the membrane, called transducin, and activates it. This is the first amplification step – each photoactivated opsin triggers activation of about 100 transducins. Each transducin then activates the enzyme cGMP-specific phosphodiesterase (PDE). PDE then catalyzes the hydrolysis of cGMP to 5' GMP. This is the second amplification step, where a single PDE hydrolyses about 1000 cGMP molecules. The net concentration of intracellular cGMP is reduced (due to its conversion to 5' GMP via PDE), resulting in the closure of cyclic nucleotide-gated Na+ ion channels located in the photoreceptor outer segment membrane. As a result, sodium ions can no longer enter the cell, and the photoreceptor outer segment membrane becomes hyperpolarized, due to the charge inside the membrane becoming more negative. This change in the cell's membrane potential causes voltage-gated calcium channels to close. 
This leads to a decrease in the influx of calcium ions into the cell and thus the intracellular calcium ion concentration falls. A decrease in the intracellular calcium concentration means that less glutamate is released via calcium-induced exocytosis to the bipolar cell (see below). (The decreased calcium level slows the release of the neurotransmitter glutamate, which excites the postsynaptic bipolar cells and horizontal cells.) ATP provided by the inner segment powers the sodium-potassium pump. This pump is necessary to reset the initial state of the outer segment by taking the sodium ions that are entering the cell and pumping them back out. Hyperpolarization Unlike most sensory receptor cells, photoreceptors actually become hyperpolarized when stimulated; and conversely are depolarized when not stimulated. This means that glutamate is released continuously when the cell is unstimulated, and stimulus causes release to stop. In the dark, cells have a relatively high concentration of cyclic guanosine 3'-5' monophosphate (cGMP), which opens cGMP-gated ion channels. These channels are nonspecific, allowing movement of both sodium and calcium ions when open. The movement of these positively charged ions into the cell (driven by their respective electrochemical gradient) depolarizes the membrane, and leads to the release of the neurotransmitter glutamate. Unstimulated (in the dark), cyclic-nucleotide gated channels in the outer segment are open because cyclic GMP (cGMP) is bound to them. Hence, positively charged ions (namely sodium ions) enter the photoreceptor, depolarizing it to about −40 mV (resting potential in other nerve cells is usually −65 mV). This depolarization current is often known as dark current. Bipolar cells The photoreceptors (rods and cones) transmit to the bipolar cells, which transmit then to the retinal ganglion cells. Retinal ganglion cell axons collectively form the optic nerve, via which they project to the brain. The rod and cone photoreceptors signal their absorption of photons via a decrease in the release of the neurotransmitter glutamate to bipolar cells at its axon terminal. Since the photoreceptor is depolarized in the dark, a high amount of glutamate is being released to bipolar cells in the dark. Absorption of a photon will hyperpolarize the photoreceptor and therefore result in the release of less glutamate at the presynaptic terminal to the bipolar cell. Every rod or cone photoreceptor releases the same neurotransmitter, glutamate. However, the effect of glutamate differs in the bipolar cells, depending upon the type of receptor imbedded in that cell's membrane. When glutamate binds to an ionotropic receptor, the bipolar cell will depolarize (and therefore will hyperpolarize with light as less glutamate is released). On the other hand, binding of glutamate to a metabotropic receptor results in a hyperpolarization, so this bipolar cell will depolarize to light as less glutamate is released. In essence, this property allows for one population of bipolar cells that gets excited by light and another population that gets inhibited by it, even though all photoreceptors show the same response to light. This complexity becomes both important and necessary for detecting color, contrast, edges, etc. Advantages Phototransduction in rods and cones is somewhat unusual in that the stimulus (in this case, light) reduces the cell's response or firing rate, different from most other sensory systems in which a stimulus increases the cell's response or firing rate. 
This difference has important functional consequences: the classic (rod or cone) photoreceptor is depolarized in the dark, which means many sodium ions are flowing into the cell. Thus, the random opening or closing of sodium channels will not affect the membrane potential of the cell; only the closing of a large number of channels, through absorption of a photon, will affect it and signal that light is in the visual field. This system may have less noise relative to sensory transduction schema that increase rate of neural firing in response to stimulus, like touch and olfaction. there is a lot of amplification in two stages of classic phototransduction: one pigment will activate many molecules of transducin, and one PDE will cleave many cGMPs. This amplification means that even the absorption of one photon will affect membrane potential and signal to the brain that light is in the visual field. This is the main feature that differentiates rod photoreceptors from cone photoreceptors. Rods are extremely sensitive and have the capacity of registering a single photon of light, unlike cones. On the other hand, cones are known to have very fast kinetics in terms of rate of amplification of phototransduction, unlike rods. Difference between rods and cones Comparison of human rod and cone cells, from Eric Kandel et al. in Principles of Neural Science. Development The key events mediating rod versus S cone versus M cone differentiation are induced by several transcription factors, including RORbeta, OTX2, NRL, CRX, NR2E3 and TRbeta2. The S cone fate represents the default photoreceptor program; however, differential transcriptional activity can bring about rod or M cone generation. L cones are present in primates, however there is not much known for their developmental program due to use of rodents in research. There are five steps to developing photoreceptors: proliferation of multi-potent retinal progenitor cells (RPCs); restriction of competence of RPCs; cell fate specification; photoreceptor gene expression; and lastly axonal growth, synapse formation and outer segment growth. Early Notch signaling maintains progenitor cycling. Photoreceptor precursors come about through inhibition of Notch signaling and increased activity of various factors including achaete-scute homologue 1. OTX2 activity commits cells to the photoreceptor fate. CRX further defines the photoreceptor specific panel of genes being expressed. NRL expression leads to the rod fate. NR2E3 further restricts cells to the rod fate by repressing cone genes. RORbeta is needed for both rod and cone development. TRbeta2 mediates the M cone fate. If any of the previously mentioned factors' functions are ablated, the default photoreceptor is a S cone. These events take place at different time periods for different species and include a complex pattern of activities that bring about a spectrum of phenotypes. If these regulatory networks are disrupted, retinitis pigmentosa, macular degeneration or other visual deficits may result. Ganglion cell photoreceptors Intrinsically photosensitive retinal ganglion cells (ipRGCs) are a subset (≈1–3%) of retinal ganglion cells, unlike other retinal ganglion cells, are intrinsically photosensitive due to the presence of melanopsin, a light-sensitive protein. Therefore they constitute a third class of photoreceptors, in addition to rod and cone cells. In humans the ipRGCs contribute to non-image-forming functions like circadian rhythms, behavior and pupillary light reflex. 
Peak spectral sensitivity of the receptor is between 460 and 482 nm. However, they may also contribute to a rudimentary visual pathway enabling conscious sight and brightness detection. Classic photoreceptors (rods and cones) also feed into the novel visual system, which may contribute to color constancy. ipRGCs could be instrumental in understanding many diseases, including major causes of blindness worldwide such as glaucoma, a disease that affects ganglion cells, and the study of these receptors offers a potential new avenue to explore in the search for treatments for blindness. ipRGCs were only definitively detected in humans during landmark experiments in 2007 on rodless, coneless humans. As had been found in other mammals, the identity of the non-rod non-cone photoreceptor in humans was found to be a ganglion cell in the inner retina. The researchers had tracked down patients with rare diseases that wiped out classic rod and cone photoreceptor function but preserved ganglion cell function. Despite having no rods or cones, the patients continued to exhibit circadian photoentrainment, circadian behavioural patterns, melatonin suppression, and pupil reactions, with peak spectral sensitivities to environmental and experimental light matching those of the melanopsin photopigment. Their brains could also associate vision with light of this frequency. Non-human photoreceptors Rod and cone photoreceptors are common to almost all vertebrates. The pineal and parapineal glands are photoreceptive in non-mammalian vertebrates, but not in mammals. Birds have photoactive cerebrospinal fluid (CSF)-contacting neurons within the paraventricular organ that respond to light in the absence of input from the eyes or neurotransmitters. Invertebrate photoreceptors in organisms such as insects and molluscs are different in both their morphological organization and their underlying biochemical pathways. This article describes human photoreceptors.
Biology and health sciences
Visual system
Biology
401885
https://en.wikipedia.org/wiki/Asteroseismology
Asteroseismology
Asteroseismology is the study of oscillations in stars. Stars have many resonant modes and frequencies, and the path of sound waves passing through a star depends on the local speed of sound, which in turn depends on local temperature and chemical composition. Because the resulting oscillation modes are sensitive to different parts of the star, they inform astronomers about the internal structure of the star, which is otherwise not directly possible from overall properties like brightness and surface temperature. Asteroseismology is closely related to helioseismology, the study of stellar pulsation specifically in the Sun. Though both are based on the same underlying physics, more and qualitatively different information is available for the Sun because its surface can be resolved. Theoretical background By linearly perturbing the equations defining the mechanical equilibrium of a star (i.e. mass conservation and hydrostatic equilibrium) and assuming that the perturbations are adiabatic, one can derive a system of four differential equations whose solutions give the frequency and structure of a star's modes of oscillation. The stellar structure is usually assumed to be spherically symmetric, so the horizontal (i.e. non-radial) component of the oscillations is described by spherical harmonics, indexed by an angular degree ℓ and azimuthal order m. In non-rotating stars, modes with the same angular degree must all have the same frequency because there is no preferred axis. The angular degree indicates the number of nodal lines on the stellar surface, so for large values of ℓ, the opposing sectors roughly cancel out, making it difficult to detect light variations. As a consequence, modes can only be detected up to an angular degree of about 3 in intensity and about 4 if observed in radial velocity. By additionally assuming that the perturbation to the gravitational potential is negligible (the Cowling approximation) and that the star's structure varies more slowly with radius than the oscillation mode, the equations can be reduced approximately to one second-order equation for the radial component of the displacement eigenfunction ξ_r, d²ξ_r/dr² = (ω²/c_s²)(1 - N²/ω²)(S_ℓ²/ω² - 1) ξ_r, where r is the radial co-ordinate in the star, ω is the angular frequency of the oscillation mode, c_s is the sound speed inside the star, N is the Brunt–Väisälä or buoyancy frequency and S_ℓ is the Lamb frequency. The last two are defined by N² = g(1/Γ₁ · d ln p/dr - d ln ρ/dr) and S_ℓ² = ℓ(ℓ+1)c_s²/r² respectively. By analogy with the behaviour of simple harmonic oscillators, this implies that oscillating solutions exist when the frequency ω is either greater or less than both N and S_ℓ. We identify the former case as high-frequency pressure modes (p-modes) and the latter as low-frequency gravity modes (g-modes). This basic separation allows us to determine (to reasonable accuracy) where we expect what kind of mode to resonate in a star. By plotting the curves N(r) and S_ℓ(r) (for a given ℓ), we expect p-modes to resonate at frequencies above both curves and g-modes at frequencies below both curves. Excitation mechanisms Kappa-mechanism Under fairly specific conditions, some stars have regions where heat is transported by radiation and the opacity is a sharply decreasing function of temperature. This opacity bump can drive oscillations through the κ-mechanism (or Eddington valve). Suppose that, at the beginning of an oscillation cycle, the stellar envelope has contracted. By expanding and cooling slightly, the layer in the opacity bump becomes more opaque, absorbs more radiation, and heats up.
This heating causes expansion and further cooling, and the layer becomes even more opaque. This continues until the material opacity stops increasing so rapidly, at which point the radiation trapped in the layer can escape. The star contracts and the cycle prepares to commence again. In this sense, the opacity acts like a valve that traps heat in the star's envelope. Pulsations driven by the κ-mechanism are coherent and have relatively large amplitudes. It drives the pulsations in many of the longest-known variable stars, including the Cepheid and RR Lyrae variables. Surface convection In stars with surface convection zones, turbulent fluid motions near the surface simultaneously excite and damp oscillations across a broad range of frequencies. Because the modes are intrinsically stable, they have low amplitudes and are relatively short-lived. This is the driving mechanism in all solar-like oscillators. Convective blocking If the base of a surface convection zone is sharp and the convective timescales are slower than the pulsation timescales, the convective flows react too slowly to perturbations, which can then build up into large, coherent pulsations. This mechanism is known as convective blocking and is believed to drive pulsations in the Gamma Doradus variables. Tidal excitation Observations from the Kepler satellite revealed eccentric binary systems in which oscillations are excited during the closest approach. These systems are known as heartbeat stars because of the characteristic shape of the lightcurves. Types of oscillators Solar-like oscillators Because solar oscillations are driven by near-surface convection, any stellar oscillations caused similarly are known as solar-like oscillations and the stars themselves as solar-like oscillators. However, solar-like oscillations also occur in evolved stars (subgiants and red giants), which have convective envelopes, even though the stars are not Sun-like. Cepheid variables Cepheid variables are one of the most important classes of pulsating star. They are core-helium burning stars with masses above about 5 solar masses. They principally oscillate at their fundamental modes, with typical periods ranging from days to months. Their pulsation periods are closely related to their luminosities, so it is possible to determine the distance to a Cepheid by measuring its oscillation period, computing its luminosity, and comparing this to its observed brightness; a short numerical sketch of this estimate is given below. Cepheid pulsations are excited by the kappa mechanism acting on the second ionization zone of helium. RR Lyrae variables RR Lyraes are similar to Cepheid variables but of lower metallicity (i.e. Population II) and much lower masses (about 0.6 to 0.8 times solar). They are core helium-burning giants that oscillate in one or both of their fundamental mode and first overtone. The oscillations are also driven by the kappa mechanism acting through the second ionization of helium. Many RR Lyraes, including RR Lyrae itself, show long-period amplitude modulations, known as the Blazhko effect. Delta Scuti and Gamma Doradus stars Delta Scuti variables are found roughly where the classical instability strip intersects the main sequence. They are typically A- to early F-type dwarfs and subgiants, and the oscillation modes are low-order radial and non-radial pressure modes, with periods ranging from 0.25 to 8 hours and a wide range of magnitude variations. Like Cepheid variables, the oscillations are driven by the kappa mechanism acting on the second ionization of helium.
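The period–luminosity distance estimate for Cepheid variables described above can be sketched in a few lines of Python. The V-band calibration used here, M_V ≈ -2.43(log₁₀ P - 1) - 4.05, is one published fit quoted purely for illustration, and the example star (a 10-day Cepheid of apparent magnitude 12, with extinction ignored) is hypothetical.

import math

def cepheid_distance_pc(period_days, apparent_mag_v):
    """Estimate distance from a Leavitt (period-luminosity) relation.

    The calibration below is an illustrative V-band fit, not the definitive relation.
    """
    abs_mag_v = -2.43 * (math.log10(period_days) - 1.0) - 4.05
    distance_modulus = apparent_mag_v - abs_mag_v          # m - M
    return 10 ** (distance_modulus / 5.0 + 1.0)            # distance in parsecs

# Hypothetical 10-day Cepheid observed at apparent magnitude 12 (extinction ignored).
print(round(cepheid_distance_pc(10.0, 12.0)))  # roughly 16,000 pc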
SX Phoenicis variables are regarded as metal-poor relatives of Delta Scuti variables. Gamma Doradus variables occur in similar stars to the red end of the Delta Scuti variables, usually of early F-type. The stars show multiple oscillation frequencies between about 0.5 and 3 days, which is much slower than the low-order pressure modes. Gamma Doradus oscillations are generally thought to be high-order gravity modes, excited by convective blocking. Following results from Kepler, it appears that many Delta Scuti stars also show Gamma Doradus oscillations and are therefore hybrids. Rapidly oscillating Ap (roAp) stars Rapidly oscillating Ap stars have similar parameters to Delta Scuti variables, mostly being A- and F-type, but they are also strongly magnetic and chemically peculiar (hence the p spectral subtype). Their dense mode spectra are understood in terms of the oblique pulsator model: the mode's frequencies are modulated by the magnetic field, which is not necessarily aligned with the star's rotation (as is the case in the Earth). The oscillation modes have frequencies around 1500 μHz and amplitudes of a few mmag. Slowly pulsating B stars and Beta Cephei variables Slowly pulsating B (SPB) stars are B-type stars with oscillation periods of a few days, understood to be high-order gravity modes excited by the kappa mechanism. Beta Cephei variables are slightly hotter (and thus more massive), also have modes excited by the kappa mechanism and additionally oscillate in low-order gravity modes with periods of several hours. Both classes of oscillators contain only slowly rotating stars. Variable subdwarf B stars Subdwarf B (sdB) stars are in essence the cores of core-helium burning giants who have somehow lost most of their hydrogen envelopes, to the extent that there is no hydrogen-burning shell. They have multiple oscillation periods that range between about 1 and 10 minutes and amplitudes anywhere between 0.001 and 0.3 mag in visible light. The oscillations are low-order pressure modes, excited by the kappa mechanism acting on the iron opacity bump. White dwarfs White dwarfs are characterized by spectral type, much like ordinary stars, except that the relationship between spectral type and effective temperature does not correspond in the same way. Thus, white dwarfs are known by types DO, DA and DB. Cooler types are physically possible but the Universe is too young for them to have cooled enough. White dwarfs of all three types are found to pulsate. The pulsators are known as GW Virginis stars (DO variables, sometimes also known as PG 1159 stars), V777 Herculis stars (DB variables) and ZZ Ceti stars (DA variables). All pulsate in low-degree, high-order g-modes. The oscillation periods broadly decrease with effective temperature, ranging from about 30 min down to about 1 minute. GW Virginis and ZZ Ceti stars are thought to be excited by the kappa mechanism; V777 Herculis stars by convective blocking. Space missions A number of past, present and future spacecraft have asteroseismology studies as a significant part of their missions (order chronological). WIRE – A NASA satellite launched in 1999. A failed large infrared telescope, the two-inch aperture star tracker was used for more than a decade as a bright-star asteroseismology instrument. Re-entered Earth's atmosphere 2011. MOST – A Canadian satellite launched in 2003. The first spacecraft dedicated to asteroseismology. CoRoT – A French led ESA planet-finder and asteroseismology satellite launched in 2006. 
Kepler space telescope – A NASA planet-finder spacecraft launched in 2009, repurposed as K2 since the failure of a second reaction wheel prevented the telescope from continuing to monitor the same field. BRITE – A constellation of nanosatellites used to study the brightest oscillating stars. First two satellites launched Feb 25, 2013. TESS – A NASA planet-finder that will survey bright stars across most of the sky launched in 2018. PLATO – A planned ESA mission that will specifically exploit asteroseismology to obtain accurate masses and radii of transiting planets.
Physical sciences
Stellar astronomy
Astronomy
402048
https://en.wikipedia.org/wiki/Closed%20system
Closed system
A closed system is a natural physical system that does not allow transfer of matter in or out of the system, although, in the contexts of physics, chemistry, engineering, etc., the transfer of energy (e.g. as work or heat) is allowed. Physics In classical mechanics In nonrelativistic classical mechanics, a closed system is a physical system that does not exchange any matter with its surroundings, and is not subject to any net force whose source is external to the system. A closed system in classical mechanics would be equivalent to an isolated system in thermodynamics. Closed systems are often used to limit the factors that can affect the results of a specific problem or experiment. In thermodynamics In thermodynamics, a closed system can exchange energy (as heat or work), but not matter, with its surroundings. An isolated system cannot exchange any heat, work, or matter with the surroundings, while an open system can exchange energy and matter. (This scheme of definition of terms is not uniformly used, though it is convenient for some purposes. In particular, some writers use 'closed system' where 'isolated system' is used here.) For a simple system, with only one type of particle (atom or molecule), a closed system amounts to a constant number of particles. However, for systems which are undergoing a chemical reaction, there may be all sorts of molecules being generated and destroyed by the reaction process. In this case, the fact that the system is closed is expressed by stating that the total number of each elemental atom is conserved, no matter what kind of molecule it may be a part of. Mathematically: ∑_j a_ij N_j = b_i^0, where N_j is the number of j-type molecules, a_ij is the number of atoms of element i in molecule j and b_i^0 is the total number of atoms of element i in the system, which remains constant, since the system is closed. There will be one such equation for each different element in the system. In thermodynamics, a closed system is important for solving complicated thermodynamic problems. It allows the elimination of some external factors that could alter the results of the experiment or problem, thus simplifying it. A closed system can also be used in situations where thermodynamic equilibrium is required to simplify the situation. In quantum physics The Schrödinger equation, iħ ∂/∂t |ψ(t)⟩ = Ĥ |ψ(t)⟩, describes the behavior of an isolated or closed quantum system, that is, by definition, a system which does not interchange information (i.e. energy and/or matter) with another system. So if an isolated system is in some pure state |ψ(t)⟩ ∈ H at time t, where H denotes the Hilbert space of the system, the time evolution of this state (between two consecutive measurements) is given by this equation. Here i is the imaginary unit, ħ is the Planck constant divided by 2π, the symbol ∂/∂t indicates a partial derivative with respect to time t, ψ (the Greek letter psi) is the wave function of the quantum system, and Ĥ is the Hamiltonian operator (which characterizes the total energy of any given wave function and takes different forms depending on the situation). In chemistry In chemistry, a closed system is one from which no reactants or products can escape, while heat can be exchanged freely (e.g. an ice cooler). A closed system can be used when conducting chemical experiments where temperature is not a factor (i.e. reaching thermal equilibrium). In engineering In an engineering context, a closed system is a bound system, i.e. one that is defined, in which every input is known and every resultant is known (or can be known) within a specific time.
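As a concrete illustration of the atom-conservation condition above, the following short Python sketch (a hypothetical example, not from the source) checks that the total count of every element is unchanged by a reaction taking place inside a closed system; methane combustion is used as an arbitrary example reaction.

from collections import Counter

# Atom counts per molecule: element -> number of atoms in one molecule.
FORMULAS = {
    "CH4": {"C": 1, "H": 4},
    "O2":  {"O": 2},
    "CO2": {"C": 1, "O": 2},
    "H2O": {"H": 2, "O": 1},
}

def element_totals(mixture):
    """Total atoms of each element in a mixture {molecule: count} (the b_i of the text)."""
    totals = Counter()
    for molecule, n in mixture.items():
        for element, atoms in FORMULAS[molecule].items():
            totals[element] += atoms * n
    return totals

before = {"CH4": 1, "O2": 2}   # contents of the closed system before the reaction
after  = {"CO2": 1, "H2O": 2}  # the same system after CH4 + 2 O2 -> CO2 + 2 H2O

# In a closed system the totals must match for every element.
print(element_totals(before) == element_totals(after))  # True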
Physical sciences
Thermodynamics
Physics
402188
https://en.wikipedia.org/wiki/Ammonium%20bicarbonate
Ammonium bicarbonate
Ammonium bicarbonate is an inorganic compound with formula (NH4)HCO3. The compound has many names, reflecting its long history. Chemically speaking, it is the bicarbonate salt of the ammonium ion. It is a colourless solid that degrades readily to carbon dioxide, water and ammonia. Production Ammonium bicarbonate is produced by combining carbon dioxide and ammonia: CO2 + NH3 + H2O -> (NH4)HCO3 Since ammonium bicarbonate is thermally unstable, the reaction solution is kept cold, which allows the precipitation of the product as white solid. About 100,000 tons were produced in this way in 1997. Ammonia gas passed into a strong aqueous solution of the sesquicarbonate (a 2:1:1 mixture of (NH4)HCO3, (NH4)2CO3, and H2O) converts it into normal ammonium carbonate ((NH4)2CO3), which can be obtained in the crystalline condition from a solution prepared at about 30 °C. This compound on exposure to air gives off ammonia and reverts to ammonium bicarbonate. Salt of hartshorn Compositions containing ammonium carbonate have long been known. They were once produced commercially, formerly known as sal volatile or salt of hartshorn. It was obtained by the dry distillation of nitrogenous organic matter such as hair, horn, leather. In addition to ammonium bicarbonate, this material contains ammonium carbamate (NH4CO2NH2), and ammonium carbonate ((NH4)2CO3). It is sometimes called ammonium sesquicarbonate. It possesses a strong ammoniacal smell, and on digestion with alcohol, the carbamate is dissolved leaving a residue of ammonium bicarbonate. A similar decomposition takes place when the sesquicarbonate is exposed to air. Uses Ammonium bicarbonate is used in the food industry as a leavening agent for flat baked goods, such as cookies and crackers. It was commonly used in the home before modern-day baking powder was made available. Many baking cookbooks, especially from Scandinavian countries, may still refer to it as hartshorn or hornsalt, while it is known as "hirvensarvisuola" in Finnish, "hjortetakksalt" in Norwegian, "hjortetakssalt" in Danish, "hjorthornssalt" in Swedish, and "Hirschhornsalz" in German (lit., "salt of hart's horn"). Although there is a slight smell of ammonia during baking, this quickly dissipates, leaving no taste. It is used in, for example, Swedish "drömmar" biscuits and Danish "klejner" Christmas biscuits, and German Lebkuchen. In many cases it may be replaced with baking soda or baking powder, or a combination of both, depending on the recipe composition and leavening requirements. Compared to baking soda or potash, hartshorn has the advantage of producing more gas for the same amount of agent, and of not leaving any salty or soapy taste in the finished product, as it completely decomposes into water and gaseous products that evaporate during baking. It cannot be used for moist, bulky baked goods however, such as normal bread or cakes, since some ammonia will be trapped inside and will cause an unpleasant taste. It has been assigned E number E503 for use as a food additive in the European Union. It is commonly used as an inexpensive nitrogen fertilizer in China, but is now being phased out in favor of urea for quality and stability. This compound is used as a component in the production of fire-extinguishing compounds, pharmaceuticals, dyes, pigments, and it is also a basic fertilizer, being a source of ammonia. Ammonium bicarbonate is still widely used in the plastics and rubber industry, in the manufacture of ceramics, in chrome leather tanning, and for the synthesis of catalysts. 
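The claim above that hartshorn produces more gas per unit weight than baking soda can be checked with simple stoichiometry. The sketch below assumes complete thermal decomposition, counts NH3 and CO2 as the leavening gases, and uses standard molar masses; it is an illustration, not a figure from the source.

# Moles of leavening gas released per gram, assuming complete decomposition.
# NH4HCO3 -> NH3 + CO2 + H2O        (NH3 + CO2 counted as gas)
# 2 NaHCO3 -> Na2CO3 + CO2 + H2O    (0.5 mol CO2 per mol NaHCO3)
M_NH4HCO3 = 79.06   # g/mol
M_NAHCO3  = 84.01   # g/mol

gas_per_gram_hartshorn   = 2.0 / M_NH4HCO3   # NH3 + CO2, nothing solid left behind
gas_per_gram_baking_soda = 0.5 / M_NAHCO3    # CO2 only, and Na2CO3 remains in the dough

print(round(gas_per_gram_hartshorn, 4))    # ~0.0253 mol/g
print(round(gas_per_gram_baking_soda, 4))  # ~0.006 mol/g, roughly four times less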
It is also used for buffering solutions to make them slightly alkaline during chemical purification, such as high-performance liquid chromatography. Because it entirely decomposes to volatile compounds, this allows rapid recovery of the compound of interest by freeze-drying. Relatedly it is also useful as an alkaline buffering agent for analytical LC–MS as its volatility allows it to be rapidly removed automatically from the sample stream in the low pressure spray chambers used by many standard mass spectrometry detectors found at the end of typical LC-MS systems, such as electrospray ionization detectors. This is critical as most mass spectrometry detectors become signal saturated or even damaged with more than a trace amount of ions entering the detector proper at any one time. This issue limits buffering agents and other additives in LC-MS buffers to either extremely trace concentrations or to fairly volatile compounds. In pH ranges from about 7 to 9, ammonium bicarbonate is one of the only options available as the primary buffering agent for most LC-MS buffers. Ammonium bicarbonate is also a key component of the expectorant cough syrup "Senega and Ammonia". It's also used as an attractant for catching insect such as walnut husk fly (Rhagoletis completa). Reactions It dissolves in water to give a mildly alkaline solution. It is insoluble in acetone and alcohols. Ammonium bicarbonate decomposes above about 36 °C into ammonia, carbon dioxide, and water in an endothermic process and so causes a drop in the temperature of the water: NH4HCO3 -> NH3 + H2O + CO2 When treated with acids, ammonium salts are also produced: NH4HCO3 + HCl -> NH4Cl + CO2 + H2O Reaction with base produces ammonia. It reacts with sulfates of alkaline-earth metals precipitating their carbonates: CaSO4 + 2 NH4HCO3 -> CaCO3 + (NH4)2SO4 + CO2 + H2O It also reacts with alkali metal halides, giving alkali metal bicarbonate and ammonium halide: NH4HCO3 + NaCl -> NH4Cl + NaHCO3 NH4HCO3 + KI -> NH4I + KHCO3 NH4HCO3 + NaBr -> NH4Br + NaHCO3 Natural occurrence The compound occurs in nature as an exceedingly rare mineral teschemacherite. It can also be obtained from deer antlers. Safety Ammonium bicarbonate is an irritant to the skin, eyes and respiratory system. Short-term health effects may occur immediately or shortly after exposure to ammonium bicarbonate. Breathing ammonium bicarbonate can irritate the nose, throat and lungs causing coughing, wheezing and/or shortness of breath. Repeated exposure may cause bronchitis to develop with cough, and/or shortness of breath. Health effects can occur some time after exposure to ammonium bicarbonate and can last for months or years. Where possible, operations should be enclosed and the use of local exhaust ventilation at the site of chemical release is recommended. If local exhaust ventilation or enclosure is not used, respirators are necessary. Wear protective work clothing and change clothes and wash thoroughly immediately after exposure to ammonium bicarbonate. Ammonium bicarbonate from China used to make cookies was found to be contaminated with melamine, and imports were banned in Malaysia following the 2008 Chinese milk scandal.
Physical sciences
Carbonic oxyanions
Chemistry
402652
https://en.wikipedia.org/wiki/Hoarding%20disorder
Hoarding disorder
Hoarding disorder (HD) or Plyushkin's disorder is a mental disorder characterised by persistent difficulty in parting with possessions and engaging in excessive acquisition of items that are not needed or for which no space is available. This results in severely cluttered living spaces, distress, and impairment in personal, family, social, educational, occupational, or other important areas of functioning. Excessive acquisition is characterized by repetitive urges or behaviours related to amassing or buying property. Difficulty discarding possessions is characterized by a perceived need to save items and distress associated with discarding them. Accumulation of possessions results in living spaces becoming cluttered to the point that their use or safety is compromised. It is recognised by the eleventh revision of the International Classification of Diseases (ICD-11) and the Diagnostic and Statistical Manual of Mental Disorders, 5th edition (DSM-5). Prevalence rates are estimated at 2% to 5% in adults, though the condition typically manifests in childhood with symptoms worsening in advanced age, at which point collected items have grown excessive and family members who would otherwise help to maintain and control the levels of clutter have either died or moved away. People with hoarding disorder commonly live with other complex and/or psychological disorders such as depression, anxiety, obsessive compulsive disorder (OCD), autism spectrum disorder (ASD), and/or attention deficit hyperactivity disorder (ADHD). Other factors often associated with hoarding include alcohol dependence and paranoid, schizotypal and avoidant traits. Diagnosis Collecting and hoarding may seem similar, but there are distinct characteristics that set the behaviors apart. Collecting is a hobby often involving the targeted search and acquisition of specific items that form—at least from the perspective of the collector—a greater appreciation, deeper understanding, or increased synergistic value when combined with other similar items. Hoarding, by contrast, typically appears haphazard and involves the overall acquiring of common items that would not be especially meaningful to the person who is gathering such items in large quantities. People who hoard keep common items that hold little to no meaning or value to others, unlike some collectors, whose items may be of great value to select people. Most hoarders are disorganized, and their living areas are crowded and in disarray. Most collectors can afford to store their items systematically or to have enough room to display their collections. Age, mental state, or finances have caused some collectors to fall into a hoarding state. The Clutter Image Rating A UK charity called Hoarding UK has found that people have very different ideas about what it means to have a cluttered home. For some, a small pile of things in the corner of an otherwise well-ordered room constitutes serious clutter. For others, only when the narrow pathways make it hard to get through a room does the clutter register. To ensure an accurate sense of a clutter problem and encourage people to get support, Hoarding UK uses the Clutter Image Rating, created by R. O. Frost and G. Steketee, a series of pictures of rooms in various stages of clutter – from completely clutter-free to very severely cluttered. Epidemiology The prevalence of hoarding disorder is estimated to be between 2 and 6 percent, although some surveys indicate the lifetime prevalence may be as high as 14%. 
First-degree relatives of those with hoarding disorder are significantly more likely to report hoarding symptoms, and hoarding likely comes about due to a combination of genetic and environmental factors. Rates of hoarding increase significantly with age, and people over the age of 54 are three times as likely to meet criteria for hoarding disorder. However, hoarding symptoms typically manifest in early childhood, and worsen to the point of becoming clinically significant during middle age. Over half of hoarders report the onset of hoarding as being associated with a traumatic life event, and in this portion of hoarders, the age of onset is much higher. Epidemiological studies have found that hoarding is twice as common in males, although clinical studies on hoarding tend to be predominantly female, suggesting that male hoarders are a significantly understudied and under-treated population. Hoarding is a significant problem around the world and can pose a public health risk when hoarding escalates enough to damage the integrity of a structure or attract vermin. Accumulated items can block exits during fires and increase the risk of injury. In Japan, hoarder houses are known as "garbage mansions" (ごみ屋敷, gomi yashiki), and have become a topic of public alarm in Japanese mass media. In the Eastern United States, they are sometimes called Collyer mansions or Collyers, after the infamous Collyer brothers. Comorbidity Under the DSM-IV, hoarding was listed as a symptom of obsessive–compulsive personality disorder and obsessive–compulsive disorder; however, hoarding was found to have a relatively weak connection to OCD or OCPD compared to their other symptoms. Due to this evidence, hoarding disorder was separated as its own disorder in the DSM-5. However, hoarding does frequently co-occur with OCD. OCD patients with hoarding symptoms were found to display a distinct form of hoarding in which they were more likely to hoard "bizarre items" and perform compulsive rituals associated with their hoarding behavior, such as rituals around checking items or rituals to be performed before discarding them. However, the majority of hoarders do not show OCD symptoms. Hoarding has been found to be correlated with depression, social anxiety, compulsive grooming disorders such as trichotillomania, bipolar disorder, reduced cognitive and affective empathy and compulsive shopping. Hoarders have higher than average rates of traumatic past events, particularly those associated with loss or deprivation. Past events which occurred before the onset of hoarding are correlated to a subject's emotional attachment to physical objects, and past events after the onset of hoarding increase a subject's anxiety around memory. Hoarders are also more likely to have a past with alcohol abuse. The prevalence of different comorbidities is influenced by gender. In men, hoarding is associated with generalized anxiety disorder and tics, while among women, hoarding is associated with social phobia, post-traumatic stress disorder, body dysmorphic disorder, and compulsive grooming behaviors like nail-biting and skin-picking. Studies In a 2010 study using data from self-reports of hoarding behavior from 751 participants, it was found most reported the onset of their hoarding symptoms between the ages of 11 and 20 years old, with 70% reporting the behaviors before the age of 21. Fewer than 4% of people reported the onset of their symptoms after the age of 40. 
The data showed that compulsive hoarding usually begins early, but often does not become more prominent until after age 40. Different reasons have been given for this, such as the effects of family presence earlier in life and limits on hoarding imposed by housing situation and lifestyle. The understanding of early onset hoarding behavior may help in the future to better distinguish hoarding behavior from "normal" childhood collecting behaviors. A second key part of this study was to determine if stressful life events are linked to the onset of hoarding symptoms. Similar to self-harming, traumatized persons may create a problem for themselves in order to avoid their real anxiety or trauma. Facing their real issues may be too difficult for them, so they create an artificial problem (in their case, hoarding) and prefer to battle with it rather than determine, face, or do something about their real anxieties. Hoarders may suppress their psychological pain by hoarding. The study shows that adults who hoard report a greater lifetime incidence of having possessions taken by force, forced sexual activity as either an adult or a child, including forced sexual intercourse, and being physically handled roughly during childhood, thus proving traumatic events are positively correlated with the severity of hoarding. For each five years of life the participant would rate the severity of their hoarding symptoms from 1 to 4, 4 being the most severe. Of the participants, 548 reported a chronic course, 159 an increasing course and 39 people, a decreasing course of illness. The incidents of increased hoarding behavior were usually correlated to five categories of stressful life events. Although excessive acquiring is not a diagnostic criterion of hoarding, at least two-thirds of individuals with hoarding disorder excessively acquire possessions. Having a more anxiously attached interpersonal style is associated with more compulsive buying and greater acquisition of free items and these relationships are mediated by stronger distress intolerance and greater anthropomorphism. Anthropomorphism has been shown to increase both the sentimental value and perceived utility of items. These findings indicate that individuals may over-value their possessions to compensate for thwarted interpersonal needs. Feeling alone and/or disconnected from others may impair people's ability to tolerate distress and increase people's tendencies to see human-like qualities in objects. The humanness of items may increase their perceived value and individuals may acquire these valued objects to alleviate distress. Individuals with hoarding problems have been shown to have greater interpersonal problems than individuals who only excessively acquire possessions, which provides some support for the assumption that individuals with hoarding problems may have a stronger motivation to hang onto possessions for support. As possessions cannot provide support in the way humans can and because saving excessively can frustrate other people due to its impact on their quality of life, individuals with hoarding disorder may be caught in a feedback loop. They may save to alleviate distress, but this saving may cause distress, which may lead them to keep saving to alleviate the distress. Treatment Cognitive-behavioral therapy (CBT) is a commonly implemented therapeutic intervention for compulsive hoarding. As part of cognitive behavior therapy, the therapist may help the patient to: Discover why one is compelled to hoard. 
Learn to organize possessions in order to decide what to discard. Develop decision-making skills. Declutter the home during in-home visits by a therapist or professional organizer. Gain and perform relaxation skills. Attend family and/or group therapy. Be open to trying psychiatric hospitalization if the hoarding is serious. Have periodic visits and consultations to keep a healthy lifestyle. This modality of treatment usually involves exposure and response prevention to situations that cause anxiety and cognitive restructuring of beliefs related to hoarding. Furthermore, research has also shown that certain CBT protocols have been more effective in treatment than others. CBT programs that specifically address the motivation of the affected person, organization, acquiring new clutter, and removing current clutter from the home have shown promising results. This type of treatment typically involves in-home work with a therapist combined with between-session homework, the completion of which is associated with better treatment outcomes. Research on internet-based CBT treatments for the disorder (where participants have access to educational resources, cognitive strategies, and chat groups) has also shown promising results both in terms of short- and long-term recovery. Other therapeutic approaches that have been found to be helpful: Motivational interviewing originated in addiction therapy. This method is significantly helpful when used in hoarding cases in which insight is poor and ambivalence to change is marked. Harm reduction rather than symptom reduction. Also borrowed from addiction therapy. The goal is to decrease the harmful implications of the behavior, rather than the hoarding behaviors. Group psychotherapy reduces social isolation and social anxiety and is cost-effective compared to one-on-one intervention. Group CBT tends to have similar outcomes to individual therapy. Although group treatment often does not include home sessions, experimental research suggests that treatment outcomes may be improved if home sessions are included. Individuals have been shown to discard more possessions when in a cluttered environment compared to a tidy environment. Indeed, a meta-analysis found that a greater number of home sessions improves CBT outcomes. Individuals with hoarding behaviors are often described as having low motivation and poor compliance levels, and as being indecisive and procrastinators, which may frequently lead to premature termination (i.e., dropout) or low response to treatment. Therefore, it was suggested that future treatment approaches, and pharmacotherapy in particular, be directed to address the underlying mechanisms of cognitive impairments demonstrated by individuals with hoarding symptoms. Mental health professionals frequently express frustration regarding hoarding cases, mostly due to premature termination and poor response to treatment. Patients are frequently described as indecisive, procrastinators, recalcitrant, and as having low or no motivation, which can explain why many interventions fail to accomplish significant results. To overcome this obstacle, some clinicians recommend accompanying individual therapy with home visits to help the clinician: Likewise, certain cases are assisted by professional organizers as well. In popular culture Emily Maguire wrote Love Objects in 2021, a novel about a woman with hoarding disorder that focused on the behavior and the consequences of a hoarder being exposed. 
There have been several television shows that focused on those suspected to have hoarding disorder. Hoarders, an ongoing series by A&E, focuses on helping one or two individual "hoarders" per episode and features a rotating cast of professional psychologists and organizers who specialize in hoarding disorder. A similar show, Hoarding: Buried Alive ran from 2010 to 2014 on TLC. Hoarders: Canada followed a similar format to Hoarders and Hoarding: Buried Alive. Britain's Biggest Hoarders is an ongoing series hosted by Jasmine Harman, the daughter of a hoarder, and follows her as she and a team of experts seek to help others with the disorder. The Hoarder Next Door is a four-part series based in Britain that followed a group of hoarders participating in a treatment program led by psychotherapist Stelios Kiosses. Confessions: Animal Hoarding is a six-episode series aired on Animal Planet that focused on those who hoard animals and their living conditions. Hoarder House Flippers is more focused on the hoarded house, where teams work hard to flip properties that have been hoarded. There have been possible depictions of hoarding in literature before the diagnosis was created. In Nikolai Gogol’s book Dead Souls (1842), wealthy Plyushkin displays hoarding behaviors. For example, he serves an old cake from years ago to a business partner, having a servant scrape off the mold. He is famous among the locals for his compulsion to find and keep items. Le Cousin Pons, a novella written by Honoré de Balzac in 1846, features Pons, who hoards art and antiques. He collected relatively low-value items, hoping they would become more valuable with time. However, he is unwilling to part with any of his items even when he becomes destitute. He dies with his collection intact. In Charles Dickens's Bleak House (1862), London shop owner Krook hoards items, primarily legal documents. He continues to buy items but doesn't sell any, even though he claims he buys to sell later for a profit. Several documents that would resolve a legal case central to the novel's plot are lost among his hoard.
Biology and health sciences
Mental disorders
Health
402681
https://en.wikipedia.org/wiki/Aluminum%20can
Aluminum can
An aluminum can (British English: aluminium can) is a single-use container for packaging made primarily of an aluminum exterior with an epoxy resin or polymer coated interior. It is commonly used for food and beverages such as olives and soup but also for products such as oil, chemicals, and other liquids. Global production is about 180 billion cans annually and constitutes the largest single use of aluminum globally. Usage Use of aluminum in cans began in 1957. Aluminum offers greater malleability, resulting in ease of manufacture; this gave rise to the two-piece can, where all but the top of the can is simply stamped out of a single piece of aluminum, rather than constructed from two pieces of steel. The inside of the can is lined by spray coating an epoxy lacquer or polymer to protect the aluminum from being corroded by acidic contents such as carbonated beverages and to prevent it from imparting a metallic taste to the beverage. The epoxy may contain bisphenol A. A label is either printed directly on the side of the can or glued to the outside of the curved surface, indicating its contents. Most aluminum cans are made of two pieces. The bottom and body are "drawn" or "drawn and ironed" from a flat plate or shallow cup. After filling, the can "end" is sealed onto the top of the can. This is supplemented by a sealing compound to ensure that the top is airtight. The advantages of aluminum over steel (tinplate) cans include light weight, competitive cost, easy-open aluminum ends (no need for a can opener), a clean appearance, resistance to rust, and ease of pressing into shape. The easy-open aluminum end for beverage cans was developed by Alcoa in 1962 for the Pittsburgh Brewing Company and is now used in nearly all of the canned beer market. Recycling Aluminum cans can be made with recycled aluminum. In 2017, 3.8 million tons of aluminum were generated in the US, of which 0.62 million tons were recycled - a recycling rate of 16%. According to estimates from the Aluminum Association, a large amount of aluminium remains unrecycled in the US, where roughly $700 million worth of cans end up in landfills each year. In 2012, 92% of the aluminum beverage cans sold in Switzerland were recycled. Cans are the most recycled beverage container, at a rate of 69% worldwide. One issue is that the top of the can is made from a blend of aluminum and magnesium to increase its strength. When the can is melted for recycling, the mixture is unsuitable for either the top or the bottom/side. Instead of mixing recycled metal with more aluminum (to soften it) or magnesium (to harden it), a new approach uses annealing to produce an alloy that works for both. The aluminum can is also considered the most valuable recyclable material in an average recycling bin. It is estimated that Americans throw away nearly 1 billion dollars a year in wasted aluminum. The aluminum industry pays nearly 800 million dollars a year for recycled aluminum since it is so versatile. Because of the advantages of aluminium packaging (shelf life, durability, food grade factor) over plastics, it is considered an alternative to PET bottles, with the possibility of replacing the majority of them in the next decades. Cans as collectibles Some people collect cans as a hobby. Can collections can be exclusive to one sector only, e.g., some collectors may collect soda cans only, while others may dedicate themselves to collecting beer cans or oil cans exclusively, but some collectors may collect cans regardless of the type of can.
One aspect that may make someone interested in building a can collection as a hobby is the variety of cans available worldwide promoting such things as films, musical albums and tours, sporting teams and events, countries, ideals and even some non-food or petrol-oriented brands and companies. Celebrities can also be featured on collectible cans; such was the case of tennis player Andre Agassi, who had a set of four Pepsi Max soda cans dedicated to him in 1996. Davide Andreani of Italy is in the Guinness Book of World Records for having the largest collection of soda cans of one specific brand in the world, with over 20,000 cans in his collection. According to a website named canmuseum.com, the largest collection of Pepsi Cola cans belongs to Chris Cavaletti, also of Italy, who owned 12,402 Pepsi Cola cans from 81 countries as of 2022. The largest collection of Coca-Cola cans belonged to Gary Feng of Canada, with 11,308 variations from 108 countries, while William B. Christensen of the United States owned the largest collection of beer cans, with 75,000 from 125 countries, and Allan Green, also of the United States, held the largest collection of wine cans, at 449. Some webpages are dedicated to the hobby of can collecting.
Technology
Containers
null
402999
https://en.wikipedia.org/wiki/Charolais%20cattle
Charolais cattle
The Charolais () or Charolaise () is a French breed of taurine beef cattle. It originates in, and is named for, the Charolais area surrounding Charolles, in the Saône-et-Loire department, in the Bourgogne-Franche-Comté region of eastern France. Charolais are raised for meat; they may be crossed with other breeds, including Angus and Hereford cattle. History The Charolais is the second-most numerous cattle breed in France after the Holstein Friesian and is the most common beef breed in that country, ahead of the Limousin. At the end of 2014, France had 4.22 million head of Charolais, including 1.56 million cows, down 0.6% from a year earlier. The Charolais is a world breed: it is reported to DAD-IS by 68 countries, of which 37 report population data. The world population is estimated at 730,000. The largest populations are reported from the Czech Republic and Mexico. The breed was introduced to the southern United States from Mexico in 1934. As the cradle of the Charolais cattle, the Charolais-Brionnais Country is applicant for the UNESCO's label as a World Heritage Site to preserve, consolidate and transmit this resource. Characteristics It is among the heaviest of cattle breeds: bulls weigh from , and cows from . The coat ranges from white to cream-colored; the nose is uniformly pink. The Charbray, a cross-breed with Brahman cattle, is recognized as a breed in some countries. The Brazilian Chicana is a composite breed with 5/8 Charolais and 3/8 Indu-Brasil. Other derived breeds include Charford and Char-Swiss in the United States.
Biology and health sciences
Cattle
Animals
403165
https://en.wikipedia.org/wiki/Maximum%20flow%20problem
Maximum flow problem
In optimization theory, maximum flow problems involve finding a feasible flow through a flow network that obtains the maximum possible flow rate. The maximum flow problem can be seen as a special case of more complex network flow problems, such as the circulation problem. The maximum value of an s-t flow (i.e., flow from source s to sink t) is equal to the minimum capacity of an s-t cut (i.e., a cut severing s from t) in the network, as stated in the max-flow min-cut theorem. History The maximum flow problem was first formulated in 1954 by T. E. Harris and F. S. Ross as a simplified model of Soviet railway traffic flow. In 1955, Lester R. Ford, Jr. and Delbert R. Fulkerson created the first known algorithm, the Ford–Fulkerson algorithm. In their 1955 paper, Ford and Fulkerson wrote that the problem of Harris and Ross is formulated as follows (see p. 5): Consider a rail network connecting two cities by way of a number of intermediate cities, where each link of the network has a number assigned to it representing its capacity. Assuming a steady state condition, find a maximal flow from one given city to the other. In their book Flows in Networks, in 1962, Ford and Fulkerson wrote: It was posed to the authors in the spring of 1955 by T. E. Harris, who, in conjunction with General F. S. Ross (Ret.), had formulated a simplified model of railway traffic flow, and pinpointed this particular problem as the central one suggested by the model [11], where [11] refers to the 1955 secret report Fundamentals of a Method for Evaluating Rail net Capacities by Harris and Ross (see p. 5). Over the years, various improved solutions to the maximum flow problem were discovered, notably the shortest augmenting path algorithm of Edmonds and Karp and independently Dinitz; the blocking flow algorithm of Dinitz; the push-relabel algorithm of Goldberg and Tarjan; and the binary blocking flow algorithm of Goldberg and Rao. The algorithms of Sherman and of Kelner, Lee, Orecchia and Sidford, respectively, find an approximately optimal maximum flow but only work in undirected graphs. In 2013 James B. Orlin published a paper describing an algorithm. In 2022 Li Chen, Rasmus Kyng, Yang P. Liu, Richard Peng, Maximilian Probst Gutenberg, and Sushant Sachdeva published an almost-linear time algorithm for the minimum-cost flow problem, of which the maximum flow problem is a particular case. For the single-source shortest path (SSSP) problem with negative weights, another particular case of the minimum-cost flow problem, an algorithm in almost-linear time has also been reported. Both algorithms were deemed best papers at the 2022 Symposium on Foundations of Computer Science. Definition First we establish some notation: Let N = (V, E) be a network with s, t ∈ V being the source and the sink of N respectively. If g is a function on the edges of N, then its value on an edge (u, v) is denoted by guv or g(u, v). Definition. The capacity of an edge is the maximum amount of flow that can pass through an edge. Formally it is a map c : E → R+. Definition. A flow is a map f : E → R that satisfies the following: Capacity constraint. The flow of an edge cannot exceed its capacity, in other words: f(u, v) ≤ c(u, v) for all (u, v) ∈ E. Conservation of flows. The sum of the flows entering a node must equal the sum of the flows exiting that node, except for the source and the sink. Or: for every vertex v other than s and t, Σu f(u, v) = Σw f(v, w), where the sums run over the edges entering and leaving v respectively. Remark. Flows are skew symmetric: f(u, v) = −f(v, u) for all (u, v) ∈ E. Definition. The value of flow is the amount of flow passing from the source to the sink. Formally, for a flow f it is given by |f| = Σv f(s, v), where the sum runs over the edges leaving the source.
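The definitions above can be made concrete with a small computational sketch. The following Python snippet is a minimal, illustrative implementation of the shortest-augmenting-path (Edmonds–Karp) approach mentioned in the history section; the dictionary-based graph representation, the function name and the example network are choices made here for illustration, not anything prescribed by the sources cited above.

from collections import deque

def edmonds_karp(capacity, source, sink):
    # Maximum source-sink flow by repeatedly augmenting along shortest
    # (fewest-edge) paths found with breadth-first search.
    # `capacity` is a dict of dicts: capacity[u][v] = capacity of edge (u, v).
    residual = {u: dict(nbrs) for u, nbrs in capacity.items()}
    for u, nbrs in capacity.items():
        for v in nbrs:
            residual.setdefault(v, {}).setdefault(u, 0)   # reverse edges start at 0
    residual.setdefault(source, {})
    residual.setdefault(sink, {})
    max_flow = 0
    while True:
        # Breadth-first search for a shortest augmenting path in the residual graph.
        parent = {source: None}
        queue = deque([source])
        while queue and sink not in parent:
            u = queue.popleft()
            for v, cap in residual[u].items():
                if cap > 0 and v not in parent:
                    parent[v] = u
                    queue.append(v)
        if sink not in parent:            # no augmenting path left: the flow is maximum
            return max_flow
        # Bottleneck capacity along the augmenting path that was found.
        bottleneck = float("inf")
        v = sink
        while parent[v] is not None:
            u = parent[v]
            bottleneck = min(bottleneck, residual[u][v])
            v = u
        # Augment: decrease forward residual capacities, increase reverse ones.
        v = sink
        while parent[v] is not None:
            u = parent[v]
            residual[u][v] -= bottleneck
            residual[v][u] += bottleneck
            v = u
        max_flow += bottleneck

# A small example network with source "s" and sink "t".
caps = {"s": {"a": 3, "b": 2}, "a": {"b": 1, "t": 2}, "b": {"t": 3}}
print(edmonds_karp(caps, "s", "t"))   # prints 5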
Definition. The maximum flow problem is to route as much flow as possible from the source to the sink, in other words to find the flow with maximum value. Note that several maximum flows may exist, and if arbitrary real (or even arbitrary rational) values of flow are permitted (instead of just integers), there is either exactly one maximum flow, or infinitely many, since there are infinitely many linear combinations of the base maximum flows. In other words, if we send units of flow on edge in one maximum flow, and units of flow on in another maximum flow, then for each we can send units on and route the flow on remaining edges accordingly, to obtain another maximum flow. If flow values can be any real or rational numbers, then there are infinitely many such values for each pair . Algorithms The following table lists algorithms for solving the maximum flow problem. Here, V and E denote the number of vertices and edges of the network. The value refers to the largest edge capacity after rescaling all capacities to integer values (if the network contains irrational capacities, it may be infinite). For additional algorithms, see . Integral flow theorem The integral flow theorem states that if each edge in a flow network has integral capacity, then there exists an integral maximal flow. The claim is not only that the value of the flow is an integer, which follows directly from the max-flow min-cut theorem, but that the flow on every edge is integral. This is crucial for many combinatorial applications (see below), where the flow across an edge may encode whether the item corresponding to that edge is to be included in the set sought or not. Application Multi-source multi-sink maximum flow problem Given a network with a set of sources and a set of sinks instead of only one source and one sink, we are to find the maximum flow across the network. We can transform the multi-source multi-sink problem into a maximum flow problem by adding a consolidated source connected to every vertex in the set of sources and a consolidated sink to which every vertex in the set of sinks is connected (also known as a supersource and supersink), with infinite capacity on each added edge (See Fig. 4.1.1.). Maximum cardinality bipartite matching Given a bipartite graph , we are to find a maximum cardinality matching in , that is, a matching that contains the largest possible number of edges. This problem can be transformed into a maximum flow problem by constructing a network , where contains the edges in directed from to . for each and for each . for each (See Fig. 4.3.1). Then the value of the maximum flow in is equal to the size of the maximum matching in , and a maximum cardinality matching can be found by taking those edges that have flow in an integral max-flow. Minimum path cover in directed acyclic graph Given a directed acyclic graph , we are to find the minimum number of vertex-disjoint paths to cover each vertex in . We can construct a bipartite graph from , where . Then it can be shown that has a matching of size if and only if has a vertex-disjoint path cover containing edges and paths, where is the number of vertices in . Therefore, the problem can be solved by finding the maximum cardinality matching in instead. Assume we have found a matching of , and constructed the cover from it. Intuitively, if two vertices are matched in , then the edge is contained in . Clearly the number of edges in is .
To see that is vertex-disjoint, consider the following: Each vertex in can either be non-matched in , in which case there are no edges leaving in ; or it can be matched, in which case there is exactly one edge leaving in . In either case, no more than one edge leaves any vertex in . Similarly for each vertex in – if it is matched, there is a single incoming edge into in ; otherwise has no incoming edges in . Thus no vertex has two incoming or two outgoing edges in , which means all paths in are vertex-disjoint. To show that the cover has size , we start with an empty cover and build it incrementally. To add a vertex to the cover, we can either add it to an existing path, or create a new path of length zero starting at that vertex. The former case is applicable whenever either and some path in the cover starts at , or and some path ends at . The latter case is always applicable. In the former case, the total number of edges in the cover is increased by 1 and the number of paths stays the same; in the latter case the number of paths is increased and the number of edges stays the same. It is now clear that after covering all vertices, the sum of the number of paths and edges in the cover is . Therefore, if the number of edges in the cover is , the number of paths is . Maximum flow with vertex capacities Let be a network. Suppose there is capacity at each node in addition to edge capacity, that is, a mapping such that the flow has to satisfy not only the capacity constraint and the conservation of flows, but also the vertex capacity constraint In other words, the amount of flow passing through a vertex cannot exceed its capacity. To find the maximum flow across , we can transform the problem into the maximum flow problem in the original sense by expanding . First, each is replaced by and , where is connected by edges going into and is connected to edges coming out from , then assign capacity to the edge connecting and (see Fig. 4.4.1). In this expanded network, the vertex capacity constraint is removed and therefore the problem can be treated as the original maximum flow problem. Maximum number of paths from s to t Given a directed graph and two vertices and , we are to find the maximum number of paths from to . This problem has several variants: 1. The paths must be edge-disjoint. This problem can be transformed to a maximum flow problem by constructing a network from , with and being the source and the sink of respectively, and assigning each edge a capacity of . In this network, the maximum flow is iff there are edge-disjoint paths. 2. The paths must be independent, i.e., vertex-disjoint (except for and ). We can construct a network from with vertex capacities, where the capacities of all vertices and all edges are . Then the value of the maximum flow is equal to the maximum number of independent paths from to . 3. In addition to the paths being edge-disjoint and/or vertex disjoint, the paths also have a length constraint: we count only paths whose length is exactly , or at most . Most variants of this problem are NP-complete, except for small values of . Closure problem A closure of a directed graph is a set of vertices C, such that no edges leave C. The closure problem is the task of finding the maximum-weight or minimum-weight closure in a vertex-weighted directed graph. It may be solved in polynomial time using a reduction to the maximum flow problem. Real world applications Baseball elimination In the baseball elimination problem there are n teams competing in a league. 
At a specific stage of the league season, wi is the number of wins and ri is the number of games left to play for team i, and rij is the number of games left against team j. A team is eliminated if it has no chance to finish the season in first place. The task of the baseball elimination problem is to determine which teams are eliminated at each point during the season. Schwartz proposed a method which reduces this problem to maximum network flow. In this method a network is created to determine whether team k is eliminated. Let G = (V, E) be a network with s and t being the source and the sink respectively. One adds a game node ij, with i < j, to V for each remaining pairing of teams, and connects each of them from s by an edge with capacity rij, which represents the number of games still to be played between these two teams. We also add a team node for each team and connect each game node ij with the two team nodes i and j, to ensure that one of them wins each of those games. One does not need to restrict the flow value on these edges. Finally, edges are made from team node i to the sink node t, and the capacity of that edge is set to wk + rk − wi, to prevent team i from winning more than wk + rk games in total. Let S be the set of all teams participating in the league and let . In this method it is claimed team k is not eliminated if and only if a flow value of size r(S − {k}) exists in network G. In the mentioned article it is proved that this flow value is the maximum flow value from s to t. Airline scheduling In the airline industry a major problem is the scheduling of the flight crews. The airline scheduling problem can be considered as an application of extended maximum network flow. The input of this problem is a set of flights F which contains the information about where and when each flight departs and arrives. In one version of airline scheduling the goal is to produce a feasible schedule with at most k crews. To solve this problem one uses a variation of the circulation problem called bounded circulation, which is the generalization of network flow problems with the added constraint of a lower bound on edge flows. Let G = (V, E) be a network with s and t as the source and the sink nodes. For the source and destination of every flight i, one adds two nodes to V, node si as the source and node di as the destination node of flight i. One also adds the following edges to E: An edge with capacity [0, 1] between s and each si. An edge with capacity [0, 1] between each di and t. An edge with capacity [1, 1] between each pair of si and di. An edge with capacity [0, 1] between each di and sj, if source sj is reachable with a reasonable amount of time and cost from the destination of flight i. An edge with capacity [0, ∞] between s and t. In the mentioned method, it is claimed and proved that finding a flow value of k in G between s and t is equal to finding a feasible schedule for flight set F with at most k crews. Another version of airline scheduling is finding the minimum needed crews to perform all the flights. To find an answer to this problem, a bipartite graph is created where each flight has a copy in set A and set B. If the same plane can perform flight j after flight i, the copy of flight i in A is connected to the copy of flight j in B. A matching in this graph induces a schedule for F, and obviously a maximum bipartite matching in this graph produces an airline schedule with the minimum number of crews; a sketch of this reduction is given below. As it is mentioned in the Application part of this article, the maximum cardinality bipartite matching is an application of the maximum flow problem.
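As a rough illustration of the last reduction, the sketch below computes the minimum number of crews for a toy flight set as the number of flights minus a maximum bipartite matching, found with a simple augmenting-path routine. The flight times and the compatibility rule (a crew can take a flight that departs at least one time unit after the previous flight arrives) are invented for the example and are not taken from the article.

def max_bipartite_matching(adj, n_left, n_right):
    # Maximum matching in a bipartite graph via augmenting-path search (Kuhn's algorithm).
    # adj[i] lists the right-side vertices reachable from left vertex i.
    match_right = [-1] * n_right        # right vertex -> matched left vertex, or -1

    def try_augment(i, seen):
        for j in adj[i]:
            if j in seen:
                continue
            seen.add(j)
            # j is free, or its current partner can be re-matched elsewhere.
            if match_right[j] == -1 or try_augment(match_right[j], seen):
                match_right[j] = i
                return True
        return False

    matching = 0
    for i in range(n_left):
        if try_augment(i, set()):
            matching += 1
    return matching

# Hypothetical flights: (departure_time, arrival_time).  A single crew can take
# flight j after flight i if flight j departs at least 1 time unit after i arrives.
flights = [(0, 2), (3, 5), (1, 4), (6, 8)]
n = len(flights)
adj = [[j for j in range(n) if j != i and flights[j][0] >= flights[i][1] + 1]
       for i in range(n)]

min_crews = n - max_bipartite_matching(adj, n, n)
print(min_crews)   # prints 2: one crew flies flights 0, 1, 3; another flies flight 2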
Circulation–demand problem There are some factories that produce goods and some villages where the goods have to be delivered. They are connected by a network of roads, with each road having a capacity for the maximum amount of goods that can flow through it. The problem is to find whether there is a circulation that satisfies the demand. This problem can be transformed into a maximum-flow problem. Add a source node and add edges from it to every factory node with capacity where is the production rate of factory . Add a sink node and add edges from all villages to with capacity where is the demand rate of village . Let G = (V, E) be this new network. There exists a circulation that satisfies the demand if and only if : . If there exists a circulation, looking at the max-flow solution gives the answer as to how much of the goods has to be sent on a particular road to satisfy the demands. The problem can be extended by adding a lower bound on the flow on some edges. Image segmentation In their book, Kleinberg and Tardos present an algorithm for segmenting an image, that is, for finding the background and the foreground in an image. More precisely, the algorithm takes a bitmap as an input, modelled as follows: ai ≥ 0 is the likelihood that pixel i belongs to the foreground, bi ≥ 0 is the likelihood that pixel i belongs to the background, and pij is the penalty if two adjacent pixels i and j are placed one in the foreground and the other in the background. The goal is to find a partition (A, B) of the set of pixels that maximizes the following quantity . Indeed, for pixels in A (considered as the foreground), we gain ai; for all pixels in B (considered as the background), we gain bi. On the border, between two adjacent pixels i and j, we lose pij. It is equivalent to minimize the quantity because . We now construct the network whose nodes are the pixels, plus a source and a sink, see Figure on the right. We connect the source to pixel i by an edge of weight ai. We connect the pixel i to the sink by an edge of weight bi. We connect pixel i to pixel j with weight pij. Now, it remains to compute a minimum cut in that network (or equivalently a maximum flow). The last figure shows a minimum cut. Extensions 1. In the minimum-cost flow problem, each edge (u,v) also has a cost-coefficient auv in addition to its capacity. If the flow through the edge is fuv, then the total cost is auvfuv. It is required to find a flow of a given size d, with the smallest cost. In most variants, the cost-coefficients may be either positive or negative. There are various polynomial-time algorithms for this problem. 2. The maximum-flow problem can be augmented by disjunctive constraints: a negative disjunctive constraint says that a certain pair of edges cannot simultaneously have a nonzero flow; a positive disjunctive constraint says that, in a certain pair of edges, at least one must have a nonzero flow. With negative constraints, the problem becomes strongly NP-hard even for simple networks. With positive constraints, the problem is polynomial if fractional flows are allowed, but may be strongly NP-hard when the flows must be integral.
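The image-segmentation construction described above can be sketched directly with a general-purpose max-flow/min-cut routine. The example below uses the third-party networkx library (assumed to be available) on a hypothetical 2×2 image; the likelihoods a, b and the penalty p are made-up numbers chosen so that two pixels clearly belong to the foreground.

import networkx as nx   # pip install networkx

# Hypothetical 2x2 image: a[i] = foreground likelihood, b[i] = background likelihood,
# p = penalty for separating two adjacent pixels.
a = {0: 9, 1: 8, 2: 2, 3: 1}                 # pixels 0 and 1 look like foreground
b = {0: 1, 1: 2, 2: 8, 3: 9}                 # pixels 2 and 3 look like background
p = 3
adjacent = [(0, 1), (2, 3), (0, 2), (1, 3)]  # 4-neighbourhood of the 2x2 grid

G = nx.DiGraph()
for i in a:
    G.add_edge("source", i, capacity=a[i])   # cutting this edge "pays" a_i
    G.add_edge(i, "sink", capacity=b[i])     # cutting this edge "pays" b_i
for i, j in adjacent:
    G.add_edge(i, j, capacity=p)             # separation penalty, in both directions
    G.add_edge(j, i, capacity=p)

cut_value, (source_side, sink_side) = nx.minimum_cut(G, "source", "sink")
foreground = source_side - {"source"}
background = sink_side - {"sink"}
print(cut_value, foreground, background)     # expected: cut 12, foreground {0, 1}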
Mathematics
Graph theory
null
17983947
https://en.wikipedia.org/wiki/Kelenken
Kelenken
Kelenken is a genus of phorusrhacid ("terror bird"), an extinct group of large, predatory birds, which lived in what is now Argentina in the middle Miocene about 15 million years ago. The only known specimen was discovered by high school student Guillermo Aguirre-Zabala in Comallo, in the region of Patagonia, and was made the holotype of the new genus and species Kelenken guillermoi in 2007. The genus name references a spirit in Tehuelche mythology, and the specific name honors the discoverer. The holotype consists of one of the most complete skulls known of a large phorusrhacid, as well as a tarsometatarsus lower leg bone and a phalanx toe bone. The discovery of Kelenken clarified the anatomy of large phorusrhacids, as these were previously much less well known. The closest living relatives of the phorusrhacids are the seriemas. Kelenken was found to belong in the subfamily Phorusrhacinae, along with for example Devincenzia. Phorusrhacids were large, flightless birds with long hind limbs, narrow pelvises, proportionally small wings and huge skulls, with a tall, long, sideways compressed hooked beak. Kelenken is the largest-known phorusrhacid, 10% larger than its largest relatives known previously. At long, the holotype skull is the largest known of any bird, and has been likened to the size of a horse's skull. The tarsometatarsus leg bone is long. Kelenken is thought to have been about tall and exceeded in weight. Kelenken differed from other phorusrhacids in features such as the length of its beak, in having a supraorbital ossification (a rounded edge above the eye socket) that fits into a socket of the postorbital process, and in having an almost triangular foramen magnum (the large opening at the base of the skull through which the spinal cord enters). Phorusrhacids are thought to have been ground predators or scavengers, and have often been considered apex predators that dominated Cenozoic South America in the absence of placental mammalian predators, though they did co-exist with some large, carnivorous borhyaenid mammals. The long and slender tarsometatarsus of Kelenken suggests that it could run faster than had previously been assumed for large phorusrhacids, and would have been able to chase down small animals. Studies of the related Andalgalornis show that large phorusrhacids had very rigid and stiff skulls; this indicates they may have swallowed small prey whole or targeted larger prey with repetitive strikes with the beak. Kelenken is known from the Collón Curá Formation, and lived during the Colloncuran age of South America, when open environments predominated, which allowed more cursorial (adapted for running) and large animals to occur. The formation has provided fossils of a wide range of mammals, with a few fossils of birds, reptiles, amphibians and fish. Taxonomy Around 2004, fossils of a phorusrhacid (or "terror bird", a group of large, predatory birds) were discovered by Argentine high school student Guillermo Aguirre-Zabala between two houses, about from the railroad of Comallo, a small village in the north-west of the Río Negro Province in the Patagonia region of Argentina (coordinates: ). The outcrops where the specimen was discovered belong to the Collón Curá Formation. Aguirre-Zabala prepared the specimen himself, and the discovery led him to shift from studying psychology to studying paleontology and Earth science. 
The specimen became part of the collection of the Museo Asociación Paleontológica Bariloche in Río Negro, where it was cataloged as specimen BAR 3877-11. Prior to the animal receiving a scientific name, the specimen was reported and discussed by the Argentine paleontologists Luis M. Chiappe and Sara Bertelli in a short 2006 article. In 2007, Bertelli, Chiappe, and Claudia Tambussi made the specimen the holotype of Kelenken guillermoi; the genus name refers to a spirit in the mythology of the Tehuelche people of Patagonia which is represented as a giant bird of prey, and the specific name honors its discoverer. The holotype and only known specimen consists of a nearly complete skull which is somewhat crushed from top to bottom, with most of the eye sockets, skull roof, braincase and left quadrate bone preserved, while most of the palatal bones behind the eye sockets are missing. The specimen also includes an associated left tarsometatarsus (lower leg bone of birds), a small upper portion of a foot phalanx bone (toe bone), and some indeterminate fragments. The describers concluded these bones belonged to a single specimen due to being collected together (and with no other fossils being present), because their general preservation (such as color and texture) was similar, and because they were morphologically consistent with belonging to a large phorusrhacid. The specimen possessed the most complete skull of a large phorusrhacid known at the time. Previously, such skulls were known only from the fragmentary Devincenzia and Phorusrhacos. The skull of the latter disintegrated during collection (leaving only the tip of the beak), which hampered comparison between phorusrhacid taxa of different sizes, until the discovery of Kelenken. Evolution In their 2007 description, Bertelli and colleagues classified Kelenken as a member of the family Phorusrhacidae, based on its enormous size, combined with its sideways compressed, strongly hooked beak (or rostrum, the part of the jaws that formed the beak), and convex culmen (the top of the upper beak). Five phorusrhacid subfamilies were recognized at the time (Brontornithinae, Phorusrhacinae, Patagornithinae, Mesembriornithinae and Psilopterinae), though their validity had not then been confirmed through cladistic analysis, and the describers found Kelenken most similar to taxa that had traditionally been considered phorusrhacines. Features shared with phorusrhacines include that the hind part of the skull is low and compressed from top to bottom, a wide occipital table, a blunt postorbital process, and a tarsometatarsus that is similar to that of Titanis in that the supratrochlear surface of the lower end is flat. Further comparison was hampered by the lack of anatomical information about phorusrhacines. The Brazilian paleontologist Herculano Alvarenga and colleagues published a phylogenetic analysis of Phorusrhacidae in 2011 that found Kelenken and Devincenzia to be sister taxa, each other's closest relatives. While the analysis supported there being five subfamilies, the resulting cladogram did not separate Brontornithinae, Phorusrhacinae and Patagornithinae. In their 2015 description of Llallawavis, the Argentinian paleontologist Federico J. Degrange and colleagues performed a phylogenetic analysis of Phorusrhacidae, wherein they found Phorusrhacinae to be polyphyletic (an unnatural grouping). 
The following cladogram shows the position of Kelenken following the 2015 analysis: During the early Cenozoic, after the extinction of the non-bird dinosaurs, mammals underwent an evolutionary diversification, and some bird groups around the world developed a tendency towards gigantism; this included the Gastornithidae, the Dromornithidae, the Palaeognathae and the Phorusrhacidae. Phorusrhacids are an extinct group within Cariamiformes, the only living members of which are the two species of seriemas in the family Cariamidae. While they are the most speciose group within Cariamiformes, the interrelationships between phorusrhacids are unclear due to the incompleteness of their remains. Phorusrhacids were present in South America from the Paleocene (when the continent was an isolated island) and survived until the Pleistocene. They also appeared in North America at the end of the Pliocene, during the Great American Biotic Interchange, and while fossils from Europe have been assigned to the group, their classification is disputed. It is unclear where the group originated; both cariamids and phorusrhacids may have arisen in South America, or arrived from elsewhere when southern continents were closer together or when sea levels were lower. Kelenken itself lived during the middle Miocene, about 15 million years ago. Since phorusrhacids survived until the Pleistocene, they appear to have been more successful than for example the South American metatherian thylacosmilid predators (which disappeared in the Pliocene), and it is possible that they competed ecologically with placental predators that entered from North America in the Pleistocene. Description Phorusrhacids were large, flightless birds with long hind limbs, narrow pelvises, proportionally small wings and huge skulls, with a tall, long, sideways compressed hooked beak. Kelenken is the largest known phorusrhacid, about 10% larger than the largest phorusrhacids previously known, such as Phorusrhacos. The holotype skull is about long from the tip of the beak to the center of the sagittal nuchal crest at the upper back of the head (a size likened to the size of a horse's skull), making it the largest skull of any known bird. The hind end of the skull is wide. The tarsometatarsus leg bone is long. The head height was up to , while modern seriemas reach in height. While the weight of Kelenken has not been specifically estimated, it is thought to have exceeded . Skull Prior to the discovery of Kelenken, the skulls of incompletely known large phorusrhacids were reconstructed as scaled up versions of those of smaller, more complete relatives like Psilopterus and Patagornis, as exemplified by a frequently reproduced 1895 sketch of the destroyed skull of the large Phorusrhacos, which was itself based on that of Patagornis. These reconstructions highlighted their assumed very tall beaks, round, high eye sockets, and vaulted braincases, but Kelenken demonstrated the significant difference between the skulls of large and small members of the group. The holotype skull is very massive, and triangular when viewed from above, with the hind portion compressed from top to bottom. The upper beak is very long, exceeding half the total length of the skull, unlike in Mesembriornis and Patagornis, and is longer than that of Phorusrhacos. The ratio between the upper beak and the skull of Kelenken is 0.56, based on the distance between the bony nostril and the front tip. 
In spite of the crushing from top to bottom, the upper beak is high and very robust, though apparently not as high as in patagornithines, such as Patagornis, Andrewsornis and Andalgalornis. The front end of the premaxilla (the frontmost bone of the upper jaw) prominently projects as a sharp, downturned hook. Such a strong downwards projection resembles most closely the condition seen in large to medium sized phorusrhacids such as Phorusrhacos, Patagornis, Andrewsornis and Andalgalornis, rather than the weaker projections of the smaller psilopterines. The underside of the upper beak's front portion forms a pair of prominent ridges that are each separated by a groove from the tomium, or sharp edge of the beak. These ridges are also separated from a broader central portion of the premaxilla by a longitudinal groove (the rostral premaxillar canal). Patagornis had a similar morphology on the front part of the palate. Much of the upper beak's side is scarred by small, irregular pits, which functioned as nerve exits. The hindmost two thirds of the upper beak are excavated by a prominent furrow, which runs parallel to the margin of the tomium. The nostrils are small, rectangular, and are located in the upper hind corner of the upper beak as in patagornithines (the size and location of the nostrils is unknown in the larger phorusrhacines and brontornithines). The nostrils appear to be longer from front to back than high, though this may be exaggerated by crushing, and their hind margin is formed by the maxillary process of the nasal bone (a projection from the nasals towards the maxilla, the main bone of the upper jaw). Whether the nostrils are connected to each other at the middle (lacking a septum as in other phorusrhacids) is not discernible. The quadrangular shape of the antorbital fenestra (an opening in front of the eye socket) is clear despite it being crushed somewhat on both sides. The front border of this opening is approximately level with the hind margin of the nostril, and its lower margin is straight when viewed from the left side. Robust lacrimal bones form the hind margins of the antorbital fenestrae, and these bones were recessed in relation to the jugal bar (that formed the lower edge of the eye socket) and the outer side margin of each frontal bone (main bones of the forehead). The antorbital fenestra is proportionally smaller than that of Patagornis. While the shape of the eye sockets may be slightly affected by compression from top to bottom, it is likely they were low, almost rectangular in shape, with a concave upper margin and a slightly convex lower border. The upper part of the eye socket is delineated by a thick, rounded edge (a supraorbital ossification), the hind part of which appears to overhang downward as seen from the side. In Patagornis, a similar structure has been suggested to be a process of the lacrimal bone, and while the connection between these is not clear in Kelenken, this structure was probably also an extension of the lacrimal. The supraorbital ossification fits within a socket formed by a part of the frontal bone that forms the postorbital process, a configuration unknown in other phorusrhacids. The lower margin of the eye socket is formed by a robust jugal bar which is very tall (larger than that of Devincenzia), and flat from side to side. The jugal bone is about four times taller than thick by the lower center of the eyesocket, and its height is greater than in other phorusrhacids. The frontal bones appear to have been flat on their upper side. 
The area where the frontals would have contacted the premaxillae is damaged so that their sutures (joints between them) cannot be identified, but the sutures between the frontals and the nasals and parietals are fully fused. This fusion makes it difficult to identify how these bones were part of the skull roof, but the blunt, robust postorbital processes were probably mainly formed by the frontals. On their lower sides, each frontal forms a large depression where a jaw muscle attached. The postorbital process is separated narrowly from a robust zygomatic process, and these two projections enclose a narrow temporal fossa (opening at the temple). The postorbital process contains scars left by massive jaw muscles, parts of which invaded most of the skull roof at the level of the parietal bones. There is a well developed depression behind the zygomatic process, along the side of the squamosal bone, which corresponds to a jaw closing muscle. The subtemporal fossa further behind is broad and its back is defined by a blunt, sidewards extension of the nuchal crest. The maxillae form an extensive palate, with the side margins being almost parallel for most of the upper beak's length, and the palate becomes wider from the front back to the region of the eye sockets. Like in Patagornis, these bones are separated at the midline by a distinct, longitudinal depression running much of their length, and along the back half of the palate, this depression is flanked by portions of the maxillae. The side margin at the back of the maxilla has a sutured contact with the jugal which is well-defined, similar to Patagornis. The part of the skull roof behind the eye sockets is flat and scarred by the development of the temporal musculature. The occipital table is very wide, like in Devicenzia, and low, which gives it a rectangular appearance when viewed from behind. The occipital condyle (the rounded prominence at the back of the head which contacted with the first neck vertebra) is round with a vertical groove that originates on its upper surface, and reaches almost to the center of the condyle. The foramen magnum (the large opening at the base of the skull through which the spinal cord enters) is almost triangular, uniquely for this genus, and has a blunt upper apex, and it is slightly smaller than the condyle. Above the foramen magnum is a crest-like prominence, vertically extending from the edge of the foramen to the transverse nuchal crest. A fossa (shallow depression) under the condyle is not visible, differing from Patagornis and Devicenzia, whose fossae are distinct. Leg bone The shaft of the tarsometatarsus is somewhat slender, with an almost rectangular mid-section, similar to Phorusrhacos. The upper two thirds of its upper surface are concave, while the lower third is flatter. The tarsometatarsus has cotylae (two cup-like cavities at the upper end of the shaft) that are almost oval and deeply concave. The lateral cotyla on the outer side is smaller than the medial cotyla on the inner side, and is slightly below it. The intercotylar eminence between the cotylae is well developed and robust, as in other phorusrhacids. Unique to this genus, there is a round tubercle on the medioplantar corner of the lateral cotyla, lower in height than the intercotylar eminence. The middle of the shaft of the tarsometatarsus is irregularly quadrangular, which is different from that of brontornithines, which are rectangular and very wide. 
The trochlea of the third metatarsal (the "knuckles" of the tarsometatarsus which articulated with the upper part of the toe phalanges) is much bigger than the two other trochlea (second and fourth), and projects much further down, and the fourth trochlea is larger than the second. The fourth trochlea is irregularly quadrangular, which contrasts with the rectangular trochlea of Devicenzia. The distal vascular foramen, an opening on the lower front side of the tarsometatarsus, has a centralized position, above the upper ends of the third and fourth trochleae. Paleobiology Feeding and diet Phorusrhacids are thought to have been ground predators or scavengers, and have often been considered apex predators that dominated Cenozoic South America in the absence of placental mammalian predators, though they did co-exist with some large, carnivorous borhyaenid mammals. Earlier hypotheses of phorusrhacid feeding ecology were mainly inferred from them having large skulls with hooked beaks rather than through detailed hypotheses and biomechanical studies, and such studies of their running and predatory adaptations were only conducted from the beginning of the 21st century. Alvarenga and Elizabeth Höfling made some general remarks about phorusrhacid habits in a 2003 article. They were flightless, as evidenced by the proportional size of their wings and body mass, and wing-size was more reduced in larger members of the group. These researchers pointed out that the narrowing of the pelvis, upper maxilla and thorax could have been adaptations to enable the birds to search for and take smaller animals in tall plant growth or broken terrain. The large expansions above the eyes formed by the lacrimal bones (similar to what is seen in modern hawks) would have protected the eyes against the sun, and enabled keen eyesight, which indicates they hunted by sight in open, sunlit areas, and not shaded forests. Leg function In 2005, Rudemar Ernesto Blanco and Washington W. Jones examined the strength of the tibiotarsus (shin bone) of phorusrhacids to determine their speed, but conceded that such estimates can be unreliable even for extant animals. While the tibiotarsal strength of Patagornis and an indeterminate large phorusrhacine suggested a speed of , and that of Mesembriornis suggested , the latter is greater than that of a modern ostrich, approaching that of a cheetah, . They found these estimates unlikely due to the large body size of these birds, and instead suggested the strength could have been used to break the long bones of medium-sized mammals, the size, for example, of a saiga or Thomson's gazelle. This strength could be used for accessing the marrow inside the bones, or by using the legs as kicking weapons (like some modern ground birds do), consistent with the large, curved, and sideways compressed claws known in some phorusrhacids. They also suggested future studies could examine whether they could have used their beaks and claws against well-armored mammals such as armadillos and glyptodonts. According to Chiappe and Bertelli in 2006, the discovery of Kelenken shed doubt on the traditional idea that the size and agility of phorusrhacids correlated, with the larger members of the group being more bulky and less adapted for running. The long and slender tarsometatarsus of Kelenken instead shows that this bird may have been much swifter than the smaller, more heavyset and slow Brontornis. 
In a 2006 news article about the discovery, Chiappe stated that while Kelenken may not have been as swift as an ostrich, it could clearly run faster than had previously been assumed for large phorusrhacids, based on the long, slender leg-bones, superficially similar to those of the modern, flightless rhea. The article suggested that Kelenken would have been able to chase down small mammals and reptiles. In another 2006 news article, Chiappe stated that Kelenken would have been as quick as a greyhound, and that while there were other large predators in South America at the time, they were limited in numbers and not as fast and agile as the phorusrhacids, and the many grazing mammals would have provided ample prey. Chiappe stated that phorusrhacids crudely resembled earlier predatory dinosaurs like Tyrannosaurus, in having gigantic heads, very small forelimbs, and very long legs, and thereby had the same kind of meat-eater adaptations. Skull and neck function A 2010 study by Degrange and colleagues of the medium-sized phorusrhacid Andalgalornis, based on Finite Element Analysis using CT scans, estimated its bite force and stress distribution in its skull. They found its bite force to be 133 newtons at the bill tip, and showed it had lost a large degree of intracranial immobility (mobility of skull bones in relation to each other), as was also the case for other large phorusrhacids such as Kelenken. These researchers interpreted this loss as an adaptation for enhanced rigidity of the skull; compared to the modern red-legged seriema and white-tailed eagle, the skull of the phorusrhacid showed relatively high stress under sideways loadings, but low stress where force was applied up and down, and in simulations of "pullback". Due to the relative weakness of the skull at the sides and midline, these researchers considered it unlikely that Andalgalornis engaged in potentially risky behavior that involved using its beak to subdue large, struggling prey. Instead, they suggested that it either fed on smaller prey that could be killed and consumed more safely, by for example swallowing it whole, or that when targeting large prey, it used a series of well-targeted repetitive strikes with the beak, in a "attack-and-retreat" strategy. Struggling prey could also be restrained with the feet, despite the lack of sharp talons. A 2012 follow-up study by Tambussi and colleagues analyzed the flexibility of the neck of Andalgalornis, based on the morphology of its neck vertebrae, finding the neck to be divided into three sections. By manually manipulating the vertebrae, they concluded that the neck musculature and skeleton of Andalgalornis was adapted to carrying a large head, and for helping it rise from a maximum extension after a downwards strike, and the researchers assumed the same would be true for other large, big-headed phorusrhacids. A 2020 study of phorusrhacid skull morphology by Degrange found that there were two main morphotypes within the group, derived from a seriema-like ancestor. These were the "Psilopterine Skull Type", which was plesiomorphic (more similar to the ancestral type), and the "Terror Bird Skull Type", which included Kelenken and other large members, that was more specialized, with more rigid and stiff skulls. Despite the differences, studies have shown the two types handled prey similarly, while the more rigid skulls and resulting larger bite force of the "Terror Bird" type would have been an adaptation to handling larger prey. 
Paleoenvironment Kelenken was discovered in pyroclastic (rocks ejected by volcanic eruptions) outcrops belonging to the Collón Curá Formation in the southeastern corner of Comallo, Patagonia, an area covered in whitish tuffs. The area's stratigraphy had only been preliminarily studied at the time, and the age of the sediments had not been adequately determined, but compared with other fossil beds of the South American land mammal age and radioisotopic dating from different areas of the Collón Curá Formation, it is estimated to date to the Colloncuran age of the middle Miocene, about 15 million years ago. The formation was accumulated in a broken foreland system characterized by several basins that were disconnected from each other. The formation is composed mainly of volcaniclastic limestones and sandstones that were accumulated in continental environments ranging from alluvial (deposited by running water) to lacustrine (deposited by lakes). The Collón Curá Formation and the Colloncuran age of South America represent a time when more open environments with reduced plant covering predominated, similar to semi-arid and temperate to warm, dry woodlands or bushlands. The open environment allowed more cursorial (adapted for running) and large animals to occur, contrasting with the earlier conditions during the late Early Miocene, with its well-developed forests with tree-dwelling animals. Forests would then have been restricted to valleys of the cordillera mountain ranges, with few tree-dwelling species. This change happened progressively during the earlier Friasian stage. The transition towards more arid landscapes would have happened simultaneously with climate changes that corresponded to the Middle Miocene Climate Transition, a global cooling event which had a drying effect on continents. The Collón Curá Formation of Argentina has provided a wide assemblage of mammals, including at least 24 taxa such as the xenarthrans Megathericulus, Prepotherium, Prozaedyus and Paraeucinepeltus, the notoungulate Protypotherium, the astrapothere Astrapotherium, the sparassodonts Patagosmilus and Cladosictis, the marsupial Abderites, the primate Proteropithecia, and rodents such as Maruchito, Protacaremys, Neoreomys and Prolagostomus. In addition to the mammals that characterize sediments of this age, there are also a few fossils of birds, reptiles, amphibians and fish.
Biology and health sciences
Prehistoric birds
Animals
1122854
https://en.wikipedia.org/wiki/Equilibrium%20constant
Equilibrium constant
The equilibrium constant of a chemical reaction is the value of its reaction quotient at chemical equilibrium, a state approached by a dynamic chemical system after sufficient time has elapsed at which its composition has no measurable tendency towards further change. For a given set of reaction conditions, the equilibrium constant is independent of the initial analytical concentrations of the reactant and product species in the mixture. Thus, given the initial composition of a system, known equilibrium constant values can be used to determine the composition of the system at equilibrium. However, reaction parameters like temperature, solvent, and ionic strength may all influence the value of the equilibrium constant. A knowledge of equilibrium constants is essential for the understanding of many chemical systems, as well as the biochemical processes such as oxygen transport by hemoglobin in blood and acid–base homeostasis in the human body. Stability constants, formation constants, binding constants, association constants and dissociation constants are all types of equilibrium constants. Basic definitions and properties For a system undergoing a reversible reaction described by the general chemical equation a thermodynamic equilibrium constant, denoted by , is defined to be the value of the reaction quotient Qt when forward and reverse reactions occur at the same rate. At chemical equilibrium, the chemical composition of the mixture does not change with time, and the Gibbs free energy change for the reaction is zero. If the composition of a mixture at equilibrium is changed by addition of some reagent, a new equilibrium position will be reached, given enough time. An equilibrium constant is related to the composition of the mixture at equilibrium by where {X} denotes the thermodynamic activity of reagent X at equilibrium, [X] the numerical value of the corresponding concentration in moles per liter, and γ the corresponding activity coefficient. If X is a gas, instead of [X] the numerical value of the partial pressure in bar is used. If it can be assumed that the quotient of activity coefficients, , is constant over a range of experimental conditions, such as pH, then an equilibrium constant can be derived as a quotient of concentrations. An equilibrium constant is related to the standard Gibbs free energy change of reaction by where R is the universal gas constant, T is the absolute temperature (in kelvins), and is the natural logarithm. This expression implies that must be a pure number and cannot have a dimension, since logarithms can only be taken of pure numbers. must also be a pure number. On the other hand, the reaction quotient at equilibrium does have the dimension of concentration raised to some power (see , below). Such reaction quotients are often referred to, in the biochemical literature, as equilibrium constants. For an equilibrium mixture of gases, an equilibrium constant can be defined in terms of partial pressure or fugacity. An equilibrium constant is related to the forward and backward rate constants, kf and kr of the reactions involved in reaching equilibrium: Types of equilibrium constants Cumulative and stepwise formation constants A cumulative or overall constant, given the symbol β, is the constant for the formation of a complex from reagents. 
For example, the cumulative constant for the formation of ML2 is given by M + 2 L ⇌ ML2; [ML2] = β12[M][L]². The stepwise constant, K, for the formation of the same complex from ML and L is given by ML + L ⇌ ML2; [ML2] = K[ML][L] = Kβ11[M][L]². It follows that β12 = Kβ11. A cumulative constant can always be expressed as the product of stepwise constants. There is no agreed notation for stepwise constants, though a symbol such as K is sometimes found in the literature. It is best always to define each stability constant by reference to an equilibrium expression. Competition method A particular use of a stepwise constant is in the determination of stability constant values outside the normal range for a given method. For example, EDTA complexes of many metals are outside the range for the potentiometric method. The stability constants for those complexes were determined by competition with a weaker ligand: ML + L′ ⇌ ML′ + L. The formation constant of [Pd(CN)4]2− was determined by the competition method. Association and dissociation constants In organic chemistry and biochemistry it is customary to use pKa values for acid dissociation equilibria, pKa = −log10 Kdiss, where log denotes a logarithm to base 10 or common logarithm, and Kdiss is a stepwise acid dissociation constant. For bases, the base association constant, pKb, is used. For any given acid or base the two constants are related by pKa + pKb = pKw, so pKa can always be used in calculations. On the other hand, stability constants for metal complexes, and binding constants for host–guest complexes, are generally expressed as association constants. When considering equilibria such as M + HL ⇌ ML + H it is customary to use association constants for both ML and HL. Also, in generalized computer programs dealing with equilibrium constants it is general practice to use cumulative constants rather than stepwise constants and to omit ionic charges from equilibrium expressions. For example, if NTA, nitrilotriacetic acid, N(CH2CO2H)3, is designated as H3L and forms complexes ML and MHL with a metal ion M, the following expressions would apply for the dissociation constants. The cumulative association constants can be expressed as . Note how the subscripts define the stoichiometry of the equilibrium product. Micro-constants When two or more sites in an asymmetrical molecule may be involved in an equilibrium reaction, there is more than one possible equilibrium constant. For example, the molecule L-DOPA has two non-equivalent hydroxyl groups which may be deprotonated. Denoting L-DOPA as LH2, the following diagram shows all the species that may be formed (X = ). The concentration of the species LH is equal to the sum of the concentrations of the two micro-species with the same chemical formula, labelled L1H and L2H. The constant K2 is for a reaction with these two micro-species as products, so that [LH] = [L1H] + [L2H] appears in the numerator, and it follows that this macro-constant is equal to the sum of the two micro-constants for the component reactions: K2 = k21 + k22. However, the constant K1 is for a reaction with these two micro-species as reactants, and [LH] = [L1H] + [L2H] appears in the denominator, so that in this case 1/K1 = 1/k11 + 1/k12, and therefore K1 = k11k12 / (k11 + k12). Thus, in this example there are four micro-constants whose values are subject to two constraints; in consequence, only the two macro-constant values, K1 and K2, can be derived from experimental data.
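The two relations just quoted follow from simple bookkeeping once the macro-species concentration is written as the sum of its micro-species. The short derivation below assumes association (formation) micro-constants, with k21 = [L1H]/([L][H]), k22 = [L2H]/([L][H]), k11 = [LH2]/([L1H][H]) and k12 = [LH2]/([L2H][H]); this labelling is one convention consistent with the statements in the text, not necessarily the notation of the original diagram, and ionic charges are omitted as elsewhere in the article.

% Macro-species concentration as the sum of its micro-species: [LH] = [L1H] + [L2H]
K_2 = \frac{[\mathrm{LH}]}{[\mathrm{L}][\mathrm{H}]}
    = \frac{[\mathrm{L_1H}] + [\mathrm{L_2H}]}{[\mathrm{L}][\mathrm{H}]}
    = k_{21} + k_{22}
\qquad
\frac{1}{K_1} = \frac{[\mathrm{LH}][\mathrm{H}]}{[\mathrm{LH_2}]}
             = \frac{[\mathrm{L_1H}][\mathrm{H}]}{[\mathrm{LH_2}]} + \frac{[\mathrm{L_2H}][\mathrm{H}]}{[\mathrm{LH_2}]}
             = \frac{1}{k_{11}} + \frac{1}{k_{12}},
\quad\text{so}\quad
K_1 = \frac{k_{11}\,k_{12}}{k_{11} + k_{12}}.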
Micro-constant values can, in principle, be determined using a spectroscopic technique, such as infrared spectroscopy, where each micro-species gives a different signal. Methods which have been used to estimate micro-constant values include chemical methods – blocking one of the sites, for example by methylation of a hydroxyl group, followed by determination of the equilibrium constant of the related molecule, from which the micro-constant value for the "parent" molecule may be estimated – and mathematical methods, such as applying numerical procedures to 13C NMR data. Although the value of a micro-constant cannot be determined from experimental data, site occupancy, which is proportional to the micro-constant value, can be very important for biological activity. Therefore, various methods have been developed for estimating micro-constant values. For example, the isomerization constant for L-DOPA has been estimated to have a value of 0.9, so the micro-species L1H and L2H have almost equal concentrations at all pH values. pH considerations (Brønsted constants) pH is defined in terms of the activity of the hydrogen ion: pH = −log10 {H+}. In the approximation of ideal behaviour, activity is replaced by concentration. When pH is measured by means of a glass electrode, a mixed equilibrium constant, also known as a Brønsted constant, may result for an equilibrium such as HL ⇌ L + H. It all depends on whether the electrode is calibrated by reference to solutions of known activity or known concentration. In the latter case the equilibrium constant would be a concentration quotient. If the electrode is calibrated in terms of known hydrogen ion concentrations it would be better to write p[H] rather than pH, but this suggestion is not generally adopted. Hydrolysis constants In aqueous solution the concentration of the hydroxide ion is related to the concentration of the hydrogen ion by Kw = [H][OH], so that [OH] = Kw/[H]. The first step in metal ion hydrolysis can be expressed in two different ways. It follows that . Hydrolysis constants are usually reported in the β* form and therefore often have values much less than 1. For example, if and so that β* = 10⁻¹⁰. In general, when the hydrolysis product contains n hydroxide groups . Conditional constants Conditional constants, also known as apparent constants, are concentration quotients which are not true equilibrium constants but can be derived from them. A very common instance is where pH is fixed at a particular value. For example, in the case of iron(III) interacting with EDTA, a conditional constant could be defined by . This conditional constant will vary with pH. It has a maximum at a certain pH, which is the pH where the ligand sequesters the metal most effectively. In biochemistry equilibrium constants are often measured at a pH fixed by means of a buffer solution. Such constants are, by definition, conditional, and different values may be obtained when using different buffers. Gas-phase equilibria For equilibria in a gas phase, fugacity, f, is used in place of activity. However, fugacity has the dimension of pressure, so it must be divided by a standard pressure, usually 1 bar, in order to produce the dimensionless quantity f/p°. An equilibrium constant is expressed in terms of this dimensionless quantity; for example, for the equilibrium 2 NO2 ⇌ N2O4, it is written in terms of the fugacities of the two species. Fugacity is related to partial pressure, p, by a dimensionless fugacity coefficient ϕ: f = ϕp. Thus, for the example, the constant can equally be written in terms of partial pressures and fugacity coefficients. Usually the standard pressure is omitted from such expressions.
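Written out explicitly, the expressions referred to for the 2 NO2 ⇌ N2O4 example take the following form. This is a reconstruction that simply applies the stated convention of dividing each fugacity by the standard pressure p° (usually 1 bar) and then substituting f = ϕp; it is consistent with the surrounding prose rather than a quotation of the original formulas.

% 2 NO2 <=> N2O4, with fugacities f and fugacity coefficients \phi:
K = \frac{f_{\mathrm{N_2O_4}}/p^{\circ}}{\left(f_{\mathrm{NO_2}}/p^{\circ}\right)^{2}}
  = \frac{\phi_{\mathrm{N_2O_4}}}{\phi_{\mathrm{NO_2}}^{2}}
    \cdot \frac{p_{\mathrm{N_2O_4}}}{p_{\mathrm{NO_2}}^{2}}
    \cdot p^{\circ}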
Expressions for equilibrium constants in the gas phase then resemble the expression for solution equilibria with fugacity coefficient in place of activity coefficient and partial pressure in place of concentration. Thermodynamic basis for equilibrium constant expressions Thermodynamic equilibrium is characterized by the free energy for the whole (closed) system being a minimum. For systems at constant temperature and pressure the Gibbs free energy is minimum. The slope of the reaction free energy with respect to the extent of reaction, ξ, is zero when the free energy is at its minimum value. The free energy change, dGr, can be expressed as a weighted sum of change in amount times the chemical potential, the partial molar free energy of the species. The chemical potential, μi, of the ith species in a chemical reaction is the partial derivative of the free energy with respect to the number of moles of that species, Ni A general chemical equilibrium can be written as where nj are the stoichiometric coefficients of the reactants in the equilibrium equation, and mj are the coefficients of the products. At equilibrium The chemical potential, μi, of the ith species can be calculated in terms of its activity, ai. μ is the standard chemical potential of the species, R is the gas constant and T is the temperature. Setting the sum for the reactants j to be equal to the sum for the products, k, so that δGr(Eq) = 0 Rearranging the terms, This relates the standard Gibbs free energy change, ΔGo to an equilibrium constant, K, the reaction quotient of activity values at equilibrium. Equivalence of thermodynamic and kinetic expressions for equilibrium constants At equilibrium the rate of the forward reaction is equal to the backward reaction rate. A simple reaction, such as ester hydrolysis AB + H2O <=> AH + B(OH) has reaction rates given by expressions According to Guldberg and Waage, equilibrium is attained when the forward and backward reaction rates are equal to each other. In these circumstances, an equilibrium constant is defined to be equal to the ratio of the forward and backward reaction rate constants . The concentration of water may be taken to be constant, resulting in the simpler expression . This particular concentration quotient, , has the dimension of concentration, but the thermodynamic equilibrium constant, , is always dimensionless. Unknown activity coefficient values It is very rare for activity coefficient values to have been determined experimentally for a system at equilibrium. There are three options for dealing with the situation where activity coefficient values are not known from experimental measurements. Use calculated activity coefficients, together with concentrations of reactants. For equilibria in solution estimates of the activity coefficients of charged species can be obtained using Debye–Hückel theory, an extended version, or SIT theory. For uncharged species, the activity coefficient γ0 mostly follows a "salting-out" model: log10 γ0 = bI where I stands for ionic strength. Assume that the activity coefficients are all equal to 1. This is acceptable when all concentrations are very low. For equilibria in solution use a medium of high ionic strength. In effect this redefines the standard state as referring to the medium. Activity coefficients in the standard state are, by definition, equal to 1. The value of an equilibrium constant determined in this manner is dependent on the ionic strength. 
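To make the first of the three options above concrete, the small Python sketch below estimates single-ion activity coefficients from the ionic strength using the Davies form of the Debye–Hückel equation. The constant A ≈ 0.51 and the 0.3·I correction term are the usual textbook values for water at 25 °C; they are assumptions of this sketch, not values given in the article, and the estimate is only reasonable up to moderate ionic strengths (roughly I ≤ 0.5 mol/L).

import math

def log10_gamma(z, ionic_strength, A=0.51):
    # Davies approximation to the activity coefficient of an ion of charge z
    # at ionic strength I (mol/L) in water at 25 degrees C.
    sqrt_I = math.sqrt(ionic_strength)
    return -A * z**2 * (sqrt_I / (1 + sqrt_I) - 0.3 * ionic_strength)

# Example: estimate activity coefficients at I = 0.1 mol/L.
for z in (1, 2):
    gamma = 10 ** log10_gamma(z, 0.1)
    print(f"|z| = {z}: gamma = {gamma:.2f}")   # roughly 0.78 for |z| = 1, 0.37 for |z| = 2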
When published constants refer to an ionic strength other than the one required for a particular application, they may be adjusted by means of specific ion interaction theory (SIT) and other theories.
Dimensionality
An equilibrium constant is related to the standard Gibbs free energy change of reaction, ΔG°, by the expression ΔG° = −RT ln K. Therefore, K must be a dimensionless number from which a logarithm can be derived. In the case of a simple equilibrium A + B <=> AB, the thermodynamic equilibrium constant is defined in terms of the activities, {AB}, {A} and {B}, of the species in equilibrium with each other: K = {AB}/({A}{B}). Now, each activity term can be expressed as a product of a concentration and a corresponding activity coefficient, γ. Therefore, K = ([AB]/([A][B])) × (γAB/(γA γB)). When Γ, the quotient of activity coefficients, is set equal to 1, we get K = [AB]/([A][B]). K then appears to have the dimension of 1/concentration. This is what usually happens in practice when an equilibrium constant is calculated as a quotient of concentration values. This can be avoided by dividing each concentration by its standard-state value (usually mol/L or bar), which is standard practice in chemistry. The assumption underlying this practice is that the quotient of activities is constant under the conditions in which the equilibrium constant value is determined. These conditions are usually achieved by keeping the reaction temperature constant and by using a medium of relatively high ionic strength as the solvent. It is not unusual, particularly in texts relating to biochemical equilibria, to see an equilibrium constant value quoted with a dimension. The justification for this practice is that the concentration scale used may be either mol dm^−3 or mmol dm^−3, so that the concentration unit has to be stated in order to avoid any ambiguity. Note: when the concentration values are measured on the mole fraction scale, all concentrations and activity coefficients are dimensionless quantities. In general, equilibria between two reagents can be expressed as pA + qB <=> ApBq, in which case the equilibrium constant is defined, in terms of numerical concentration values, as K = [ApBq]/([A]^p [B]^q). The apparent dimension of this K value is concentration^(1−p−q); this may be written as M^(1−p−q) or mM^(1−p−q), where the symbol M signifies a molar concentration (mol dm^−3). The apparent dimension of a dissociation constant is the reciprocal of the apparent dimension of the corresponding association constant, and vice versa. When discussing the thermodynamics of chemical equilibria it is necessary to take dimensionality into account. There are two possible approaches.
Set the dimension of Γ, the quotient of activity coefficients, to be the reciprocal of the dimension of the concentration quotient. This is almost universal practice in the field of stability constant determinations. The "equilibrium constant" K is then dimensionless. It will be a function of the ionic strength of the medium used for the determination. Setting the numerical value of Γ to be 1 is equivalent to re-defining the standard states.
Replace each concentration term [A] by the dimensionless quotient [A]/[A]°, where [A]° is the concentration of reagent A in its standard state (usually 1 mol/L or 1 bar). By definition the numerical value of [A]° is 1, so Γ also has a numerical value of 1.
In both approaches the numerical value of the stability constant is unchanged. The first is more useful for practical purposes; in fact, the unit of the concentration quotient is often attached to a published stability constant value in the biochemical literature.
The second approach is consistent with the standard exposition of Debye–Hückel theory, where quantities such as concentrations are taken to be pure numbers.
Water as both reactant and solvent
For reactions in aqueous solution, such as an acid dissociation reaction AH + H2O <=> A− + H3O+, the concentration of water may be taken as being constant and the formation of the hydronium ion is implicit: AH <=> A− + H+. Water concentration is omitted from expressions defining equilibrium constants, except when solutions are very concentrated: K = [A−][H+]/[AH] (K defined as a dissociation constant). Similar considerations apply to metal ion hydrolysis reactions.
Enthalpy and entropy: temperature dependence
If both the equilibrium constant, K, and the standard enthalpy change, ΔH°, for a reaction have been determined experimentally, the standard entropy change for the reaction is easily derived. Since ΔG° = −RT ln K and ΔG° = ΔH° − TΔS°, it follows that ΔS° = ΔH°/T + R ln K. To a first approximation the standard enthalpy change is independent of temperature. Using this approximation, definite integration of the van 't Hoff equation gives ln K2 − ln K1 = −(ΔH°/R)(1/T2 − 1/T1). This equation can be used to calculate the value of log K at a temperature T2, knowing the value at temperature T1. The van 't Hoff equation also shows that, for an exothermic reaction (ΔH° < 0), when temperature increases K decreases and when temperature decreases K increases, in accordance with Le Chatelier's principle. The reverse applies when the reaction is endothermic. When K has been determined at more than two temperatures, a straight-line fitting procedure may be applied to a plot of ln K against 1/T to obtain a value for ΔH°. Error propagation theory can be used to show that, with this procedure, the error on the calculated ΔH° value is much greater than the error on individual log K values. Consequently, K needs to be determined to high precision when using this method. For example, with a silver ion-selective electrode each log K value was determined with a precision of ca. 0.001 and the method was applied successfully. Standard thermodynamic arguments can be used to show that, more generally, enthalpy will change with temperature: (∂ΔH°/∂T)p = ΔCp, where Cp is the heat capacity at constant pressure.
A more complex formulation
The calculation of K at a particular temperature from a known K at another given temperature can be approached as follows if standard thermodynamic properties are available. The effect of temperature on the equilibrium constant is equivalent to the effect of temperature on the Gibbs energy, because ln K = −ΔrG°/RT, where ΔrG° is the reaction standard Gibbs energy, which is the sum of the standard Gibbs energies of the reaction products minus the sum of the standard Gibbs energies of the reactants. Here, the term "standard" denotes ideal behaviour (i.e., infinite dilution) and a hypothetical standard concentration (typically 1 mol/kg). It does not imply any particular temperature or pressure; although this is contrary to IUPAC recommendation, it is more convenient when describing aqueous systems over wide temperature and pressure ranges. The standard Gibbs energy (for each species or for the entire reaction) can be represented (from the basic definitions) as G°(T) = H°(Tref) + ∫ Cp° dT − T [S°(Tref) + ∫ (Cp°/T) dT], with the integrals running from a reference temperature Tref to T. In this equation, the effect of temperature on the Gibbs energy (and thus on the equilibrium constant) is ascribed entirely to heat capacity. To evaluate the integrals in this equation, the form of the dependence of heat capacity on temperature needs to be known. If the standard molar heat capacity Cp° can be approximated by some analytic function of temperature (e.g.
the Shomate equation), then the integrals involved in calculating other parameters may be solved to yield analytic expressions for them. For example, using simple analytic approximations for Cp°(T), one form for pure substances (solids, gases, liquids) and another, involving the parameters a and b and the absolute entropy, S̆, for ionic species over a limited temperature range, the integrals can be evaluated and a closed-form expression for the temperature dependence is obtained. The constants A, B, C, a, b and the absolute entropy, S̆, required for evaluation of Cp°(T), as well as the values of G298 K and S298 K for many species, are tabulated in the literature.
Pressure dependence
The pressure dependence of the equilibrium constant is usually weak in the range of pressures normally encountered in industry, and therefore it is usually neglected in practice. This is true for condensed reactants/products (i.e., when reactants and products are solids or liquids) as well as gaseous ones. For a gaseous-reaction example, one may consider the well-studied reaction of hydrogen with nitrogen to produce ammonia: N2 + 3 H2 <=> 2 NH3. If the pressure is increased by the addition of an inert gas, then neither the composition at equilibrium nor the equilibrium constant is appreciably affected (because the partial pressures remain constant, assuming ideal-gas behaviour of all gases involved). However, the composition at equilibrium will depend appreciably on pressure when: the pressure is changed by compression or expansion of the gaseous reacting system, and the reaction results in a change of the number of moles of gas in the system. In the example reaction above, the number of moles changes from 4 to 2, and an increase of pressure by system compression will result in appreciably more ammonia in the equilibrium mixture. In the general case of a gaseous reaction α A + β B <=> σ S + τ T, the change of mixture composition with pressure can be quantified using Kp = (p_S^σ p_T^τ)/(p_A^α p_B^β) = (X_S^σ X_T^τ)/(X_A^α X_B^β) P^(σ+τ−α−β) = KX P^(σ+τ−α−β), where p denotes the partial pressures and X the mole fractions of the components, P is the total system pressure, Kp is the equilibrium constant expressed in terms of partial pressures and KX is the equilibrium constant expressed in terms of mole fractions. The above change in composition is in accordance with Le Chatelier's principle and does not involve any change of the equilibrium constant with the total system pressure. Indeed, for ideal-gas reactions Kp is independent of pressure. In a condensed phase, the pressure dependence of the equilibrium constant is associated with the reaction volume. For the reaction α A + β B <=> σ S + τ T, the reaction volume is ΔV̄ = σV̄_S + τV̄_T − αV̄_A − βV̄_B, where V̄ denotes a partial molar volume of a reactant or a product. For the above reaction, one can expect the change of the reaction equilibrium constant (based either on the mole-fraction or the molal-concentration scale) with pressure at constant temperature to be (∂ ln K/∂P)_T = −ΔV̄/(RT). The matter is complicated as the partial molar volume is itself dependent on pressure.
Effect of isotopic substitution
Isotopic substitution can lead to changes in the values of equilibrium constants, especially if hydrogen is replaced by deuterium (or tritium). This equilibrium isotope effect is analogous to the kinetic isotope effect on rate constants, and is primarily due to the change in zero-point vibrational energy of H–X bonds upon isotopic substitution. The zero-point energy is inversely proportional to the square root of the mass of the vibrating hydrogen atom, and will therefore be smaller for a D–X bond than for an H–X bond.
An example is a hydrogen atom abstraction reaction R' + H–R <=> R'–H + R with equilibrium constant KH, where R' and R are organic radicals such that R' forms a stronger bond to hydrogen than does R. The decrease in zero-point energy due to deuterium substitution will then be more important for R'–H than for R–H, and R'–D will be stabilized more than R–D, so that the equilibrium constant KD for R' + D–R <=> R'–D + R is greater than KH. This is summarized in the rule "the heavier atom favors the stronger bond". Similar effects occur in solution for acid dissociation constants (Ka) which describe the transfer of H+ or D+ from a weak aqueous acid to a solvent molecule: HA + H2O <=> H3O+ + A− or DA + D2O <=> D3O+ + A−. The deuterated acid is studied in heavy water, since if it were dissolved in ordinary water the deuterium would rapidly exchange with hydrogen in the solvent. The product species H3O+ (or D3O+) is a stronger acid than the solute acid, so that it dissociates more easily, and its H–O (or D–O) bond is weaker than the H–A (or D–A) bond of the solute acid. The decrease in zero-point energy due to isotopic substitution is therefore less important in D3O+ than in DA, so that KD < KH, and the deuterated acid in D2O is weaker than the non-deuterated acid in H2O. In many cases the difference of logarithmic constants pKD − pKH is about 0.6, so that the pD corresponding to 50% dissociation of the deuterated acid is about 0.6 units higher than the pH for 50% dissociation of the non-deuterated acid. For similar reasons the self-ionization of heavy water is less than that of ordinary water at the same temperature.
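As a closing numerical illustration of the temperature dependence discussed above, the sketch below applies the integrated van 't Hoff equation, assuming a temperature-independent standard enthalpy change; the log K and ΔH° values are hypothetical and serve only to show the arithmetic.

import math

R = 8.314  # gas constant, J/(mol K)

def log_K_at_T2(log_K1, T1, T2, delta_H_standard):
    """
    Integrated van 't Hoff equation with the standard enthalpy change assumed
    independent of temperature:
        ln(K2/K1) = -(dH/R) * (1/T2 - 1/T1)
    Temperatures in kelvin, delta_H_standard in J/mol; returns log10 K at T2.
    """
    ln_ratio = -(delta_H_standard / R) * (1.0 / T2 - 1.0 / T1)
    return log_K1 + ln_ratio / math.log(10)

# Hypothetical exothermic reaction: log K = 4.00 at 298.15 K, dH = -40 kJ/mol.
print(log_K_at_T2(4.00, 298.15, 323.15, -40e3))  # ~3.46: K falls as T rises,
# in accordance with Le Chatelier's principle for an exothermic reaction.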
Physical sciences
Thermodynamics
Chemistry
1123615
https://en.wikipedia.org/wiki/Herbig%E2%80%93Haro%20object
Herbig–Haro object
Herbig–Haro (HH) objects are bright patches of nebulosity associated with newborn stars. They are formed when narrow jets of partially ionised gas ejected by stars collide with nearby clouds of gas and dust at several hundred kilometers per second. Herbig–Haro objects are commonly found in star-forming regions, and several are often seen around a single star, aligned with its rotational axis. Most of them lie within about one parsec (3.26 light-years) of the source, although some have been observed several parsecs away. HH objects are transient phenomena that last around a few tens of thousands of years. They can change visibly over timescales of a few years as they move rapidly away from their parent star into the gas clouds of interstellar space (the interstellar medium or ISM). Hubble Space Telescope observations have revealed the complex evolution of HH objects over the period of a few years, as parts of the nebula fade while others brighten as they collide with the clumpy material of the interstellar medium. First observed in the late 19th century by Sherburne Wesley Burnham, Herbig–Haro objects were recognised as a distinct type of emission nebula in the 1940s. The first astronomers to study them in detail were George Herbig and Guillermo Haro, after whom they have been named. Herbig and Haro were working independently on studies of star formation when they first analysed the objects, and recognised that they were a by-product of the star formation process. Although HH objects are visible-wavelength phenomena, many remain invisible at these wavelengths due to dust and gas, and can only be detected at infrared wavelengths. Such objects, when observed in near-infrared, are called molecular hydrogen emission-line objects (MHOs). Discovery and history of observations The first HH object was observed in the late 19th century by Sherburne Wesley Burnham, when he observed the star T Tauri with the refracting telescope at Lick Observatory and noted a small patch of nebulosity nearby. It was thought to be an emission nebula, later becoming known as Burnham's Nebula, and was not recognized as a distinct class of object. T Tauri was found to be a very young and variable star, and is the prototype of the class of similar objects known as T Tauri stars which have yet to reach a state of hydrostatic equilibrium between gravitational collapse and energy generation through nuclear fusion at their centres. Fifty years after Burnham's discovery, several similar nebulae were discovered with almost star-like appearance. Both George Herbig and Guillermo Haro made independent observations of several of these objects in the Orion Nebula during the 1940s. Herbig also looked at Burnham's Nebula and found it displayed an unusual electromagnetic spectrum, with prominent emission lines of hydrogen, sulfur and oxygen. Haro found that all the objects of this type were invisible in infrared light. Following their independent discoveries, Herbig and Haro met at an astronomy conference in Tucson, Arizona in December 1949. Herbig had initially paid little attention to the objects he had discovered, being primarily concerned with the nearby stars, but on hearing Haro's findings he carried out more detailed studies of them. The Soviet astronomer Viktor Ambartsumian gave the objects their name (Herbig–Haro objects, normally shortened to HH objects), and based on their occurrence near young stars (a few hundred thousand years old), suggested they might represent an early stage in the formation of T Tauri stars. 
Studies of the HH objects showed they were highly ionised, and early theorists speculated that they were reflection nebulae containing low-luminosity hot stars deep inside. But the absence of infrared radiation from the nebulae meant there could not be stars within them, as these would have emitted abundant infrared light. In 1975 American astronomer R. D. Schwartz theorized that winds from T Tauri stars produce shocks in the ambient medium on encounter, resulting in generation of visible light. With the discovery of the first proto-stellar jet in HH 46/47, it became clear that HH objects are indeed shock-induced phenomena with shocks being driven by a collimated jet from protostars. Formation Stars form by gravitational collapse of interstellar gas clouds. As the collapse increases the density, radiative energy loss decreases due to increased opacity. This raises the temperature of the cloud which prevents further collapse, and a hydrostatic equilibrium is established. Gas continues to fall towards the core in a rotating disk. The core of this system is called a protostar. Some of the accreting material is ejected out along the star's axis of rotation in two jets of partially ionised gas (plasma). The mechanism for producing these collimated bipolar jets is not entirely understood, but it is believed that interaction between the accretion disk and the stellar magnetic field accelerates some of the accreting material from within a few astronomical units of the star away from the disk plane. At these distances the outflow is divergent, fanning out at an angle in the range of 10−30°, but it becomes increasingly collimated at distances of tens to hundreds of astronomical units from the source, as its expansion is constrained. The jets also carry away the excess angular momentum resulting from accretion of material onto the star, which would otherwise cause the star to rotate too rapidly and disintegrate. When these jets collide with the interstellar medium, they give rise to the small patches of bright emission which comprise HH objects. Properties Electromagnetic emission from HH objects is caused when their associated shock waves collide with the interstellar medium, creating what is called the "terminal working surfaces". The spectrum is continuous, but also has intense emission lines of neutral and ionized species. Spectroscopic observations of HH objects' doppler shifts indicate velocities of several hundred kilometers per second, but the emission lines in those spectra are weaker than what would be expected from such high-speed collisions. This suggests that some of the material they are colliding with is also moving along the beam, although at a lower speed. Spectroscopic observations of HH objects show they are moving away from the source stars at speeds of several hundred kilometres per second. In recent years, the high optical resolution of the Hubble Space Telescope has revealed the proper motion (movement along the sky plane) of many HH objects in observations spaced several years apart. As they move away from the parent star, HH objects evolve significantly, varying in brightness on timescales of a few years. Individual compact knots or clumps within an object may brighten and fade or disappear entirely, while new knots have been seen to appear. These arise likely because of the precession of their jets, along with the pulsating and intermittent eruptions from their parent stars. 
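A simple calculation, sketched below, shows how the proper motions measured in such multi-epoch images translate into transverse velocities; the angular shift and distance used here are hypothetical values of roughly the right order for HH knots, and the conversion relies on the standard relation v ≈ 4.74 μ d, with μ in arcseconds per year and d in parsecs.

# Sketch: converting a measured proper motion of an HH knot into a transverse
# (plane-of-sky) velocity.  The proper motion and distance below are assumed,
# illustrative values, not measurements of any particular object.

KM_S_PER_ARCSEC_YR_PC = 4.74  # v[km/s] = 4.74 * mu[arcsec/yr] * d[pc]

def transverse_velocity_km_s(proper_motion_arcsec_per_yr, distance_pc):
    """Transverse velocity from proper motion and distance."""
    return KM_S_PER_ARCSEC_YR_PC * proper_motion_arcsec_per_yr * distance_pc

# A knot that shifts 0.15 arcsec/yr at an assumed distance of 400 pc:
print(transverse_velocity_km_s(0.15, 400))  # ~284 km/s, i.e. several hundred km/s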
Faster jets catch up with earlier slower jets, creating the so-called "internal working surfaces", where streams of gas collide and generate shock waves and consequent emissions. The total mass being ejected by stars to form typical HH objects is estimated to be of the order of 10−8 to 10−6 per year, a very small amount of material compared to the mass of the stars themselves but amounting to about 1–10% of the total mass accreted by the source stars in a year. Mass loss tends to decrease with increasing age of the source. The temperatures observed in HH objects are typically about 9,000–12,000 K, similar to those found in other ionized nebulae such as H II regions and planetary nebulae. Densities, on the other hand, are higher than in other nebulae, ranging from a few thousand to a few tens of thousands of particles per cm3, compared to a few thousand particles per cm3 in most H II regions and planetary nebulae. Densities also decrease as the source evolves over time. HH objects consist mostly of hydrogen and helium, which account for about 75% and 24% of their mass respectively. Around 1% of the mass of HH objects is made up of heavier chemical elements, including oxygen, sulfur, nitrogen, iron, calcium and magnesium. Abundances of these elements, determined from emission lines of respective ions, are generally similar to their cosmic abundances. Many chemical compounds found in the surrounding interstellar medium, but not present in the source material, such as metal hydrides, are believed to have been produced by shock-induced chemical reactions. Around 20–30% of the gas in HH objects is ionized near the source star, but this proportion decreases at increasing distances. This implies the material is ionized in the polar jet, and recombines as it moves away from the star, rather than being ionized by later collisions. Shocking at the end of the jet can re-ionise some material, giving rise to bright "caps". Numbers and distribution HH objects are named approximately in order of their identification; HH 1/2 being the earliest such objects to be identified. More than a thousand individual objects are now known. They are always present in star-forming H II regions, and are often found in large groups. They are typically observed near Bok globules (dark nebulae which contain very young stars) and often emanate from them. Several HH objects have been seen near a single energy source, forming a string of objects along the line of the polar axis of the parent star. The number of known HH objects has increased rapidly over the last few years, but that is a very small proportion of the estimated up to 150,000 in the Milky Way, the vast majority of which are too far away to be resolved. Most HH objects lie within about one parsec of their parent star. Many, however, are seen several parsecs away. HH 46/47 is located about away from the Sun and is powered by a class I protostar binary. The bipolar jet is slamming into the surrounding medium at a velocity of 300 kilometers per second, producing two emission caps about apart. Jet outflow is accompanied by a long molecular gas outflow which is swept up by the jet itself. Infrared studies by Spitzer Space Telescope have revealed a variety of chemical compounds in the molecular outflow, including water (ice), methanol, methane, carbon dioxide (dry ice) and various silicates. Located around away in the Orion A molecular cloud, HH 34 is produced by a highly collimated bipolar jet powered by a class I protostar. 
Matter in the jet is moving at about 220 kilometers per second. Two bright bow shocks, separated by about , are present on the opposite sides of the source, followed by series of fainter ones at larger distances, making the whole complex about long. The jet is surrounded by a long weak molecular outflow near the source. Source stars The stars from which HH jets are emitted are all very young stars, a few tens of thousands to about a million years old. The youngest of these are still protostars in the process of collecting from their surrounding gases. Astronomers divide these stars into classes 0, I, II and III, according to how much infrared radiation the stars emit. A greater amount of infrared radiation implies a larger amount of cooler material surrounding the star, which indicates it is still coalescing. The numbering of the classes arises because class 0 objects (the youngest) were not discovered until classes I, II and III had already been defined. Class 0 objects are only a few thousand years old; so young that they are not yet undergoing nuclear fusion reactions at their centres. Instead, they are powered only by the gravitational potential energy released as material falls onto them. They mostly contain molecular outflows with low velocities (less than a hundred kilometres per second) and weak emissions in the outflows. Nuclear fusion has begun in the cores of Class I objects, but gas and dust are still falling onto their surfaces from the surrounding nebula, and most of their luminosity is accounted for by gravitational energy. They are generally still shrouded in dense clouds of dust and gas, which obscure all their visible light and as a result can only be observed at infrared and radio wavelengths. Outflows from this class are dominated by ionized species and velocities can range up to 400 kilometres per second. The in-fall of gas and dust has largely finished in Class II objects (Classical T Tauri stars), but they are still surrounded by disks of dust and gas, and produce weak outflows of low luminosity. Class III objects (Weak-line T Tauri stars) have only trace remnants of their original accretion disk. About 80% of the stars giving rise to HH objects are binary or multiple systems (two or more stars orbiting each other), which is a much higher proportion than that found for low mass stars on the main sequence. This may indicate that binary systems are more likely to generate the jets which give rise to HH objects, and evidence suggests the largest HH outflows might be formed when multiple–star systems disintegrate. It is thought that most stars originate from multiple star systems, but that a sizable fraction of these systems are disrupted before their stars reach the main sequence due to gravitational interactions with nearby stars and dense clouds of gas. The first and currently only (as of May 2017) large-scale Herbig-Haro object around a proto-brown dwarf is HH 1165, which is connected to the proto-brown dwarf Mayrit 1701117. HH 1165 has a length of 0.8 light-years (0.26 parsec) and is located in the vicinity of the sigma Orionis cluster. Previously only small mini-jets (≤0.03 parsec) were found around proto-brown dwarfs. Infrared counterparts HH objects associated with very young stars or very massive protostars are often hidden from view at optical wavelengths by the cloud of gas and dust from which they form. The intervening material can diminish the visual magnitude by factors of tens or even hundreds at optical wavelengths. 
Such deeply embedded objects can only be observed at infrared or radio wavelengths, usually in the frequencies of hot molecular hydrogen or warm carbon monoxide emission. In recent years, infrared images have revealed dozens of examples of "infrared HH objects". Most look like bow waves (similar to the waves at the head of a ship), and so are usually referred to as molecular "bow shocks". The physics of infrared bow shocks can be understood in much the same way as that of HH objects, since these objects are essentially the same – supersonic shocks driven by collimated jets from the opposite poles of a protostar. It is only the conditions in the jet and surrounding cloud that are different, causing infrared emission from molecules rather than optical emission from atoms and ions. In 2009 the acronym "MHO", for Molecular Hydrogen emission-line Object, was approved for such objects, detected in near-infrared, by the International Astronomical Union Working Group on Designations, and has been entered into their on-line Reference Dictionary of Nomenclature of Celestial Objects. As of 2010, almost 1000 objects are contained in the MHO catalog. Ultraviolet Herbig-Haro objects HH objects have been observed in the ultraviolet spectrum.
Physical sciences
Stellar astronomy
Astronomy
1123902
https://en.wikipedia.org/wiki/Biosynthesis
Biosynthesis
Biosynthesis, i.e., chemical synthesis occurring in biological contexts, is a term most often referring to multi-step, enzyme-catalyzed processes where chemical substances absorbed as nutrients (or previously converted through biosynthesis) serve as enzyme substrates, with conversion by the living organism either into simpler or more complex products. Examples of biosynthetic pathways include those for the production of amino acids, lipid membrane components, and nucleotides, but also for the production of all classes of biological macromolecules, and of acetyl-coenzyme A, adenosine triphosphate, nicotinamide adenine dinucleotide and other key intermediate and transactional molecules needed for metabolism. Thus, in biosynthesis, any of an array of compounds, from simple to complex, are converted into other compounds, and so it includes both the catabolism and anabolism (building up and breaking down) of complex molecules (including macromolecules). Biosynthetic processes are often represented via charts of metabolic pathways. A particular biosynthetic pathway may be located within a single cellular organelle (e.g., mitochondrial fatty acid synthesis pathways), while others involve enzymes that are located across an array of cellular organelles and structures (e.g., the biosynthesis of glycosylated cell surface proteins). Elements of biosynthesis Elements of biosynthesis include: precursor compounds, chemical energy (e.g. ATP), and catalytic enzymes which may need coenzymes (e.g. NADH, NADPH). These elements create monomers, the building blocks for macromolecules. Some important biological macromolecules include: proteins, which are composed of amino acid monomers joined via peptide bonds, and DNA molecules, which are composed of nucleotides joined via phosphodiester bonds. Properties of chemical reactions Biosynthesis occurs due to a series of chemical reactions. For these reactions to take place, the following elements are necessary: Precursor compounds: these compounds are the starting molecules or substrates in a reaction. These may also be viewed as the reactants in a given chemical process. Chemical energy: chemical energy can be found in the form of high energy molecules. These molecules are required for energetically unfavourable reactions. Furthermore, the hydrolysis of these compounds drives a reaction forward. High energy molecules, such as ATP, have three phosphates. Often, the terminal phosphate is split off during hydrolysis and transferred to another molecule. Catalysts: these may be for example metal ions or coenzymes and they catalyze a reaction by increasing the rate of the reaction and lowering the activation energy. In the simplest sense, the reactions that occur in biosynthesis have the following format: Reactant ->[][enzyme] Product Some variations of this basic equation which will be discussed later in more detail are: Simple compounds which are converted into other compounds, usually as part of a multiple step reaction pathway. Two examples of this type of reaction occur during the formation of nucleic acids and the charging of tRNA prior to translation. For some of these steps, chemical energy is required: {Precursor~molecule} + ATP <=> {product~AMP} + PP_i Simple compounds that are converted into other compounds with the assistance of cofactors. For example, the synthesis of phospholipids requires acetyl CoA, while the synthesis of another membrane component, sphingolipids, requires NADH and FADH for the formation the sphingosine backbone. 
The general equation for these examples is: {Precursor~molecule} + Cofactor ->[][enzyme] macromolecule Simple compounds that join to create a macromolecule. For example, fatty acids join to form phospholipids. In turn, phospholipids and cholesterol interact noncovalently in order to form the lipid bilayer. This reaction may be depicted as follows: {Molecule~1} + Molecule~2 -> macromolecule Lipid Many intricate macromolecules are synthesized in a pattern of simple, repeated structures. For example, the simplest structures of lipids are fatty acids. Fatty acids are hydrocarbon derivatives; they contain a carboxyl group "head" and a hydrocarbon chain "tail". These fatty acids create larger components, which in turn incorporate noncovalent interactions to form the lipid bilayer. Fatty acid chains are found in two major components of membrane lipids: phospholipids and sphingolipids. A third major membrane component, cholesterol, does not contain these fatty acid units. Eukaryotic phospholipids The foundation of all biomembranes consists of a bilayer structure of phospholipids. The phospholipid molecule is amphipathic; it contains a hydrophilic polar head and a hydrophobic nonpolar tail. The phospholipid heads interact with each other and aqueous media, while the hydrocarbon tails orient themselves in the center, away from water. These latter interactions drive the bilayer structure that acts as a barrier for ions and molecules. There are various types of phospholipids; consequently, their synthesis pathways differ. However, the first step in phospholipid synthesis involves the formation of phosphatidate or diacylglycerol 3-phosphate at the endoplasmic reticulum and outer mitochondrial membrane. The synthesis pathway is found below: The pathway starts with glycerol 3-phosphate, which gets converted to lysophosphatidate via the addition of a fatty acid chain provided by acyl coenzyme A. Then, lysophosphatidate is converted to phosphatidate via the addition of another fatty acid chain contributed by a second acyl CoA; all of these steps are catalyzed by the glycerol phosphate acyltransferase enzyme. Phospholipid synthesis continues in the endoplasmic reticulum, and the biosynthesis pathway diverges depending on the components of the particular phospholipid. Sphingolipids Like phospholipids, these fatty acid derivatives have a polar head and nonpolar tails. Unlike phospholipids, sphingolipids have a sphingosine backbone. Sphingolipids exist in eukaryotic cells and are particularly abundant in the central nervous system. For example, sphingomyelin is part of the myelin sheath of nerve fibers. Sphingolipids are formed from ceramides that consist of a fatty acid chain attached to the amino group of a sphingosine backbone. These ceramides are synthesized from the acylation of sphingosine. The biosynthetic pathway for sphingosine is found below: As the image denotes, during sphingosine synthesis, palmitoyl CoA and serine undergo a condensation reaction which results in the formation of 3-dehydrosphinganine. This product is then reduced to form dihydrospingosine, which is converted to sphingosine via the oxidation reaction by FAD. Cholesterol This lipid belongs to a class of molecules called sterols. Sterols have four fused rings and a hydroxyl group. Cholesterol is a particularly important molecule. Not only does it serve as a component of lipid membranes, it is also a precursor to several steroid hormones, including cortisol, testosterone, and estrogen. Cholesterol is synthesized from acetyl CoA. 
In outline, this synthesis occurs in three stages, with the first stage taking place in the cytoplasm and the second and third stages occurring in the endoplasmic reticulum. The stages are as follows:
1. The synthesis of isopentenyl pyrophosphate, the "building block" of cholesterol
2. The formation of squalene via the condensation of six molecules of isopentenyl phosphate
3. The conversion of squalene into cholesterol via several enzymatic reactions
Nucleotides
The biosynthesis of nucleotides involves enzyme-catalyzed reactions that convert substrates into more complex products. Nucleotides are the building blocks of DNA and RNA. Nucleotides are composed of a five-membered ring formed from ribose sugar in RNA, and deoxyribose sugar in DNA; these sugars are linked to a purine or pyrimidine base with a glycosidic bond and a phosphate group at the 5' location of the sugar.
Purine nucleotides
The RNA nucleotides adenosine and guanosine consist of a purine base attached to a ribose sugar with a glycosidic bond. In the case of the DNA nucleotides deoxyadenosine and deoxyguanosine, the purine bases are attached to a deoxyribose sugar with a glycosidic bond. The purine bases on DNA and RNA nucleotides are synthesized in a twelve-step reaction mechanism present in most single-celled organisms. Higher eukaryotes employ a similar reaction mechanism in ten reaction steps. Purine bases are synthesized by converting phosphoribosyl pyrophosphate (PRPP) to inosine monophosphate (IMP), which is the first key intermediate in purine base biosynthesis. Further enzymatic modification of IMP produces the adenosine and guanosine bases of nucleotides. The first step in purine biosynthesis is a condensation reaction, performed by glutamine-PRPP amidotransferase. This enzyme transfers the amino group from glutamine to PRPP, forming 5-phosphoribosylamine. The following step requires the activation of glycine by the addition of a phosphate group from ATP. GAR synthetase performs the condensation of activated glycine onto 5-phosphoribosylamine, forming glycineamide ribonucleotide (GAR). GAR transformylase adds a formyl group onto the amino group of GAR, forming formylglycinamide ribonucleotide (FGAR). FGAR amidotransferase catalyzes the addition of a nitrogen group to FGAR, forming formylglycinamidine ribonucleotide (FGAM). FGAM cyclase catalyzes ring closure, which involves removal of a water molecule, forming the 5-membered imidazole ring of 5-aminoimidazole ribonucleotide (AIR). N5-CAIR synthetase transfers a carboxyl group, forming the intermediate N5-carboxyaminoimidazole ribonucleotide (N5-CAIR). N5-CAIR mutase rearranges the carboxyl functional group and transfers it onto the imidazole ring, forming carboxyaminoimidazole ribonucleotide (CAIR). The two-step mechanism of CAIR formation from AIR is mostly found in single-celled organisms. Higher eukaryotes contain the enzyme AIR carboxylase, which transfers a carboxyl group directly to the AIR imidazole ring, forming CAIR. SAICAR synthetase forms a peptide bond between aspartate and the added carboxyl group of the imidazole ring, forming N-succinyl-5-aminoimidazole-4-carboxamide ribonucleotide (SAICAR). SAICAR lyase removes the carbon skeleton of the added aspartate, leaving the amino group and forming 5-aminoimidazole-4-carboxamide ribonucleotide (AICAR). AICAR transformylase transfers a formyl group to AICAR, forming N-formylaminoimidazole-4-carboxamide ribonucleotide (FAICAR).
The final step involves the enzyme IMP synthase, which performs the purine ring closure and forms the inosine monophosphate intermediate. Pyrimidine nucleotides Other DNA and RNA nucleotide bases that are linked to the ribose sugar via a glycosidic bond are thymine, cytosine and uracil (which is only found in RNA). Uridine monophosphate biosynthesis involves an enzyme that is located in the mitochondrial inner membrane and multifunctional enzymes that are located in the cytosol. The first step involves the enzyme carbamoyl phosphate synthase combining glutamine with CO2 in an ATP dependent reaction to form carbamoyl phosphate. Aspartate carbamoyltransferase condenses carbamoyl phosphate with aspartate to form uridosuccinate. Dihydroorotase performs ring closure, a reaction that loses water, to form dihydroorotate. Dihydroorotate dehydrogenase, located within the mitochondrial inner membrane, oxidizes dihydroorotate to orotate. Orotate phosphoribosyl hydrolase (OMP pyrophosphorylase) condenses orotate with PRPP to form orotidine-5'-phosphate. OMP decarboxylase catalyzes the conversion of orotidine-5'-phosphate to UMP. After the uridine nucleotide base is synthesized, the other bases, cytosine and thymine are synthesized. Cytosine biosynthesis is a two-step reaction which involves the conversion of UMP to UTP. Phosphate addition to UMP is catalyzed by a kinase enzyme. The enzyme CTP synthase catalyzes the next reaction step: the conversion of UTP to CTP by transferring an amino group from glutamine to uridine; this forms the cytosine base of CTP. The mechanism, which depicts the reaction UTP + ATP + glutamine ⇔ CTP + ADP + glutamate, is below: Cytosine is a nucleotide that is present in both DNA and RNA. However, uracil is only found in RNA. Therefore, after UTP is synthesized, it is must be converted into a deoxy form to be incorporated into DNA. This conversion involves the enzyme ribonucleoside triphosphate reductase. This reaction that removes the 2'-OH of the ribose sugar to generate deoxyribose is not affected by the bases attached to the sugar. This non-specificity allows ribonucleoside triphosphate reductase to convert all nucleotide triphosphates to deoxyribonucleotide by a similar mechanism. In contrast to uracil, thymine bases are found mostly in DNA, not RNA. Cells do not normally contain thymine bases that are linked to ribose sugars in RNA, thus indicating that cells only synthesize deoxyribose-linked thymine. The enzyme thymidylate synthetase is responsible for synthesizing thymine residues from dUMP to dTMP. This reaction transfers a methyl group onto the uracil base of dUMP to generate dTMP. The thymidylate synthase reaction, dUMP + 5,10-methylenetetrahydrofolate ⇔ dTMP + dihydrofolate, is shown to the right. DNA Although there are differences between eukaryotic and prokaryotic DNA synthesis, the following section denotes key characteristics of DNA replication shared by both organisms. DNA is composed of nucleotides that are joined by phosphodiester bonds. DNA synthesis, which takes place in the nucleus, is a semiconservative process, which means that the resulting DNA molecule contains an original strand from the parent structure and a new strand. DNA synthesis is catalyzed by a family of DNA polymerases that require four deoxynucleoside triphosphates, a template strand, and a primer with a free 3'OH in which to incorporate nucleotides. In order for DNA replication to occur, a replication fork is created by enzymes called helicases which unwind the DNA helix. 
Topoisomerases at the replication fork remove supercoils caused by DNA unwinding, and single-stranded DNA binding proteins maintain the two single-stranded DNA templates stabilized prior to replication. DNA synthesis is initiated by the RNA polymerase primase, which makes an RNA primer with a free 3'OH. This primer is attached to the single-stranded DNA template, and DNA polymerase elongates the chain by incorporating nucleotides; DNA polymerase also proofreads the newly synthesized DNA strand. During the polymerization reaction catalyzed by DNA polymerase, a nucleophilic attack occurs by the 3'OH of the growing chain on the innermost phosphorus atom of a deoxynucleoside triphosphate; this yields the formation of a phosphodiester bridge that attaches a new nucleotide and releases pyrophosphate. Two types of strands are created simultaneously during replication: the leading strand, which is synthesized continuously and grows towards the replication fork, and the lagging strand, which is made discontinuously in Okazaki fragments and grows away from the replication fork. Okazaki fragments are covalently joined by DNA ligase to form a continuous strand. Then, to complete DNA replication, RNA primers are removed, and the resulting gaps are replaced with DNA and joined via DNA ligase. Amino acids A protein is a polymer that is composed from amino acids that are linked by peptide bonds. There are more than 300 amino acids found in nature of which only twenty two, known as the proteinogenic amino acids, are the building blocks for protein. Only green plants and most microbes are able to synthesize all of the 20 standard amino acids that are needed by all living species. Mammals can only synthesize ten of the twenty standard amino acids. The other amino acids, valine, methionine, leucine, isoleucine, phenylalanine, lysine, threonine and tryptophan for adults and histidine, and arginine for babies are obtained through diet. Amino acid basic structure The general structure of the standard amino acids includes a primary amino group, a carboxyl group and the functional group attached to the α-carbon. The different amino acids are identified by the functional group. As a result of the three different groups attached to the α-carbon, amino acids are asymmetrical molecules. For all standard amino acids, except glycine, the α-carbon is a chiral center. In the case of glycine, the α-carbon has two hydrogen atoms, thus adding symmetry to this molecule. With the exception of proline, all of the amino acids found in life have the L-isoform conformation. Proline has a functional group on the α-carbon that forms a ring with the amino group. Nitrogen source One major step in amino acid biosynthesis involves incorporating a nitrogen group onto the α-carbon. In cells, there are two major pathways of incorporating nitrogen groups. One pathway involves the enzyme glutamine oxoglutarate aminotransferase (GOGAT) which removes the amide amino group of glutamine and transfers it onto 2-oxoglutarate, producing two glutamate molecules. In this catalysis reaction, glutamine serves as the nitrogen source. An image illustrating this reaction is found to the right. The other pathway for incorporating nitrogen onto the α-carbon of amino acids involves the enzyme glutamate dehydrogenase (GDH). GDH is able to transfer ammonia onto 2-oxoglutarate and form glutamate. Furthermore, the enzyme glutamine synthetase (GS) is able to transfer ammonia onto glutamate and synthesize glutamine, replenishing glutamine. 
The glutamate family of amino acids
The glutamate family of amino acids includes the amino acids that derive from the amino acid glutamate. This family includes: glutamate, glutamine, proline, and arginine. This family also includes the amino acid lysine, which is derived from α-ketoglutarate. The biosynthesis of glutamate and glutamine is a key step in the nitrogen assimilation discussed above. The enzymes GOGAT and GDH catalyze the nitrogen assimilation reactions. In bacteria, the enzyme glutamate 5-kinase initiates the biosynthesis of proline by transferring a phosphate group from ATP onto glutamate. The next reaction is catalyzed by the enzyme pyrroline-5-carboxylate synthase (P5CS), which catalyzes the reduction of the γ-carboxyl group of L-glutamate 5-phosphate. This results in the formation of glutamate semialdehyde, which spontaneously cyclizes to pyrroline-5-carboxylate. Pyrroline-5-carboxylate is further reduced by the enzyme pyrroline-5-carboxylate reductase (P5CR) to yield proline. In the first step of arginine biosynthesis in bacteria, glutamate is acetylated by transferring the acetyl group from acetyl-CoA at the N-α position; this prevents spontaneous cyclization. The enzyme N-acetylglutamate synthase (glutamate N-acetyltransferase) is responsible for catalyzing the acetylation step. Subsequent steps are catalyzed by the enzymes N-acetylglutamate kinase, N-acetyl-gamma-glutamyl-phosphate reductase, and acetylornithine/succinyldiaminopimelate aminotransferase, and yield N-acetyl-L-ornithine. The acetyl group of acetylornithine is removed by the enzyme acetylornithinase (AO) or ornithine acetyltransferase (OAT), and this yields ornithine. Ornithine is then converted to arginine via the intermediates citrulline and argininosuccinate. There are two distinct lysine biosynthetic pathways: the diaminopimelic acid pathway and the α-aminoadipate pathway. The more common of the two synthetic pathways is the diaminopimelic acid pathway; it consists of several enzymatic reactions that add carbon groups to aspartate to yield lysine: Aspartate kinase initiates the diaminopimelic acid pathway by phosphorylating aspartate and producing aspartyl phosphate. Aspartate semialdehyde dehydrogenase catalyzes the NADPH-dependent reduction of aspartyl phosphate to yield aspartate semialdehyde. 4-hydroxy-tetrahydrodipicolinate synthase adds a pyruvate group to the β-aspartyl-4-semialdehyde, and a water molecule is removed. This causes cyclization and gives rise to (2S,4S)-4-hydroxy-2,3,4,5-tetrahydrodipicolinate. 4-hydroxy-tetrahydrodipicolinate reductase catalyzes the reduction of (2S,4S)-4-hydroxy-2,3,4,5-tetrahydrodipicolinate by NADPH to yield Δ'-piperideine-2,6-dicarboxylate (2,3,4,5-tetrahydrodipicolinate) and H2O. Tetrahydrodipicolinate acyltransferase catalyzes the acetylation reaction that results in ring opening and yields N-acetyl α-amino-ε-ketopimelate. N-succinyl-α-amino-ε-ketopimelate-glutamate aminotransaminase catalyzes the transamination reaction that removes the keto group of N-acetyl α-amino-ε-ketopimelate and replaces it with an amino group to yield N-succinyl-L-diaminopimelate. N-acyldiaminopimelate deacylase catalyzes the deacylation of N-succinyl-L-diaminopimelate to yield L,L-diaminopimelate. DAP epimerase catalyzes the conversion of L,L-diaminopimelate to meso-diaminopimelate. DAP decarboxylase catalyzes the removal of the carboxyl group, yielding L-lysine.
The serine family of amino acids
The serine family of amino acids includes: serine, cysteine, and glycine. Most microorganisms and plants obtain the sulfur for synthesizing methionine from the amino acid cysteine. Furthermore, the conversion of serine to glycine provides the carbons needed for the biosynthesis of methionine and histidine. During serine biosynthesis, the enzyme phosphoglycerate dehydrogenase catalyzes the initial reaction that oxidizes 3-phospho-D-glycerate to yield 3-phosphonooxypyruvate. The following reaction is catalyzed by the enzyme phosphoserine aminotransferase, which transfers an amino group from glutamate onto 3-phosphonooxypyruvate to yield L-phosphoserine. The final step is catalyzed by the enzyme phosphoserine phosphatase, which dephosphorylates L-phosphoserine to yield L-serine. There are two known pathways for the biosynthesis of glycine. Organisms that use ethanol and acetate as the major carbon source utilize the gluconeogenic pathway to synthesize glycine. The other pathway of glycine biosynthesis is known as the glycolytic pathway. This pathway converts serine synthesized from the intermediates of glycolysis to glycine. In the glycolytic pathway, the enzyme serine hydroxymethyltransferase catalyzes the cleavage of serine to yield glycine and transfers the cleaved carbon group of serine onto tetrahydrofolate, forming 5,10-methylene-tetrahydrofolate. Cysteine biosynthesis is a two-step reaction that involves the incorporation of inorganic sulfur. In microorganisms and plants, the enzyme serine acetyltransferase catalyzes the transfer of an acetyl group from acetyl-CoA onto L-serine to yield O-acetyl-L-serine. The following reaction step, catalyzed by the enzyme O-acetyl serine (thiol) lyase, replaces the acetyl group of O-acetyl-L-serine with sulfide to yield cysteine.
The aspartate family of amino acids
The aspartate family of amino acids includes: threonine, lysine, methionine, isoleucine, and aspartate. Lysine and isoleucine are considered part of the aspartate family even though part of their carbon skeleton is derived from pyruvate. In the case of methionine, the methyl carbon is derived from serine, while the sulfur group, in most organisms, is derived from cysteine. The biosynthesis of aspartate is a one-step reaction that is catalyzed by a single enzyme. The enzyme aspartate aminotransferase catalyzes the transfer of an amino group from glutamate onto oxaloacetate to yield α-ketoglutarate and aspartate. Asparagine is synthesized by an ATP-dependent addition of an amino group onto aspartate; asparagine synthetase catalyzes the addition of nitrogen from glutamine or soluble ammonia to aspartate to yield asparagine. The diaminopimelic acid biosynthetic pathway of lysine belongs to the aspartate family of amino acids. This pathway involves nine enzyme-catalyzed reactions that convert aspartate to lysine. Aspartate kinase catalyzes the initial step in the diaminopimelic acid pathway by transferring a phosphoryl group from ATP onto the carboxylate group of aspartate, which yields aspartyl-β-phosphate. Aspartate-semialdehyde dehydrogenase catalyzes the reduction reaction, with dephosphorylation, of aspartyl-β-phosphate to yield aspartate-β-semialdehyde. Dihydrodipicolinate synthase catalyzes the condensation reaction of aspartate-β-semialdehyde with pyruvate to yield dihydrodipicolinic acid. 4-hydroxy-tetrahydrodipicolinate reductase catalyzes the reduction of dihydrodipicolinic acid to yield tetrahydrodipicolinic acid.
Tetrahydrodipicolinate N-succinyltransferase catalyzes the transfer of a succinyl group from succinyl-CoA onto tetrahydrodipicolinic acid to yield N-succinyl-L-2,6-diaminoheptanedioate. N-succinyldiaminopimelate aminotransferase catalyzes the transfer of an amino group from glutamate onto N-succinyl-L-2,6-diaminoheptanedioate to yield N-succinyl-L,L-diaminopimelic acid. Succinyl-diaminopimelate desuccinylase catalyzes the removal of the acyl group from N-succinyl-L,L-diaminopimelic acid to yield L,L-diaminopimelic acid. Diaminopimelate epimerase catalyzes the inversion of the α-carbon of L,L-diaminopimelic acid to yield meso-diaminopimelic acid. Diaminopimelate decarboxylase catalyzes the final step in lysine biosynthesis, which removes the carbon dioxide group from meso-diaminopimelic acid to yield L-lysine.
Proteins
Protein synthesis occurs via a process called translation. During translation, genetic material called mRNA is read by ribosomes to generate a protein polypeptide chain. This process requires transfer RNA (tRNA) which serves as an adaptor by binding amino acids on one end and interacting with mRNA at the other end; the latter pairing between the tRNA and mRNA ensures that the correct amino acid is added to the chain. Protein synthesis occurs in three phases: initiation, elongation, and termination. Prokaryotic (archaeal and bacterial) translation differs from eukaryotic translation; however, this section will mostly focus on the commonalities between the two groups.
Additional background
Before translation can begin, the process of binding a specific amino acid to its corresponding tRNA must occur. This reaction, called tRNA charging, is catalyzed by aminoacyl tRNA synthetase. A specific tRNA synthetase is responsible for recognizing and charging a particular amino acid. Furthermore, this enzyme has special discriminator regions to ensure the correct binding between tRNA and its cognate amino acid. The first step for joining an amino acid to its corresponding tRNA is the formation of aminoacyl-AMP: Amino acid + ATP <=> aminoacyl-AMP + PP_i. This is followed by the transfer of the aminoacyl group from aminoacyl-AMP to a tRNA molecule. The resulting molecule is aminoacyl-tRNA: Aminoacyl-AMP + tRNA <=> aminoacyl-tRNA + AMP. The combination of these two steps, both of which are catalyzed by aminoacyl tRNA synthetase, produces a charged tRNA that is ready to add amino acids to the growing polypeptide chain. In addition to binding an amino acid, tRNA has a three-nucleotide unit called an anticodon that base pairs with specific nucleotide triplets on the mRNA called codons; each codon encodes a specific amino acid. This interaction is possible thanks to the ribosome, which serves as the site for protein synthesis. The ribosome possesses three tRNA binding sites: the aminoacyl site (A site), the peptidyl site (P site), and the exit site (E site). There are numerous codons within an mRNA transcript, and it is very common for an amino acid to be specified by more than one codon; this phenomenon is called degeneracy. In all, there are 64 codons, 61 of which code for one of the 20 amino acids, while the remaining codons specify chain termination.
Translation in steps
As previously mentioned, translation occurs in three phases: initiation, elongation, and termination.
Step 1: Initiation
The completion of the initiation phase is dependent on the following three events: 1. The recruitment of the ribosome to mRNA 2. The binding of a charged initiator tRNA into the P site of the ribosome 3.
The proper alignment of the ribosome with mRNA's start codon
Step 2: Elongation
Following initiation, the polypeptide chain is extended via anticodon:codon interactions, with the ribosome adding amino acids to the polypeptide chain one at a time. The following steps must occur to ensure the correct addition of amino acids: 1. The binding of the correct tRNA into the A site of the ribosome 2. The formation of a peptide bond between the tRNA in the A site and the polypeptide chain attached to the tRNA in the P site 3. Translocation or advancement of the tRNA-mRNA complex by three nucleotides. Translocation "kicks off" the tRNA at the E site and shifts the tRNA from the A site into the P site, leaving the A site free for an incoming tRNA to add another amino acid.
Step 3: Termination
The last stage of translation occurs when a stop codon enters the A site. Then, the following steps occur: 1. The recognition of codons by release factors, which causes the hydrolysis of the polypeptide chain from the tRNA located in the P site 2. The release of the polypeptide chain 3. The dissociation and "recycling" of the ribosome for future translation processes.
Diseases associated with macromolecule deficiency
Errors in biosynthetic pathways can have deleterious consequences, including the malformation of macromolecules or the underproduction of functional molecules. Below are examples that illustrate the disruptions that occur due to these inefficiencies. Familial hypercholesterolemia: this disorder is characterized by the absence of functional receptors for LDL. Deficiencies in the formation of LDL receptors may cause faulty receptors which disrupt the endocytic pathway, inhibiting the entry of LDL into the liver and other cells. This causes a buildup of LDL in the blood plasma, which results in atherosclerotic plaques that narrow arteries and increase the risk of heart attacks. Lesch–Nyhan syndrome: this genetic disease is characterized by self-mutilation, mental deficiency, and gout. It is caused by the absence of hypoxanthine-guanine phosphoribosyltransferase, which is a necessary enzyme for purine nucleotide formation. The lack of this enzyme reduces the level of necessary nucleotides and causes the accumulation of biosynthesis intermediates, which results in the aforementioned unusual behavior. Severe combined immunodeficiency (SCID): SCID is characterized by a loss of T cells. Shortage of these immune system components increases the susceptibility to infectious agents because the affected individuals cannot develop immunological memory. This immunological disorder results from a deficiency in adenosine deaminase activity, which causes a buildup of dATP. These dATP molecules then inhibit ribonucleotide reductase, which prevents DNA synthesis. Huntington's disease: this neurological disease is caused by errors that occur during DNA synthesis. These errors or mutations lead to the expression of a mutant huntingtin protein, which contains repetitive glutamine residues that are encoded by expanding CAG trinucleotide repeats in the gene. Huntington's disease is characterized by neuronal loss and gliosis. Symptoms of the disease include movement disorders, cognitive decline, and behavioral disorders.
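The degeneracy of the genetic code mentioned in the translation section above can be illustrated with a short sketch. The snippet below enumerates all 64 possible RNA codons and separates the three stop codons (UAA, UAG, UGA) from the 61 sense codons that specify amino acids; the leucine codons listed at the end are taken from the standard genetic code.

from itertools import product

# Sketch: degeneracy of the genetic code.  Four bases give 4**3 = 64 codons;
# 3 of them (UAA, UAG, UGA) signal termination, leaving 61 codons for only
# 20 amino acids, so most amino acids are encoded by more than one codon.

BASES = "UCAG"
STOP_CODONS = {"UAA", "UAG", "UGA"}

all_codons = ["".join(triplet) for triplet in product(BASES, repeat=3)]
sense_codons = [c for c in all_codons if c not in STOP_CODONS]

print(len(all_codons))    # 64
print(len(sense_codons))  # 61

# Example of degeneracy: the six leucine codons of the standard genetic code.
leucine_codons = {"UUA", "UUG", "CUU", "CUC", "CUA", "CUG"}
print(sorted(leucine_codons))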
Biology and health sciences
Metabolic processes
Biology
2385158
https://en.wikipedia.org/wiki/Glass%20frog
Glass frog
The glass frogs belong to the amphibian family Centrolenidae (order Anura), native to the rainforests of Central and South America. The general background coloration of most glass frogs is primarily lime green, but the abdominal skin of some members of this family is transparent or translucent, giving the glass frog its common name. The internal viscera, including the heart, liver, and gastrointestinal tract, are visible through the skin. When active, their blood makes them visible; when sleeping, most of the blood is concealed in the liver, hiding them. Glass frogs are arboreal, living mainly in trees, feeding on small insects and coming out only for the mating season. Their transparency conceals them very effectively when sleeping on a green leaf, as they habitually do. However, climate change and habitat fragmentation have been threatening the survival of the family. Taxonomy The first described species of Centrolenidae was the "giant" Centrolene geckoideum, named by Marcos Jiménez de la Espada in 1872, based on a specimen collected in northeastern Ecuador. Several species were described in subsequent years by different herpetologists (including G. A. Boulenger, G. K. Noble, and E. H. Taylor), but usually placed together with the tree frogs in the genera Hylella or Hyla. The family Centrolenidae was proposed by Edward H. Taylor in 1945. Between the 1950s and 1970s, most species of glass frogs were known from Central America, particularly from Costa Rica and Panama, where Taylor, Julia F., and Jay M. Savage worked extensively, and just a few species were known to occur in South America. In 1973, John D. Lynch and William E. Duellman published a large revision of the glass frogs from Ecuador, showing that the species richness of Centrolenidae was particularly concentrated in the Andes. Later contributions by authors such as Juan Rivero, Savage, William Duellman, John D. Lynch, Pedro Ruiz-Carranza, and José Ayarzagüena increased the number of described taxa, especially from Central America, Venezuela, Colombia, Ecuador, and Peru. The evolutionary relationships, biogeography, and character evolution of Centrolenidae were discussed by Guayasamin et al. (2008). Glass frogs originated in South America and dispersed multiple times into Central America. Character evolution seems to be complex, with multiple gains and/or losses of humeral spines, reduced hand webbing, and complete ventral transparency. Research by Santiago (2009) on the evolution and speciation of glass frogs, based on comparisons of their mitochondrial DNA, has shown that ecological gradients and isolation play a role in speciation and divergence. Glass frogs have expanded from the Guiana Shield to other rainforests and diversified further, evolving to survive in and fit their new environments. The taxonomic classification of the glass frogs has been problematic. In 1991, after a major revision of the species and taxonomic characters, the herpetologists Pedro Ruiz-Carranza and John D. Lynch published a proposal for a taxonomic classification of the Centrolenidae based on cladistic principles and defining monophyletic groups. That paper was the first of a series of contributions dealing with the glass frogs from Colombia that led them to describe almost 50 species of glass frogs. The genus Centrolene was proposed to include the species with a humeral spine in adult males, and the genus Hyalinobatrachium to include the species with a bulbous liver.
However, they left a heterogeneous group of species in the genus Cochranella, defined just by lacking a humeral spine and a bulbous liver. Since the publication of the extensive revision of the Colombian glass frogs, several other publications have dealt with the glass frogs from Venezuela, Costa Rica, and Ecuador. In 2006, the genus Nymphargus was erected for the species with basal webbing among outer fingers (part of the previous Cochranella ocellata species group). Four genera (Centrolene, Cochranella, Hyalinobatrachium, Nymphargus) have been shown to be poly- or paraphyletic and recently a new taxonomy has been proposed (see below). Classification The family Centrolenidae is a clade of anurans. Previously, the family was considered closely related to the family Hylidae; however, recent phylogenetic studies have placed them (and their sister taxon, the family Allophrynidae) closer to the family Leptodactylidae. The monophyly of Centrolenidae is supported by morphological and behavioral characters, including: 1) presence of a dilated process on the medial side of the third metacarpal (an apparently unique synapomorphy); 2) ventral origin of the musculus flexor teres digiti III relative to the musculus transversi metacarpi I; 3) terminal phalanges T-shaped; 4) exotroph, lotic, burrower/fossorial tadpoles with a vermiform body and dorsal C-shaped eyes, that live buried within leaf packs in still or flowing water systems; and 5) eggs clutches deposited outside of water on vegetation or rocks above still or flowing water systems. Several molecular synapomorphies also support the monophyly of the clade. The taxonomic classification of the Centrolenidae was recently modified. The family now contains two subfamilies and 12 genera. Genera Subfamily Centroleninae Genus Centrolene Jiménez de la Espada, 1872 Genus Chimerella Guayasamin, Castroviejo, Trueb, Ayarzagüena, Rada, Vilá, 2009 Genus Cochranella Taylor, 1951 Genus Espadarana Guayasamin, Castroviejo, Trueb, Ayarzagüena, Rada, Vilá, 2009 Genus Nymphargus Cisneros-Heredia & McDiarmid, 2007 Genus Rulyrana Guayasamin, Castroviejo, Trueb, Ayarzagüena, Rada, Vilá, 2009 Genus Sachatamia Guayasamin, Castroviejo, Trueb, Ayarzagüena, Rada, Vilá, 2009 Genus Teratohyla Taylor, 1951 Genus Vitreorana Guayasamin, Castroviejo, Trueb, Ayarzagüena, Rada, Vilá, 2009 Genus incertae sedis "Centrolene" acanthidiocephalum (Ruiz-Carranza and Lynch, 1989) "Centrolene" azulae (Flores and McDiarmid, 1989) "Centrolene" guanacarum Ruiz-Carranza and Lynch, 1995 "Centrolene" medemi (Cochran and Goin, 1970) "Centrolene" petrophilum Ruiz-Carranza and Lynch, 1991 "Centrolene" quindianum Ruiz-Carranza and Lynch, 1995 "Centrolene" robledoi Ruiz-Carranza and Lynch, 1995 "Cochranella" duidaeana (Ayarzagüena, 1992) "Cochranella" euhystrix (Cadle and McDiarmid, 1990) "Cochranella" geijskesi (Goin, 1966) "Cochranella" megista (Rivero, 1985) "Cochranella" ramirezi Ruiz-Carranza and Lynch, 1991 "Cochranella" riveroi (Ayarzagüena, 1992) "Cochranella" xanthocheridia Ruiz-Carranza and Lynch, 1995 Subfamily Hyalinobatrachinae Genus Celsiella Guayasamin, Castroviejo, Trueb, Ayarzagüena, Rada, Vilá, 2009 Genus Hyalinobatrachium Ruiz-Carranza & Lynch, 1991 – "True" Glass Frogs Subfamily incertae sedis Ikakogi Guayasamin, Castroviejo, Trueb, Ayarzagüena, Rada, Vilá, 2009 Camouflage The evolutionary advantage of a partly clear skin and an opaque back was a mystery, as it did not seem to be effective as camouflage. 
It was found that the colour of the frog's body changed little against darker or lighter foliage, but the legs were more translucent and consequently changed in brightness. By resting with the translucent legs surrounding the body, the frog's edge appears softer, with less of a brightness gradient from the leaf to the legs and from the legs to the body, making the outline less noticeable. This camouflage phenomenon, in which the frog's edges are softened to match the relative brightness of its surroundings, is referred to as edge diffusion. Herpetology researchers studying the pros and cons of transparency in glass frogs have established that this transparency offers more concealment than the ordinary color changes the skin itself can produce with its limited pigments. Experiments with computer-generated images and gelatine models of opaque and translucent frogs found that the translucent frogs were less visible, and were attacked by birds significantly less often. Photographs of the frogs were taken both at night and during the day; the results showed little to no visibility of the frogs on leaves at either time. It was found in 2022 that these frogs have the ability to conceal red blood cells by concentrating them inside their livers, increasing transparency when they are vulnerable. While this would cause massive clotting in most animals (including humans), glass frogs are able to regulate the location, density, and packing of red cells without clotting. The findings could advance medical understanding of dangerous blood clotting. Characteristics Glass frogs are generally small, ranging from in length. They appear light green in color over most of their bodies, except for the skin along the lower surface of the body and legs, which is transparent or translucent. The glass frog's transparent skin allows an external view of the viscera (the internal organs present in the body's main cavity), so that observers can watch the frog's internal processes, such as the heart beating and pumping blood through its arteries. Patterning of glass frogs varies among species: while some appear uniformly green, others display spots that range from yellow to white, mimicking the coloration of their eggs. Their expanded digit tips allow them to climb, enabling most species to live in elevated vegetation such as trees and shrubs along forest streams. Glass frogs are similar in appearance to some green frogs of the genus Eleutherodactylus and to some tree frogs of the family Hylidae. However, hylid tree frogs have eyes that face to the side, whilst those of glass frogs face forward. Two members of the glass-frog family Centrolenidae (Centrolenella fleischmanni, now called Hyalinobatrachium fleischmanni, and C. prosoblepon) and two of the hylid subfamily Phyllomedusinae (Agalychnis moreletii and Pachymedusa dacnicolor) reflect near-infrared light (700 to 900 nanometers) when examined by infrared color photography. Infrared reflectance may confer an adaptive advantage to these arboreal frogs in both thermoregulation and infrared cryptic coloration. An endangered species of glass frog found in Peru was compared with N. mixomaculatus, and the following differences were recorded: no humeral spine, no webbed fingers between II and III, finger I shorter than II, no vomerine teeth, no ulnar and tarsal tubercles or folds, no white pigment in the visceral or hepatic peritonea, and differing coloration and spots.
Lifecycle Mating Mating begins with the call of a male frog, which is perched on either the underside or the top of a leaf above the edge of a lake or a stream. Once a female has responded to the male's call, mating begins on the leaf in the amplexus position, in which the male wraps his arms around the female and attaches himself to her back. Once the physical mating process has concluded, the female deposits her eggs onto the leaf before departing, leaving the male to defend the newly laid eggs against predators. Centrolenidae is a family with long-term parental care; males guard the clutch for several days after the eggs are laid. Environmental factors such as rainfall or wind also affect how long the male glass frog tends the young. Female post-oviposition care most often depends on body condition: whether or not she is able to fend for herself determines how long she remains by the clutch after her eggs are laid. Males will occasionally call for and mate with other females on the same leaf, accumulating several egg clutches at different developmental stages to guard. Tadpoles Once the tadpoles, the frog's aquatic larval stage, have hatched, they fall from their original position on the leaf into the water below. While living in the water, the tadpoles feed on leaf litter and streamside detritus until undergoing metamorphosis to become froglets. Conservation Predators Major predators of glass frog clutches are "frog flies", which lay their eggs within the frog eggs; after hatching, the maggots feed on the glass frog embryos. Glass frog behaviors to avoid predation vary from species to species as well as with circumstances. Hyalinobatrachium iaspidiense has been observed adopting a flattened body posture to avoid predation; when disturbed, the frog propped itself up into a sitting position. Another male H. iaspidiense was observed protecting an egg clutch by extending all limbs and lifting its body from the leaf. Protection All glass frogs are protected under the Convention on International Trade in Endangered Species (CITES), meaning that international trade (including in parts and derivatives) is regulated by the CITES permitting system. Distribution The Centrolenidae are a diverse family, distributed from southern Mexico to Panama, and through the Andes from Venezuela and the island of Tobago to Bolivia, with some species in the Amazon and Orinoco River basins, the Guiana Shield region, southeastern Brazil, and northern Argentina. Their biggest threats are deforestation, invasive species, pollution, habitat loss, and the illegal pet trade. These many threats have led to population declines across the family. Biology Glass frogs are mostly arboreal. They live along rivers and streams during the breeding season, and are particularly diverse in the montane cloud forests of Central and South America, although some species occur also in Amazonian and Chocóan rainforests and semideciduous forests. Hyalinobatrachium valerioi glass frogs are carnivores; their diet mainly includes small insects such as crickets, moths and flies, as well as spiders and other, smaller frogs. The eggs are usually deposited on the leaves of trees or shrubs hanging over the running water of mountain streams, creeks, and small rivers. One species leaves its eggs over stones close to waterfalls. The method of egg-laying on the leaf varies between species. The males usually call from leaves close to their egg clutches.
These eggs are less vulnerable to predators than those laid within water, but they are affected by the parasitic maggots of some fly species. Some glass frogs show parental care: in many species, glass frog females brood their eggs on the night they are fertilized, which improves the eggs' survival, while in almost a third of species, glass frog males stay on guard for much longer periods. After they hatch, the tadpoles fall into the waters below. The tadpoles are elongated, with powerful tails and low fins, suited for fast-flowing water. Outside of the breeding season, some species live in the canopy. The majority of amphibians use cutaneous respiration, the process of breathing through the skin. Because of the skin's importance, amphibians are very sensitive to what passes through their permeable skin; the stratum corneum, the main skin barrier, is much thinner in amphibians than in other classes such as mammals or birds. Chemicals and high concentrations of elements in water or rainfall may therefore harm the frogs' health and possibly their lives.
Biology and health sciences
Frogs and toads
Animals
19021953
https://en.wikipedia.org/wiki/Fermat%27s%20Last%20Theorem
Fermat's Last Theorem
In number theory, Fermat's Last Theorem (sometimes called Fermat's conjecture, especially in older texts) states that no three positive integers a, b, and c satisfy the equation a^n + b^n = c^n for any integer value of n greater than 2. The cases n = 1 and n = 2 have been known since antiquity to have infinitely many solutions. The proposition was first stated as a theorem by Pierre de Fermat around 1637 in the margin of a copy of Arithmetica. Fermat added that he had a proof that was too large to fit in the margin. Although other statements claimed by Fermat without proof were subsequently proven by others and credited as theorems of Fermat (for example, Fermat's theorem on sums of two squares), Fermat's Last Theorem resisted proof, leading to doubt that Fermat ever had a correct proof. Consequently, the proposition became known as a conjecture rather than a theorem. After 358 years of effort by mathematicians, the first successful proof was released in 1994 by Andrew Wiles and formally published in 1995. It was described as a "stunning advance" in the citation for Wiles's Abel Prize award in 2016. It also proved much of the Taniyama–Shimura conjecture, subsequently known as the modularity theorem, and opened up entire new approaches to numerous other problems and mathematically powerful modularity lifting techniques. The unsolved problem stimulated the development of algebraic number theory in the 19th and 20th centuries. It is among the most notable theorems in the history of mathematics and prior to its proof was in the Guinness Book of World Records as the "most difficult mathematical problem", in part because the theorem has the largest number of unsuccessful proofs. Overview Pythagorean origins The Pythagorean equation, x^2 + y^2 = z^2, has an infinite number of positive integer solutions for x, y, and z; these solutions are known as Pythagorean triples (with the simplest example being 3, 4, 5). Around 1637, Fermat wrote in the margin of a book that the more general equation a^n + b^n = c^n had no solutions in positive integers if n is an integer greater than 2. Although he claimed to have a general proof of his conjecture, Fermat left no details of his proof, and none has ever been found. His claim was discovered some 30 years later, after his death. This claim, which came to be known as Fermat's Last Theorem, stood unsolved for the next three and a half centuries. The claim eventually became one of the most notable unsolved problems of mathematics. Attempts to prove it prompted substantial development in number theory, and over time Fermat's Last Theorem gained prominence as an unsolved problem in mathematics. Subsequent developments and solution The special case n = 4, proved by Fermat himself, is sufficient to establish that if the theorem is false for some exponent n that is not a prime number, it must also be false for some smaller n, so only prime values of n need further investigation. Over the next two centuries (1637–1839), the conjecture was proved for only the primes 3, 5, and 7, although Sophie Germain innovated and proved an approach that was relevant to an entire class of primes. In the mid-19th century, Ernst Kummer extended this and proved the theorem for all regular primes, leaving irregular primes to be analyzed individually.
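The content of the theorem can be made concrete with a small computational sketch. The code below is an illustration added here (it is not a historical method and proves nothing): an exhaustive search over small values finds many solutions of the equation for n = 2, the Pythagorean case, and none for n ≥ 3, in line with the statement above. The search bound of 100 is arbitrary.

```python
# Brute-force search for a^n + b^n = c^n with 1 <= a <= b and c <= bound.
# For n = 2 this finds Pythagorean triples; for n >= 3 it finds nothing,
# consistent with Fermat's Last Theorem (an illustration, not a proof).
def fermat_solutions(n, bound=100):
    nth_powers = {c ** n: c for c in range(1, bound + 1)}   # reverse lookup for c
    solutions = []
    for a in range(1, bound + 1):
        for b in range(a, bound + 1):
            c = nth_powers.get(a ** n + b ** n)
            if c is not None:
                solutions.append((a, b, c))
    return solutions

print(fermat_solutions(2)[:3])   # [(3, 4, 5), (5, 12, 13), (6, 8, 10)] and many more
print(fermat_solutions(3))       # [] -- no solutions among cubes
print(fermat_solutions(4))       # [] -- none among fourth powers either
```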
Building on Kummer's work and using sophisticated computer studies, other mathematicians were able to extend the proof to cover all prime exponents up to four million, but a proof for all exponents was inaccessible (meaning that mathematicians generally considered a proof impossible, exceedingly difficult, or unachievable with current knowledge). Separately, around 1955, Japanese mathematicians Goro Shimura and Yutaka Taniyama suspected a link might exist between elliptic curves and modular forms, two completely different areas of mathematics. Known at the time as the Taniyama–Shimura conjecture (eventually as the modularity theorem), it stood on its own, with no apparent connection to Fermat's Last Theorem. It was widely seen as significant and important in its own right, but was (like Fermat's theorem) widely considered completely inaccessible to proof. In 1984, Gerhard Frey noticed an apparent link between these two previously unrelated and unsolved problems. An outline suggesting this could be proved was given by Frey. The full proof that the two problems were closely linked was accomplished in 1986 by Ken Ribet, building on a partial proof by Jean-Pierre Serre, who proved all but one part known as the "epsilon conjecture" (see: Ribet's Theorem and Frey curve). These papers by Frey, Serre and Ribet showed that if the Taniyama–Shimura conjecture could be proven for at least the semi-stable class of elliptic curves, a proof of Fermat's Last Theorem would also follow automatically. The connection is described below: any solution that could contradict Fermat's Last Theorem could also be used to contradict the Taniyama–Shimura conjecture. So if the modularity theorem were found to be true, then by definition, no solution contradicting Fermat's Last Theorem could exist, meaning that Fermat's Last Theorem must also be true. Although both problems were daunting and widely considered to be "completely inaccessible" to proof at the time, this was the first suggestion of a route by which Fermat's Last Theorem could be extended and proved for all numbers, not just some numbers. Unlike Fermat's Last Theorem, the Taniyama–Shimura conjecture was a major active research area and viewed as more within reach of contemporary mathematics. However, general opinion was that this simply showed the impracticality of proving the Taniyama–Shimura conjecture. Mathematician John Coates' quoted reaction was a common one: On hearing that Ribet had proven Frey's link to be correct, English mathematician Andrew Wiles, who had a childhood fascination with Fermat's Last Theorem and had a background of working with elliptic curves and related fields, decided to try to prove the Taniyama–Shimura conjecture as a way to prove Fermat's Last Theorem. In 1993, after six years of working secretly on the problem, Wiles succeeded in proving enough of the conjecture to prove Fermat's Last Theorem. Wiles's paper was massive in size and scope. A flaw was discovered in one part of his original paper during peer review and required a further year and collaboration with a past student, Richard Taylor, to resolve. As a result, the final proof in 1995 was accompanied by a smaller joint paper showing that the fixed steps were valid. Wiles's achievement was reported widely in the popular press, and was popularized in books and television programs. 
The remaining parts of the Taniyama–Shimura–Weil conjecture, now proven and known as the modularity theorem, were subsequently proved by other mathematicians, who built on Wiles's work between 1996 and 2001. For his proof, Wiles was honoured and received numerous awards, including the 2016 Abel Prize. Equivalent statements of the theorem There are several alternative ways to state Fermat's Last Theorem that are mathematically equivalent to the original statement of the problem. In order to state them, we use the following notations: let N be the set of natural numbers 1, 2, 3, ..., let Z be the set of integers 0, ±1, ±2, ..., and let Q be the set of rational numbers a/b, where a and b are in Z with b ≠ 0. In what follows we will call a solution to x^n + y^n = z^n where one or more of x, y, or z is zero a trivial solution. A solution where all three are nonzero will be called a non-trivial solution. For comparison's sake we start with the original formulation. Original statement. With n, x, y, z ∈ N (meaning that n, x, y, z are all positive whole numbers) and n > 2, the equation x^n + y^n = z^n has no solutions. Most popular treatments of the subject state it this way. It is also commonly stated over Z: Equivalent statement 1: x^n + y^n = z^n, where integer n ≥ 3, has no non-trivial solutions x, y, z ∈ Z. The equivalence is clear if n is even. If n is odd and all three of x, y, z are negative, then we can replace x, y, z with -x, -y, -z to obtain a solution in N. If two of them are negative, it must be x and z or y and z. If x, z are negative and y is positive, then we can rearrange to get (-z)^n + y^n = (-x)^n, resulting in a solution in N; the other case is dealt with analogously. Now if just one is negative, it must be x or y. If x is negative, and y and z are positive, then it can be rearranged to get (-x)^n + z^n = y^n, again resulting in a solution in N; if y is negative, the result follows symmetrically. Thus in all cases a nontrivial solution in Z would also mean a solution exists in N, the original formulation of the problem. Equivalent statement 2: x^n + y^n = z^n, where integer n ≥ 3, has no non-trivial solutions x, y, z ∈ Q. This is because the exponents of x, y, and z are equal (to n), so if there is a solution in Q, then it can be multiplied through by an appropriate common denominator to get a solution in Z, and hence in N. Equivalent statement 3: x^n + y^n = 1, where integer n ≥ 3, has no non-trivial solutions x, y ∈ Q. A non-trivial solution a, b, c ∈ Z to x^n + y^n = z^n yields the non-trivial solution a/c, b/c ∈ Q for x^n + y^n = 1. Conversely, a solution a/b, c/d ∈ Q to x^n + y^n = 1 yields the non-trivial solution ad, cb, bd for x^n + y^n = z^n. This last formulation is particularly fruitful, because it reduces the problem from a problem about surfaces in three dimensions to a problem about curves in two dimensions. Furthermore, it allows working over the field Q, rather than over the ring Z; fields exhibit more structure than rings, which allows for deeper analysis of their elements. Equivalent statement 4 – connection to elliptic curves: If a, b, c is a non-trivial solution to a^p + b^p = c^p, p an odd prime, then y^2 = x(x - a^p)(x + b^p) (Frey curve) will be an elliptic curve without a modular form. Examining this elliptic curve with Ribet's theorem shows that it does not have a modular form. However, the proof by Andrew Wiles proves that any equation of the form y^2 = x(x - a^p)(x + b^p) does have a modular form. Any non-trivial solution to a^p + b^p = c^p (with p an odd prime) would therefore create a contradiction, which in turn proves that no non-trivial solutions exist. In other words, any solution that could contradict Fermat's Last Theorem could also be used to contradict the modularity theorem. So if the modularity theorem were found to be true, then it would follow that no contradiction to Fermat's Last Theorem could exist either.
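The passage between the integer and rational formulations above can also be illustrated computationally. The following sketch is an illustration added to this text (the function names are arbitrary): it converts an integer solution of x^n + y^n = z^n into a rational point on the curve x^n + y^n = 1 and back, using the n = 2 case, where such solutions exist; for n ≥ 3 the statements above say there is nothing to convert.

```python
from fractions import Fraction

# Equivalent statement 3: an integer solution (a, b, c) of a^n + b^n = c^n
# corresponds to the rational point (a/c, b/c) on the curve x^n + y^n = 1.
def to_curve_point(a, b, c, n):
    assert a ** n + b ** n == c ** n, "not a solution of the Fermat equation"
    return Fraction(a, c), Fraction(b, c)

# Conversely, clearing denominators turns a rational point (x, y) on
# x^n + y^n = 1 into an integer solution:
# (x.num * y.den)^n + (y.num * x.den)^n = (x.den * y.den)^n.
def to_integer_solution(x, y, n):
    a = x.numerator * y.denominator
    b = y.numerator * x.denominator
    c = x.denominator * y.denominator
    assert a ** n + b ** n == c ** n
    return a, b, c

# For n = 2, the Pythagorean triple (3, 4, 5) gives the point (3/5, 4/5).
point = to_curve_point(3, 4, 5, 2)
print(point)                            # (Fraction(3, 5), Fraction(4, 5))
print(to_integer_solution(*point, 2))   # (15, 20, 25), a rescaling of (3, 4, 5)
```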
As described above, the discovery of this equivalent statement was crucial to the eventual solution of Fermat's Last Theorem, as it provided a means by which it could be "attacked" for all numbers at once. Mathematical history Pythagoras and Diophantus Pythagorean triples In ancient times it was known that a triangle whose sides were in the ratio 3:4:5 would have a right angle as one of its angles. This was used in construction and later in early geometry. It was also known to be one example of a general rule that any triangle where the length of two sides, each squared and then added together , equals the square of the length of the third side , would also be a right angle triangle. This is now known as the Pythagorean theorem, and a triple of numbers that meets this condition is called a Pythagorean triple; both are named after the ancient Greek Pythagoras. Examples include (3, 4, 5) and (5, 12, 13). There are infinitely many such triples, and methods for generating such triples have been studied in many cultures, beginning with the Babylonians and later ancient Greek, Chinese, and Indian mathematicians. Mathematically, the definition of a Pythagorean triple is a set of three integers that satisfy the equation . Diophantine equations Fermat's equation, with positive integer solutions, is an example of a Diophantine equation, named for the 3rd-century Alexandrian mathematician, Diophantus, who studied them and developed methods for the solution of some kinds of Diophantine equations. A typical Diophantine problem is to find two integers x and y such that their sum, and the sum of their squares, equal two given numbers A and B, respectively: Diophantus's major work is the Arithmetica, of which only a portion has survived. Fermat's conjecture of his Last Theorem was inspired while reading a new edition of the Arithmetica, that was translated into Latin and published in 1621 by Claude Bachet. Diophantine equations have been studied for thousands of years. For example, the solutions to the quadratic Diophantine equation are given by the Pythagorean triples, originally solved by the Babylonians (). Solutions to linear Diophantine equations, such as , may be found using the Euclidean algorithm (c. 5th century BC). Many Diophantine equations have a form similar to the equation of Fermat's Last Theorem from the point of view of algebra, in that they have no cross terms mixing two letters, without sharing its particular properties. For example, it is known that there are infinitely many positive integers x, y, and z such that , where n and m are relatively prime natural numbers. Fermat's conjecture Problem II.8 of the asks how a given square number is split into two other squares; in other words, for a given rational number k, find rational numbers u and v such that . Diophantus shows how to solve this sum-of-squares problem for (the solutions being and ). Around 1637, Fermat wrote his Last Theorem in the margin of his copy of the next to Diophantus's sum-of-squares problem: After Fermat's death in 1665, his son Clément-Samuel Fermat produced a new edition of the book (1670) augmented with his father's comments. Although not actually a theorem at the time (meaning a mathematical statement for which proof exists), the marginal note became known over time as Fermat's Last Theorem, as it was the last of Fermat's asserted theorems to remain unproved. It is not known whether Fermat had actually found a valid proof for all exponents n, but it appears unlikely. 
Only one related proof by him has survived, namely for the case n = 4, as described in the section Proofs for specific exponents. While Fermat posed the cases of n = 4 and of n = 3 as challenges to his mathematical correspondents, such as Marin Mersenne, Blaise Pascal, and John Wallis, he never posed the general case. Moreover, in the last thirty years of his life, Fermat never again wrote of his "truly marvelous proof" of the general case, and never published it. Van der Poorten suggests that while the absence of a proof is insignificant, the lack of challenges means Fermat realised he did not have a proof; he quotes Weil as saying Fermat must have briefly deluded himself with an irretrievable idea. The techniques Fermat might have used in such a "marvelous proof" are unknown. Wiles and Taylor's proof relies on 20th-century techniques. Fermat's proof would have had to be elementary by comparison, given the mathematical knowledge of his time. While Harvey Friedman's grand conjecture implies that any provable theorem (including Fermat's last theorem) can be proved using only 'elementary function arithmetic', such a proof need be 'elementary' only in a technical sense and could involve millions of steps, and thus be far too long to have been Fermat's proof. Proofs for specific exponents Exponent = 4 Only one relevant proof by Fermat has survived, in which he uses the technique of infinite descent to show that the area of a right triangle with integer sides can never equal the square of an integer. His proof is equivalent to demonstrating that the equation x^4 - y^4 = z^2 has no primitive solutions in integers (no pairwise coprime solutions). In turn, this proves Fermat's Last Theorem for the case n = 4, since the equation a^4 + b^4 = c^4 can be written as c^4 - b^4 = (a^2)^2. Alternative proofs of the case n = 4 were developed later by Frénicle de Bessy (1676), Leonhard Euler (1738), Kausler (1802), Peter Barlow (1811), Adrien-Marie Legendre (1830), Schopis (1825), Olry Terquem (1846), Joseph Bertrand (1851), Victor Lebesgue (1853, 1859, 1862), Théophile Pépin (1883), Tafelmacher (1893), David Hilbert (1897), Bendz (1901), Gambioli (1901), Leopold Kronecker (1901), Bang (1905), Sommer (1907), Bottari (1908), Karel Rychlík (1910), Nutzhorn (1912), Robert Carmichael (1913), Hancock (1931), Gheorghe Vrănceanu (1966), Grant and Perella (1999), Barbara (2007), and Dolan (2011). Other exponents After Fermat proved the special case n = 4, the general proof for all n required only that the theorem be established for all odd prime exponents. In other words, it was necessary to prove only that the equation a^n + b^n = c^n has no positive integer solutions when n is an odd prime number. This follows because a solution for a given n is equivalent to a solution for all the factors of n. For illustration, let n be factored into d and e, n = de. The general equation a^n + b^n = c^n implies that (a^d, b^d, c^d) is a solution for the exponent e: (a^d)^e + (b^d)^e = (c^d)^e. Thus, to prove that Fermat's equation has no solutions for n > 2, it would suffice to prove that it has no solutions for at least one prime factor of every n. Each integer n > 2 is divisible by 4 or by an odd prime number (or both). Therefore, Fermat's Last Theorem could be proved for all n if it could be proved for n = 4 and for all odd primes p. In the two centuries following its conjecture (1637–1839), Fermat's Last Theorem was proved for three odd prime exponents p = 3, 5 and 7. The case p = 3 was first stated by Abu-Mahmud Khojandi (10th century), but his attempted proof of the theorem was incorrect. In 1770, Leonhard Euler gave a proof of p = 3, but his proof by infinite descent contained a major gap.
However, since Euler himself had proved the lemma necessary to complete the proof in other work, he is generally credited with the first proof. Independent proofs were published by Kausler (1802), Legendre (1823, 1830), Calzolari (1855), Gabriel Lamé (1865), Peter Guthrie Tait (1872), Siegmund Günther (1878), Gambioli (1901), Krey (1909), Rychlík (1910), Stockhaus (1910), Carmichael (1915), Johannes van der Corput (1915), Axel Thue (1917), and Duarte (1944). The case p = 5 was proved independently by Legendre and Peter Gustav Lejeune Dirichlet around 1825. Alternative proofs were developed by Carl Friedrich Gauss (1875, posthumous), Lebesgue (1843), Lamé (1847), Gambioli (1901), Werebrusow (1905), Rychlík (1910), van der Corput (1915), and Guy Terjanian (1987). The case p = 7 was proved by Lamé in 1839. His rather complicated proof was simplified in 1840 by Lebesgue, and still simpler proofs were published by Angelo Genocchi in 1864, 1874 and 1876. Alternative proofs were developed by Théophile Pépin (1876) and Edmond Maillet (1897). Fermat's Last Theorem was also proved for the exponents n = 6, 10, and 14. Proofs for n = 6 were published by Kausler, Thue, Tafelmacher, Lind, Kapferer, Swift, and Breusch. Similarly, Dirichlet and Terjanian each proved the case n = 14, while Kapferer and Breusch each proved the case n = 10. Strictly speaking, these proofs are unnecessary, since these cases follow from the proofs for n = 3, 5, and 7, respectively. Nevertheless, the reasoning of these even-exponent proofs differs from their odd-exponent counterparts. Dirichlet's proof for n = 14 was published in 1832, before Lamé's 1839 proof for n = 7. All proofs for specific exponents used Fermat's technique of infinite descent, either in its original form, or in the form of descent on elliptic curves or abelian varieties. The details and auxiliary arguments, however, were often ad hoc and tied to the individual exponent under consideration. Since they became ever more complicated as p increased, it seemed unlikely that the general case of Fermat's Last Theorem could be proved by building upon the proofs for individual exponents. Although some general results on Fermat's Last Theorem were published in the early 19th century by Niels Henrik Abel and Peter Barlow, the first significant work on the general theorem was done by Sophie Germain. Early modern breakthroughs Sophie Germain In the early 19th century, Sophie Germain developed several novel approaches to prove Fermat's Last Theorem for all exponents. First, she defined a set of auxiliary primes θ constructed from the prime exponent p by the equation θ = 2hp + 1, where h is any integer not divisible by three. She showed that, if no integers raised to the pth power were adjacent modulo θ (the non-consecutivity condition), then θ must divide the product xyz. Her goal was to use mathematical induction to prove that, for any given p, infinitely many auxiliary primes θ satisfied the non-consecutivity condition and thus divided xyz; since the product xyz can have at most a finite number of prime factors, such a proof would have established Fermat's Last Theorem. Although she developed many techniques for establishing the non-consecutivity condition, she did not succeed in her strategic goal. She also worked to set lower limits on the size of solutions to Fermat's equation for a given exponent p, a modified version of which was published by Adrien-Marie Legendre.
As a byproduct of this latter work, she proved Sophie Germain's theorem, which verified the first case of Fermat's Last Theorem (namely, the case in which p does not divide xyz) for every odd prime exponent less than 270, and for all primes p such that at least one of , , , , and is prime (specially, the primes p such that is prime are called Sophie Germain primes). Germain tried unsuccessfully to prove the first case of Fermat's Last Theorem for all even exponents, specifically for , which was proved by Guy Terjanian in 1977. In 1985, Leonard Adleman, Roger Heath-Brown and Étienne Fouvry proved that the first case of Fermat's Last Theorem holds for infinitely many odd primes p. Ernst Kummer and the theory of ideals In 1847, Gabriel Lamé outlined a proof of Fermat's Last Theorem based on factoring the equation in complex numbers, specifically the cyclotomic field based on the roots of the number 1. His proof failed, however, because it assumed incorrectly that such complex numbers can be factored uniquely into primes, similar to integers. This gap was pointed out immediately by Joseph Liouville, who later read a paper that demonstrated this failure of unique factorisation, written by Ernst Kummer. Kummer set himself the task of determining whether the cyclotomic field could be generalized to include new prime numbers such that unique factorisation was restored. He succeeded in that task by developing the ideal numbers. (It is often stated that Kummer was led to his "ideal complex numbers" by his interest in Fermat's Last Theorem; there is even a story often told that Kummer, like Lamé, believed he had proven Fermat's Last Theorem until Lejeune Dirichlet told him his argument relied on unique factorization; but the story was first told by Kurt Hensel in 1910 and the evidence indicates it likely derives from a confusion by one of Hensel's sources. Harold Edwards said the belief that Kummer was mainly interested in Fermat's Last Theorem "is surely mistaken". See the history of ideal numbers.) Using the general approach outlined by Lamé, Kummer proved both cases of Fermat's Last Theorem for all regular prime numbers. However, he could not prove the theorem for the exceptional primes (irregular primes) that conjecturally occur approximately 39% of the time; the only irregular primes below 270 are 37, 59, 67, 101, 103, 131, 149, 157, 233, 257 and 263. Mordell conjecture In the 1920s, Louis Mordell posed a conjecture that implied that Fermat's equation has at most a finite number of nontrivial primitive integer solutions, if the exponent n is greater than two. This conjecture was proved in 1983 by Gerd Faltings, and is now known as Faltings's theorem. Computational studies In the latter half of the 20th century, computational methods were used to extend Kummer's approach to the irregular primes. In 1954, Harry Vandiver used a SWAC computer to prove Fermat's Last Theorem for all primes up to 2521. By 1978, Samuel Wagstaff had extended this to all primes less than 125,000. By 1993, Fermat's Last Theorem had been proved for all primes less than four million. However, despite these efforts and their results, no proof existed of Fermat's Last Theorem. Proofs of individual exponents by their nature could never prove the general case: even if all exponents were verified up to an extremely large number X, a higher exponent beyond X might still exist for which the claim was not true. 
(This had been the case with some other past conjectures, such as with Skewes' number, and it could not be ruled out in this conjecture.) Connection with elliptic curves The strategy that ultimately led to a successful proof of Fermat's Last Theorem arose from the "astounding" Taniyama–Shimura–Weil conjecture, proposed around 1955, which many mathematicians believed would be nearly impossible to prove, and which was linked in the 1980s by Gerhard Frey, Jean-Pierre Serre and Ken Ribet to Fermat's equation. By accomplishing a partial proof of this conjecture in 1994, Andrew Wiles ultimately succeeded in proving Fermat's Last Theorem, as well as leading the way to a full proof by others of what is now known as the modularity theorem. Taniyama–Shimura–Weil conjecture Around 1955, Japanese mathematicians Goro Shimura and Yutaka Taniyama observed a possible link between two apparently completely distinct branches of mathematics, elliptic curves and modular forms. The resulting modularity theorem (at the time known as the Taniyama–Shimura conjecture) states that every elliptic curve is modular, meaning that it can be associated with a unique modular form. The link was initially dismissed as unlikely or highly speculative, but was taken more seriously when number theorist André Weil found evidence supporting it, though not proving it; as a result the conjecture was often known as the Taniyama–Shimura–Weil conjecture. Even after gaining serious attention, the conjecture was seen by contemporary mathematicians as extraordinarily difficult or perhaps inaccessible to proof. For example, Wiles's doctoral supervisor John Coates states that it seemed "impossible to actually prove", and Ken Ribet considered himself "one of the vast majority of people who believed [it] was completely inaccessible", adding that "Andrew Wiles was probably one of the few people on earth who had the audacity to dream that you can actually go and prove [it]." Ribet's theorem for Frey curves In 1984, Gerhard Frey noted a link between Fermat's equation and the modularity theorem, then still a conjecture. If Fermat's equation had any solution (a, b, c) for an exponent p > 2, then it could be shown that the semi-stable elliptic curve (now known as a Frey-Hellegouarch curve) y^2 = x(x - a^p)(x + b^p) would have such unusual properties that it was unlikely to be modular. This would conflict with the modularity theorem, which asserted that all elliptic curves are modular. As such, Frey observed that a proof of the Taniyama–Shimura–Weil conjecture might also simultaneously prove Fermat's Last Theorem. By contraposition, a disproof or refutation of Fermat's Last Theorem would disprove the Taniyama–Shimura–Weil conjecture. In plain English, Frey had shown that, if this intuition about his equation was correct, then any set of four numbers (a, b, c, n) capable of disproving Fermat's Last Theorem could also be used to disprove the Taniyama–Shimura–Weil conjecture. Therefore, if the latter were true, the former could not be disproven, and would also have to be true. Following this strategy, a proof of Fermat's Last Theorem required two steps. First, it was necessary to prove the modularity theorem, or at least to prove it for the types of elliptic curves that included Frey's equation (known as semistable elliptic curves). This was widely believed inaccessible to proof by contemporary mathematicians.
Second, it was necessary to show that Frey's intuition was correct: that if an elliptic curve were constructed in this way, using a set of numbers that were a solution of Fermat's equation, the resulting elliptic curve could not be modular. Frey showed that this was plausible but did not go as far as giving a full proof. The missing piece (the so-called "epsilon conjecture", now known as Ribet's theorem) was identified by Jean-Pierre Serre who also gave an almost-complete proof and the link suggested by Frey was finally proved in 1986 by Ken Ribet. Following Frey, Serre and Ribet's work, this was where matters stood: Fermat's Last Theorem needed to be proven for all exponents n that were prime numbers. The modularity theorem—if proved for semi-stable elliptic curves—would mean that all semistable elliptic curves must be modular. Ribet's theorem showed that any solution to Fermat's equation for a prime number could be used to create a semistable elliptic curve that could not be modular; The only way that both of these statements could be true, was if no solutions existed to Fermat's equation (because then no such curve could be created), which was what Fermat's Last Theorem said. As Ribet's Theorem was already proved, this meant that a proof of the modularity theorem would automatically prove Fermat's Last theorem was true as well. Wiles's general proof Ribet's proof of the epsilon conjecture in 1986 accomplished the first of the two goals proposed by Frey. Upon hearing of Ribet's success, Andrew Wiles, an English mathematician with a childhood fascination with Fermat's Last Theorem, and who had worked on elliptic curves, decided to commit himself to accomplishing the second half: proving a special case of the modularity theorem (then known as the Taniyama–Shimura conjecture) for semistable elliptic curves. Wiles worked on that task for six years in near-total secrecy, covering up his efforts by releasing prior work in small segments as separate papers and confiding only in his wife. His initial study suggested proof by induction, and he based his initial work and first significant breakthrough on Galois theory before switching to an attempt to extend horizontal Iwasawa theory for the inductive argument around 1990–91 when it seemed that there was no existing approach adequate to the problem. However, by mid-1991, Iwasawa theory also seemed to not be reaching the central issues in the problem. In response, he approached colleagues to seek out any hints of cutting-edge research and new techniques, and discovered an Euler system recently developed by Victor Kolyvagin and Matthias Flach that seemed "tailor made" for the inductive part of his proof. Wiles studied and extended this approach, which worked. Since his work relied extensively on this approach, which was new to mathematics and to Wiles, in January 1993 he asked his Princeton colleague, Nick Katz, to help him check his reasoning for subtle errors. Their conclusion at the time was that the techniques Wiles used seemed to work correctly. By mid-May 1993, Wiles was ready to tell his wife he thought he had solved the proof of Fermat's Last Theorem, and by June he felt sufficiently confident to present his results in three lectures delivered on 21–23 June 1993 at the Isaac Newton Institute for Mathematical Sciences. Specifically, Wiles presented his proof of the Taniyama–Shimura conjecture for semistable elliptic curves; together with Ribet's proof of the epsilon conjecture, this implied Fermat's Last Theorem. 
However, it became apparent during peer review that a critical point in the proof was incorrect. It contained an error in a bound on the order of a particular group. The error was caught by several mathematicians refereeing Wiles's manuscript including Katz (in his role as reviewer), who alerted Wiles on 23 August 1993. The error would not have rendered his work worthless: each part of Wiles's work was highly significant and innovative by itself, as were the many developments and techniques he had created in the course of his work, and only one part was affected. However, without this part proved, there was no actual proof of Fermat's Last Theorem. Wiles spent almost a year trying to repair his proof, initially by himself and then in collaboration with his former student Richard Taylor, without success. By the end of 1993, rumours had spread that under scrutiny, Wiles's proof had failed, but how seriously was not known. Mathematicians were beginning to pressure Wiles to disclose his work whether it was complete or not, so that the wider community could explore and use whatever he had managed to accomplish. But instead of being fixed, the problem, which had originally seemed minor, now seemed very significant, far more serious, and less easy to resolve. Wiles states that on the morning of 19 September 1994, he was on the verge of giving up and was almost resigned to accepting that he had failed, and to publishing his work so that others could build on it and fix the error. He adds that he was having a final look to try and understand the fundamental reasons why his approach could not be made to work, when he had a sudden insight: that the specific reason why the Kolyvagin–Flach approach would not work directly meant that his original attempts using Iwasawa theory could be made to work, if he strengthened it using his experience gained from the Kolyvagin–Flach approach. Fixing one approach with tools from the other approach would resolve the issue for all the cases that were not already proven by his refereed paper. He described later that Iwasawa theory and the Kolyvagin–Flach approach were each inadequate on their own, but together they could be made powerful enough to overcome this final hurdle. On 24 October 1994, Wiles submitted two manuscripts, "Modular elliptic curves and Fermat's Last Theorem" and "Ring theoretic properties of certain Hecke algebras", the second of which was co-authored with Taylor and proved that certain conditions were met that were needed to justify the corrected step in the main paper. The two papers were vetted and published as the entirety of the May 1995 issue of the Annals of Mathematics. The proof's method of identification of a deformation ring with a Hecke algebra (now referred to as an R=T theorem) to prove modularity lifting theorems has been an influential development in algebraic number theory. These papers established the modularity theorem for semistable elliptic curves, the last step in proving Fermat's Last Theorem, 358 years after it was conjectured. Subsequent developments The full Taniyama–Shimura–Weil conjecture was finally proved by Diamond (1996), Conrad et al. (1999), and Breuil et al. (2001) who, building on Wiles's work, incrementally chipped away at the remaining cases until the full result was proved. The now fully proved conjecture became known as the modularity theorem. Several other theorems in number theory similar to Fermat's Last Theorem also follow from the same reasoning, using the modularity theorem. 
For example: no cube can be written as a sum of two coprime nth powers, n ≥ 3. (The case n = 3 was already known by Euler.) Relationship to other problems and generalizations Fermat's Last Theorem considers solutions to the Fermat equation a^n + b^n = c^n with positive integers a, b, and c and an integer n greater than 2. There are several generalizations of the Fermat equation to more general equations that allow the exponent n to be a negative integer or rational, or to consider three different exponents. Generalized Fermat equation The generalized Fermat equation generalizes the statement of Fermat's last theorem by considering positive integer solutions a, b, c, m, n, k satisfying a^m + b^n = c^k. In particular, the exponents m, n, k need not be equal, whereas Fermat's last theorem considers the case m = n = k. The Beal conjecture, also known as the Mauldin conjecture and the Tijdeman-Zagier conjecture, states that there are no solutions to the generalized Fermat equation in positive integers a, b, c, m, n, k with a, b, and c being pairwise coprime and all of m, n, k being greater than 2. The Fermat–Catalan conjecture generalizes Fermat's last theorem with the ideas of the Catalan conjecture. The conjecture states that the generalized Fermat equation has only finitely many solutions (a, b, c, m, n, k) with distinct triplets of values (a^m, b^n, c^k), where a, b, c are positive coprime integers and m, n, k are positive integers satisfying 1/m + 1/n + 1/k < 1. The statement is about the finiteness of the set of solutions because there are 10 known solutions. Inverse Fermat equation When we allow the exponent n to be the reciprocal of an integer, i.e. n = 1/m for some integer m, we have the inverse Fermat equation a^(1/m) + b^(1/m) = c^(1/m). All solutions of this equation were computed by Hendrik Lenstra in 1992. In the case in which the mth roots are required to be real and positive, all solutions are given by a = r s^m, b = r t^m, c = r (s + t)^m for positive integers r, s, t with s and t coprime. Rational exponents For the Diophantine equation a^(n/m) + b^(n/m) = c^(n/m) with n not equal to 1, Bennett, Glass, and Székely proved in 2004 for n > 2, that if n and m are coprime, then there are integer solutions if and only if 6 divides m, and a^(1/m), b^(1/m), and c^(1/m) are different complex 6th roots of the same real number. Negative integer exponents n = −1 All primitive integer solutions (i.e., those with no prime factor common to all of a, b, and c) to the optic equation a^(−1) + b^(−1) = c^(−1) can be written as a = mk + m^2, b = mk + k^2, c = mk for positive, coprime integers m, k. n = −2 The case n = −2 also has an infinitude of solutions, and these have a geometric interpretation in terms of right triangles with integer sides and an integer altitude to the hypotenuse. All primitive solutions to a^(−2) + b^(−2) = d^(−2) are given by a = (v^2 − u^2)(v^2 + u^2), b = 2uv(v^2 + u^2), d = 2uv(v^2 − u^2), for coprime integers u, v with v > u. The geometric interpretation is that a and b are the integer legs of a right triangle and d is the integer altitude to the hypotenuse. Then the hypotenuse itself is the integer (v^2 + u^2)^2, so (a, b, (v^2 + u^2)^2) is a Pythagorean triple. n < −2 There are no solutions in integers for a^n + b^n = c^n for integers n < −2. If there were, the equation could be multiplied through by a^|n| b^|n| c^|n| to obtain (bc)^|n| + (ac)^|n| = (ab)^|n|, which is impossible by Fermat's Last Theorem. abc conjecture The abc conjecture roughly states that if three positive integers a, b and c (hence the name) are coprime and satisfy a + b = c, then the radical d of abc is usually not much smaller than c. In particular, the abc conjecture in its most standard formulation implies Fermat's last theorem for n that are sufficiently large. The modified Szpiro conjecture is equivalent to the abc conjecture and therefore has the same implication.
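The radical appearing in the abc conjecture above is straightforward to compute for small numbers. The sketch below is an illustration added to this text; the triple 1 + 8 = 9 is just a convenient small example of a "high-quality" abc triple, and the quality measure log c / log rad(abc) is the standard quantity that the conjecture asserts rarely exceeds 1 by much.

```python
import math

def radical(n):
    """Product of the distinct prime factors of n, i.e. rad(n)."""
    result, p = 1, 2
    while p * p <= n:
        if n % p == 0:
            result *= p
            while n % p == 0:
                n //= p
        p += 1
    if n > 1:              # remaining factor is prime
        result *= n
    return result

def quality(a, b, c):
    """q(a, b, c) = log c / log rad(abc) for coprime a + b = c."""
    assert a + b == c and math.gcd(a, math.gcd(b, c)) == 1
    return math.log(c) / math.log(radical(a * b * c))

print(radical(720))        # 30, since 720 = 2^4 * 3^2 * 5
print(quality(1, 8, 9))    # ~1.2263: rad(1*8*9) = 6 is smaller than c = 9
```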
An effective version of the abc conjecture, or an effective version of the modified Szpiro conjecture, implies Fermat's Last Theorem outright. Prizes and incorrect proofs In 1816, and again in 1850, the French Academy of Sciences offered a prize for a general proof of Fermat's Last Theorem. In 1857, the academy awarded 3,000 francs and a gold medal to Kummer for his research on ideal numbers, although he had not submitted an entry for the prize. Another prize was offered in 1883 by the Academy of Brussels. In 1908, the German industrialist and amateur mathematician Paul Wolfskehl bequeathed 100,000 gold marks—a large sum at the time—to the Göttingen Academy of Sciences to offer as a prize for a complete proof of Fermat's Last Theorem. On 27 June 1908, the academy published nine rules for awarding the prize. Among other things, these rules required that the proof be published in a peer-reviewed journal; the prize would not be awarded until two years after the publication; and that no prize would be given after 13 September 2007, roughly a century after the competition was begun. Wiles collected the Wolfskehl prize money, then worth $50,000, on 27 June 1997. In March 2016, Wiles was awarded the Norwegian government's Abel prize worth €600,000 for "his stunning proof of Fermat's Last Theorem by way of the modularity conjecture for semistable elliptic curves, opening a new era in number theory". Prior to Wiles's proof, thousands of incorrect proofs were submitted to the Wolfskehl committee, amounting to roughly of correspondence. In the first year alone (1907–1908), 621 attempted proofs were submitted, although by the 1970s, the rate of submission had decreased to roughly 3–4 attempted proofs per month. According to some claims, Edmund Landau tended to use a special preprinted form for such proofs, where the location of the first mistake was left blank to be filled by one of his graduate students. According to F. Schlichting, a Wolfskehl reviewer, most of the proofs were based on elementary methods taught in schools, and often submitted by "people with a technical education but a failed career". In the words of mathematical historian Howard Eves, "Fermat's Last Theorem has the peculiar distinction of being the mathematical problem for which the greatest number of incorrect proofs have been published." In popular culture The popularity of the theorem outside science has led to it being described as achieving "that rarest of mathematical accolades: A niche role in pop culture." Arthur Porges' 1954 short story "The Devil and Simon Flagg" features a mathematician who bargains with the Devil that the latter cannot produce a proof of Fermat's Last Theorem within twenty-four hours. In The Simpsons episode "The Wizard of Evergreen Terrace", Homer Simpson writes the equation on a blackboard, which appears to be a counterexample to Fermat's Last Theorem. The equation is wrong, but it appears to be correct if entered in a calculator with 10 significant figures. In the Star Trek: The Next Generation episode "The Royale", Captain Picard states that the theorem is still unproven in the 24th century. The proof was released five years after the episode originally aired.
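The near-counterexample from The Simpsons mentioned above can be checked directly. The sketch below is an illustration added to this text and assumes the blackboard equation widely reported for that episode, 3987^12 + 4365^12 = 4472^12 (the equation itself is not reproduced in the text above); exact integer arithmetic shows it to be false, while the limited precision of a pocket calculator makes it look correct.

```python
# Homer's "counterexample": is 3987**12 + 4365**12 equal to 4472**12?
a, b, c, n = 3987, 4365, 4472, 12

lhs, rhs = a ** n + b ** n, c ** n
print(lhs == rhs)                  # False: exact arithmetic sees the difference
print(lhs % 3, rhs % 3)            # 0 1 -- a and b are multiples of 3, c is not
print(round(lhs ** (1 / n), 4))    # 4472.0 -- a 10-digit calculator is fooled
```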
Mathematics
Other
null
19028380
https://en.wikipedia.org/wiki/Pelagornis
Pelagornis
Pelagornis is an extinct genus of prehistoric pseudotooth birds, a group of extinct seabirds. Species span from the Oligocene to the Early Pleistocene. Members of Pelagornis are among the largest pseudotooth birds, with one species, P. sandersi, having the widest wingspan of any known bird. Taxonomy Four species have been formally described, but several other named taxa of pseudotooth birds might belong in Pelagornis too. The type species Pelagornis miocaenus is known from Aquitanian (Early Miocene) sediments – formerly believed to be of Middle Miocene age – of Armagnac (France). The original specimen on which P. miocaenus was founded was a left humerus almost the size of a human arm. The scientific name – "the most unimaginative name ever applied to a fossil" in the view of Storrs L. Olson – in no way refers to the bird's startling and at that time unprecedented proportions, and merely means "Miocene pelagic bird". Like many pseudotooth birds, it was initially believed to be related to the albatrosses in the tube-nosed seabirds (Procellariiformes), but it was subsequently placed in the Pelecaniformes, where it was either placed in the cormorant and gannet suborder (Sulae) or united with other pseudotooth birds in a suborder Odontopterygia. While P. miocaenus was the first pseudotooth bird species to be described scientifically, its congener Pelagornis mauretanicus was only named in 2008. It was a slightly distinct and markedly younger species. Its remains have been found in 2.5 Ma Gelasian (Late Pliocene/Early Pleistocene, MN17) deposits at Ahl al Oughlam (Morocco). Additional fossils are placed in Pelagornis, usually without assignment to species, mainly due to their large size and Miocene age. From the United States, such specimens have been found in the Middle Miocene Calvert Formation of Maryland and Virginia, and the contemporary Pungo River Formation of the Lee Creek Mine in North Carolina (though at least one other pelagornithid is probably represented among this material too). USNM 244174 (a tarsometatarsus fragment) was found near Charleston, South Carolina and assigned to P. miocaenus, and the slightly smaller left tarsometatarsal middle trochlea USNM 476044 might also belong here. A broken but fairly complete sternum probably of this genus, specimen LHNB (CC-CP)-1, is known from the Serravallian-Tortonian boundary (Middle to Late Miocene) near Costa da Caparica in Portugal. Contemporary with it are certain specimens from the Bahía Inglesa Formation of Chile, while other material from this formation as well as remains from the Pisco Formation of Peru are from the Late Miocene to Early Pliocene. It is not clear whether the South American fossils – of similar size and age and not including directly comparable bones – are from one or two species. A very worn sternum and some other remains from the Miocene of Oregon as well as roughly contemporary material from California are sometimes assigned to Pelagornis, but this appears to be an error; if not of the contemporary North Pacific Osteodontornis, the material is better regarded as indeterminable. Given the distance in space and time involved, all Pacific material may well have been from a species different from P. miocaenus or even from birds closer to Osteodontornis.
Indeed, some of the older Bahía Inglesa Formation remains tentatively referred to Pelagornis were at first assigned to the mysterious Pseudodontornis longirostris in error, and a proximal (initially misidentified as distal) humerus piece (CMNZ AV 24,960), from the Waiauan (Middle-Late Miocene) cliffs near the mouth of the Waipara River (North Canterbury, New Zealand), seems to differ little from either O. orri or P. miocaenus. The Pisco Formation specimens – which may be from the same species as the Bahía Inglesa ones, or from its direct descendant – on the other hand seem to be well distinct from Osteodontornis. It must be remembered, however, that the Isthmus of Panama had not yet formed during the Miocene. Pelagornis sandersi, whose fossil remains date from 25 million years ago, during the Chattian age of the Oligocene, was described in July 2014. The only known fossil of P. sandersi was first uncovered in 1983 at Charleston International Airport, South Carolina, discovered by James Malcom while he was working on the construction of a new terminal there. At the time the bird lived, 25 million years ago, global temperatures were higher, and the area where it was discovered was an ocean. After excavation, the fossil of P. sandersi was catalogued and put in storage at the Charleston Museum, where it remained until it was rediscovered by paleontologist Dan Ksepka in 2010. The bird is named after Albert Sanders, the former curator of natural history at the Charleston Museum, who led the excavation of P. sandersi. It currently sits at the Charleston Museum, where it was identified as a new species by Ksepka in 2014. Synonyms and relationships A humerus from the Muséum d'Histoire naturelle de Bordeaux was labelled "Pelagornis Delfortrii 1869". Though the name on the label has been listed in the synonymy of P. miocaenus, it does not seem to be a validly established taxon, nor was the specimen compared with P. miocaenus remains. It seems to refer to one of the syntypes of the procellariiform Plotornis delfortrii – found at Léognan (France) and also of Aquitanian age – from which that species was described in the 1870s by Alphonse Milne-Edwards: when the nomen nudum "Pelagornis delfortrii" is listed in the synonymy of P. miocaenus, the pseudotooth bird is claimed to be known from the Léognan deposits also, whereas it has not actually been found there. Pseudodontornis, meanwhile, is a generally Paleogene genus of huge pseudotooth birds. All its species are not uncommonly considered synonymous with earlier-described taxa. The (probably) Eo-Oligocene type species Pseudodontornis longirostris might belong in Pelagornis, though given its uncertain age and provenance a comparison with undisputed Pelagornis material – which is currently lacking – would seem to be necessary before such a step is taken. In that respect, Palaeochenoides mioceanus was also hypothesized to include P. longirostris, and would need to be compared with Pelagornis to determine whether it, too, belongs in this genus. There has been little dedicated study of the relationships of Pelagornis, for while quite a lot of remains are known from the present genus, those of most other pseudotooth birds are few and far between and direct comparisons are further hampered by the damaged state of most remains. The large Gigantornis eaglesomei from the Middle Eocene Atlantic was established based on a broken but not too incomplete sternum and might actually belong in Dasornis. 
In Gigantornis the articular facet for the furcula consists of a flat section at the very tip of the sternal keel and a similar one set immediately above it at an outward angle, and the spina externa is shaped like an Old French shield in cross-section. The slightly smaller LHNB (CC-CP)-1 has a less sharply protruding sternal keel, the articular facet for the furcula consists of a large knob at the forward margin, and the spina externa is narrow in cross-section. While these differences are quite conspicuous, the two fossils are clearly of closely related huge dynamically soaring seabirds, and considering the 30 million years or so that separate Gigantornis and LHNB (CC-CP)-1, the Paleogene taxon may be very close to the Miocene bird's ancestor notwithstanding their differences. In any case, the family name of the pseudotooth birds, Pelagornithidae, as the senior synonym, has widely replaced the once-commonly used Pseudodontornithidae. It may be that Pseudodontornis belongs to a distinct lineage of these birds, in which case the name Pseudodontornithidae would perhaps be revalidated. Also, the presumed similarity between Dasornis and the smaller Odontopteryx seems to be a symplesiomorphy that is not informative regarding their relationships to each other and with Pelagornis. Rather, it is likely that the huge pseudotooth birds form a clade, and in this case, Pseudodontornithidae, like Cyphornithidae and Dasornithidae, is correctly placed in the synonymy of Pelagornithidae even if several families were accepted in the Odontopterygiformes. Description Size and wingspan The sole specimen of P. sandersi has a wingspan estimated between approximately , giving it the largest wingspan of any flying bird yet discovered, twice that of the wandering albatross, which has the largest wingspan of any extant bird (up to ). In this regard, it supplants the previous record holder, the also extinct Argentavis magnificens. The skeletal wingspan (excluding feathers) of P. sandersi is estimated at while that of A. magnificens is estimated at . The fossil specimens show that P. miocaenus was one of the largest pseudotooth birds, hardly smaller in size than Osteodontornis or the older Dasornis. Its head must have been about long in life, and its wingspan was probably more than , perhaps closer to . Skull Like all members of the Pelagornithidae, P. sandersi had tooth-like or knob-like extensions of the bill's margin, called "pseudo-teeth," which would have enabled the living animal to better grip and grasp slippery prey. According to Ksepka, P. sandersi's teeth "don’t have enamel, they don’t grow in sockets, and they aren’t lost and replaced throughout the creature’s life span." Unlike in its contemporary Osteodontornis but like in the older Pseudodontornis, between each two of Pelagornis's large "teeth" was a single smaller one. The salt glands inside the eye sockets were extremely large and well-developed in Pelagornis. Postcranial skeleton Pelagornis differed from Dasornis and its smaller contemporary Odontopteryx in having no pneumatic foramen in the fossa pneumotricipitalis of the humerus, a single long latissimus dorsi muscle attachment site on the humerus instead of two distinct segments, and no prominent ligamentum collaterale ventrale attachment knob on the ulna. Further differences between Odontopteryx and Pelagornis are found in the tarsometatarsus: in the latter, there is a deep fossa for the hallux's first metatarsal bone, whereas the middle-toe trochlea is not conspicuously expanded forward. 
From the humerus pieces of specimen LACM 127875, found in the Eo-Oligocene Pittsburg Bluff Formation near Mist, Oregon (United States), P. miocaenus differs in an external tuberosity that does not extend as far towards the shoulder and that is separated from the elbow end by a wider depression. The head of the humerus is turned more to the inward side and the large protuberance found there is not as far towards the end. The Waipara River humerus mentioned above agrees with P. miocaenus in that respect. If the Oregon fossils are related to Cyphornis and/or Osteodontornis, and if the traits as found in P. miocaenus and the New Zealand specimen are apomorphic, the latter two may indeed be very close relatives. Paleobiology P. sandersi had short, stumpy legs, and was probably only able to fly by hopping off cliff edges. This is supported by the coastal location of its fossils. Originally, there were controversies over whether or not P. sandersi would be able to fly. Previously, the assumed maximum wingspan of a flying bird was about 5.2 m, because it was hypothesized that above this limit, the power required to keep the bird in flight would surpass the power capacity of the bird's muscles. However, this calculation is based on the assumption that the bird in question stays aloft by repeatedly flapping its wings, whereas P. sandersi more likely glided on ocean air currents close to the water, which is less power-intensive than reaching high altitudes. It has been estimated that it was able to fly at up to . P. sandersi's long wingspan and gliding power would have enabled it to travel long distances without landing while hunting. Due to P. sandersi's size, the bird likely molted all of its flight feathers at once, similarly to a grebe, since larger feathers take longer to regrow. P. sandersi is theorized to have glided and traveled similarly to a modern albatross; however, according to Dan Ksepka, its closest modern relatives are chickens and ducks. Some scientists expressed surprise at the idea that this species could fly at all, given that, at between , it would be considered too heavy by the predominant theory of the mechanism by which birds fly. Dan Ksepka of the National Evolutionary Synthesis Center in Durham, North Carolina, who identified that the discovered fossils belonged to a new species, thinks it was able to fly in part because of its relatively small body and long wings, and because it spent much of its time over the ocean, like the albatross. Ksepka is currently focused on determining how P. sandersi evolved and what caused the species to go extinct. Distribution Fossils of Pelagornis have been found in:
Eocene
Aridal Formation (Bartonian), Morocco
La Meseta Formation, Seymour Island, Antarctica
Oligocene
Chandler Bridge Formation, South Carolina
Miocene
Black Rock Sandstone, Australia
Bahía Inglesa Formation (Mayoan-Montehermosan), Chile
Molasse Coquilliere Formation, France
Calvert Formation, Virginia
Waipara River mouth (Waiauan), Canterbury, New Zealand
Pisco Formation (Chasicoan-Huayquerian), Peru
Costa da Caparica or Fonte de Pipa, Tagus Basin, Portugal
Castillo (Colhuehuapian-Santacrucian) and Capadare Formations (Laventan-Mayoan), Venezuela
Pliocene
Greta Formation, New Zealand
Purisima Formation, California and Yorktown Formation, North Carolina
Early Pleistocene
Ahl al Oughlam, Morocco
Biology and health sciences
Prehistoric birds
Animals
7708603
https://en.wikipedia.org/wiki/Fiddler%20ray
Fiddler ray
Trygonorrhina, also known as the fiddler rays or banjo rays, is a genus of guitarfish, family Rhinobatidae. The two species are found along the eastern and southern coasts of Australia. They are benthic in nature, favoring shallow, sandy bays, rocky reefs, and seagrass beds. The eastern fiddler is found to a length of 120 cm and the southern fiddler to a length of 180 cm. The flattened pectoral fin discs of fiddler rays are shorter and more rounded than those of other guitarfishes. Their tails are slender, with a well-developed caudal fin and two triangular dorsal fins. Their snouts are translucent. The fiddler rays are also distinguished from other guitarfishes in that the anterior nasal flaps of their nostrils are expanded backwards and fused together into a nasal curtain that reaches the mouth. Fiddler rays feed on bottom-dwelling shellfish, crabs, and worms, which they crush between their jaws. The eastern fiddler ray is known to scavenge from fish traps. Like other guitarfishes, fiddler rays are ovoviviparous. The egg capsules of the southern fiddler ray are reported to be golden in colour, containing three embryos each. It gives birth to litters of four to six young per breeding cycle. Fiddler rays are harmless and easily approached by divers. Southern fiddler rays are taken as bycatch by commercial trawlers and by recreational fishers; the flesh is of good quality and sold in small quantities. The magpie fiddler ray (previously Trygonorrhina melaleuca) is now considered a variant of Trygonorrhina dumerilii. Species There are currently two recognized species in this genus:
Trygonorrhina dumerilii Castelnau, 1873 (Southern fiddler ray)
Trygonorrhina fasciata J. P. Müller & Henle, 1841 (Eastern fiddler ray)
Biology and health sciences
Batoidea
Animals
7708951
https://en.wikipedia.org/wiki/Digital%20Terrestrial%20Multimedia%20Broadcast
Digital Terrestrial Multimedia Broadcast
DTMB (Digital Terrestrial Multimedia Broadcast) is the digital TV standard for mobile and fixed devices, developed in the People's Republic of China. It is used there and in both of its special administrative regions (Hong Kong and Macau), and also in Cambodia, the Comoros, Cuba, East Timor, Laos, Vietnam, and Pakistan. In Pakistan, as part of the China–Pakistan Economic Corridor Project, ZTE Corporation will provide Pakistan Television Corporation with collaboration across several digital terrestrial television technologies, staff training, and content creation, including partnerships with Chinese multinational companies in multiple areas, such as television sets and set top boxes, as a form of "International Cooperation". Overview Previously known as DMB-T/H (Digital Multimedia Broadcast-Terrestrial/Handheld), the DTMB is a merger of the standards ADTB-T (developed by the Shanghai Jiao Tong University), DMB-T (developed by Tsinghua University), and TiMi (Terrestrial Interactive Multiservice Infrastructure); this last one is the standard proposed by the Academy of Broadcasting Science in 2002. At first, neither Shanghai Jiao Tong University nor Tsinghua had enough political strength to make its own technology the sole standard, so the final decision was to opt for a dual standard, merged with the TiMi 3 standard, in response to the need for backward compatibility. The DTMB was created in 2004 and finally became an official DTT standard in 2006.
DTMB in China
2005: trial
18/08/2006: formal adoption as a DTT standard
08/08/2008: analogue to digital switchover
30/11/2020–31/03/2021: analogue switch-off
DTMB channels available in China
National: CCTV-1, 2, 4, 7, 9, 10, 11, 12, 13, 14, 15, CGTN English
Provinces: main channel of the provincial broadcaster in each province
High-definition channels: varies
City or local channels: varies
DTMB in Hong Kong
18/08/2006: formal adoption as a DTT standard
31/12/2007: analogue to digital switchover
30/11/2020: analogue switch-off
DTMB in Macau
18/08/2006: formal adoption as a DTT standard
15/07/2008: analogue to digital switchover
30/06/2023: analogue switch-off
DTMB elsewhere DTMB started in Laos in 2007. Cambodia adopted the DTMB standard in 2012. The Comoros chose DTMB in 2013. Cuba adopted DTMB in 2013. In 2017, Pakistan and ZTE signed a contract to deploy DTMB broadcasts in the country by 2020. East Timor adopted DTMB, and work to implement it started in 2019. Versus CMMB See China Multimedia Mobile Broadcasting (CMMB). Countries and territories using DTMB Asia , including its SARs: (trial) Caribbean Africa
Countries and territories where DTMB signals can be received
Europe
Asia
– through border with China.
– through border with China.
– particularly Kalayaan Islands, known for reception of TV signals from Chinese-administered islands, although it is illegal to use due to national security.
– through border with China.
Africa
– Receives TV signals from Comoros.
The Americas
– particularly Quintana Roo and Yucatan, known for reception of TV signals from Cuba using this technology.
– particularly Key West, Florida, known for reception of TV signals from Cuba using this technology.
Caribbean
– receives TV signals from Cuba.
– receives TV signals from Cuba.
– receives TV signals from Cuba.
Description Besides the basic functions of traditional television service, the DTMB allows additional services using the new television broadcasting system. The DTMB system is compatible with fixed reception (indoor and outdoor) and mobile digital terrestrial television. 
Mobile reception: compatible with digital TV broadcasting in standard definition (SD), digital audio broadcasting, multimedia broadcasting, and data broadcasting services.
Fixed reception: in addition to the previous services, also supports high-definition digital broadcasting (HDTV).
Modulation The DTMB standard uses many advanced technologies to improve its performance. For example, it uses a pseudo-random noise (PN) code as a guard interval, which allows faster synchronization and more accurate channel estimation; Low-Density Parity Check (LDPC) coding for error correction; and Time Domain Synchronization-Orthogonal Frequency Division Multiplexing (TDS-OFDM) modulation, which allows the combination of SD, HD, and multimedia broadcasting services. The system gives flexibility to the services offered and supports both single-frequency networks (SFN) and multi-frequency networks (MFN). Different modes and parameters can be chosen depending on the type of service and the network environment. The pseudo-random sequence is defined in the time domain, while the data, handled via the Discrete Fourier transform (DFT), is defined in the frequency domain. The two are multiplexed in the time domain, resulting in time-domain synchronization (TDS). Functional scheme The transmission system converts the input signal into the output terrestrial TV signal. The data passes through the encoder and the FEC (Forward Error Correction) protection process, then through constellation mapping and interleaving to create the data blocks. The data block and the TPS information are multiplexed and pass through the data processor to form the frame body. The body and the frame header are combined to form the signal frame, which is passed through an SRRC (Square Root Raised Cosine) filter to become a signal within an 8 MHz channel bandwidth. Finally, the signal is modulated to place it in the corresponding frequency band. Features
Bit rate: from 4.813 Mbit/s to 32.486 Mbit/s
Combination of SD, HD, and multimedia services
Flexibility of services
Data processing in both the time and frequency domains
Broadcasting of between 6 and 15 SD channels and 1 or 2 HD channels
Same quality of reception as wired broadcast
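As a rough illustration of the TDS-OFDM idea described above, the following Python/NumPy sketch builds a single simplified signal frame in which a known PN sequence, rather than a cyclic prefix, serves as the guard interval. The carrier count, guard length, and QPSK mapping are assumed illustrative parameters, not a faithful reproduction of the standard's full processing chain (no LDPC coding, interleaving, or SRRC filtering is included).

```python
# Simplified TDS-OFDM frame sketch; parameter values are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

N_CARRIERS = 3780   # example multicarrier size
PN_LENGTH = 420     # example guard-interval (frame header) length

def qpsk_symbols(bits):
    """Map pairs of bits to unit-energy QPSK constellation points."""
    b = bits.reshape(-1, 2)
    return ((1 - 2 * b[:, 0]) + 1j * (1 - 2 * b[:, 1])) / np.sqrt(2)

def make_pn_guard(length, seed=1):
    """Known +/-1 pseudo-random noise sequence used as the guard interval."""
    return 1.0 - 2.0 * np.random.default_rng(seed).integers(0, 2, length)

def tds_ofdm_frame(payload_bits):
    """One frame = PN header (time domain) + IFFT of mapped data (frequency domain)."""
    data = qpsk_symbols(payload_bits)        # frequency-domain data block
    body = np.fft.ifft(data, N_CARRIERS)     # time-domain OFDM frame body
    head = make_pn_guard(PN_LENGTH)          # time-domain PN frame header
    return np.concatenate([head, body])      # multiplex header and body in time

frame = tds_ofdm_frame(rng.integers(0, 2, 2 * N_CARRIERS))
print(len(frame))   # 420 + 3780 = 4200 samples in this illustrative frame
```

Because the receiver already knows the PN header, it can use it both as a guard interval and as a training sequence for synchronization and channel estimation, which is the design motivation the text describes.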
Technology
Broadcasting
null
7710104
https://en.wikipedia.org/wiki/Tambaqui
Tambaqui
The tambaqui (Colossoma macropomum) is a large species of freshwater fish in the family Serrasalmidae. It is native to tropical South America, but is kept in aquaculture and has been introduced elsewhere. It is also known by the names black pacu, black-finned pacu, giant pacu, cachama, gamitana, and sometimes as pacu (a name used for several other related species). The tambaqui is currently the only member of Colossoma, but the Piaractus species were also included in this genus in the past. Distribution The tambaqui is native to freshwater habitats in the Amazon and Orinoco basins of tropical South America. In nutrient-rich whitewater rivers such as the Madeira, Juruá, Putumayo (Içá) and Purus it ranges throughout, all the way up to their headwaters. In nutrient-poor blackwater rivers such as the Rio Negro and clearwater rivers such as several right-bank tributaries of the Madeira it generally only occurs in the lower c. and is rare beyond the lowermost c. . It is widely kept in aquaculture outside its native range in South America. Miocene fossils are known from the Magdalena River, but modern occurrence in this river is due to introductions by humans. Description The tambaqui is the heaviest characin in the Americas (the lighter Salminus can grow longer) and the second heaviest scaled freshwater fish in South America (after the arapaima). It can reach up to in total length and in weight, but a more typical size is . The largest caught by rod-and-reel and recognized by IGFA weighed . After the flood season, around 10% of a tambaqui's weight is the visceral fat reserves and at least another 5% is fat found in the head and muscles. It is similar in shape to the piranha and juveniles are sometimes confused with the carnivorous fish; the tambaqui is tall and laterally compressed with large eyes and a slightly arched back. Unlike more predatory species, the teeth of the tambaqui are molar-like, an adaptation for crushing plant seeds and nuts. The lower half of its body is typically mainly blackish. The remainder is mainly gray, yellowish or olive, but the exact hue varies considerably and depends in part on habitat, with individuals in blackwater being much darker than individuals from whitewater. The pelvic, anal and small pectoral fins are black. The tambaqui resembles the red-bellied pacu (Piaractus brachypomus), but the latter species has a more rounded head profile (less elongated and pointed) and a smaller adipose fin that lacks rays, as well as differences in teeth and operculum. Hybrids between the tambaqui and the similar Piaractus (both species) have been produced in aquaculture, and are occasionally seen in the wild. The hybrid offspring can be difficult to identify by appearance alone. Ecology Habitat, breeding and migration This species is mostly solitary, but it migrates in large schools. During the non-breeding season, adults stay in flooded forests of white (várzea), clear and blackwater (igapó) rivers. They stay there for four to seven months during the flood season, but as the water level drops they move into the main river channels or to a lesser extent floodplain lakes. At the start of the next flood season, large schools move into whitewater rivers where they spawn between November and February. The exact spawning location in the whitewater rivers is not entirely certain, but is apparently along woody shores or grassy levees. The schools then break up as the adults return to the flooded forest of white, clear and blackwater rivers, and the annual pattern is repeated. 
Larvae are found in whitewater rivers, including the Amazon River itself. Juveniles stay near macrophytes in floodplains and flooded forests year-round, only switching to the adult migration pattern when reaching sexual maturity. Maturity is reached at a length of about . The species regularly reaches an age of 40 years and may reach up to 65. Oxygen, salt and pH resistance When there is not enough oxygen in the river or lake, tambaqui obtain oxygen from the air. They are able to do this thanks to physical adaptations such as their gills and the vascularization of the swim bladder. The tambaqui is a freshwater fish. Juveniles can survive in brackish water when the salinity is gradually raised. Salinity levels above 20 g/L result in death. When juveniles are reared in salinities above 10 g/L, there is a significant detrimental effect on growth, haematological parameters and osmoregulation. In one experiment, the pH of the tambaqui's water was altered; no deaths occurred unless the pH fell to 3.0. The only internal difference noted as the pH was altered was a change in the acid-base status of the plasma and red cells. In another experiment, tambaquis were exposed to pH drops from 6.0 to 4.0, similar to what they would encounter in their natural habitat. Researchers found that the microbial communities of the tambaqui's gut were very resilient to the pH drops, which could explain part of the ability of tambaquis to migrate between black and white water streams in the Amazon. Diet Tambaqui consume fruits and seeds, especially from woody angiosperms and herbaceous species. The quantity and quality of these foods influence where the fish choose to live. In one study during the high-water season, 78–98 percent of the diet consisted of fruits. Another study of the stomach content of 138 specimens during the high-water season found that 44% of the weight was fruits and seeds, 30% was zooplankton and 22% was wild rice. Among 125 specimens during the low-water season, a higher percentage had empty stomachs (14%, about ten times more than in the high-water season) and about 70% of the total stomach content weight was zooplankton. In addition to seeds, fruits, wild rice and zooplankton, smaller levels of insects, snails, shrimps, small fish, filamentous algae and decaying plants are consumed. Seed dispersal The tambaqui plays an important role in dispersing plant seeds. Fruit seeds that fall into the water are consumed by the tambaqui and dispersed elsewhere, much as birds disperse seeds. This consumption includes about 35% of the trees and lianas during the flood season, and these seeds can germinate after the floodwater recedes. Compared to the younger and smaller tambaqui, larger and older tambaqui are able to disperse the seeds at a faster rate. The gut of a well-fed tambaqui can contain more than seeds. In general, more seeds are able to pass undamaged through the red-bellied pacu (Piaractus brachypomus) than the tambaqui, meaning that the former overall is a more efficient seed disperser. Relationship to humans The meat of the tambaqui is popular and fetches top prices in fish markets in its native range. It is marketed fresh and frozen. Wild populations of the tambaqui have declined because of overfishing and many currently caught fish are juveniles. In Manaus alone, the landings fell from c. per year in the 1970s to in 1996. 
Based on a review by IBAMA, it was the 11th most caught fish by weight in the Brazilian Amazon in 1998 (just ahead of the closely related pirapitinga, Piaractus brachypomus). The tambaqui is now widely kept in aquaculture. It can live in oxygen-poor waters and is very resistant to diseases. In Brazil, tambaqui is one of the main farmed fish species, and therefore important to the country's economy. Studies of farmed tambaqui in Brazil have revealed a genetic diversity similar to that seen among wild populations. In fish farms this species is sometimes hybridized with Piaractus to produce offspring that accept a wider temperature range (colder water) than pure tambaqui. In Thailand, this fish, known locally as pla khu dam (ปลาคู้ดำ), was introduced from Hong Kong and Singapore as part of fish-farming projects, but has adapted to local conditions and thrives in the wild in some areas. There is also an introduced population in Puerto Rico and singles (likely deliberate releases by aquarists) have been caught in a wide range of U.S. states, but only those in the warmest regions can survive. Juveniles long, sometimes labelled as "vegetarian piranha", are frequently seen in the aquarium trade, but they rapidly grow to a large size and require an enormous tank.
Biology and health sciences
Characiformes
Animals
12371603
https://en.wikipedia.org/wiki/Common%20midwife%20toad
Common midwife toad
The common midwife toad (Alytes obstetricans) is a species of midwife frog in the family Alytidae (formerly Discoglossidae). It is found in Belgium, France, Germany, Luxembourg, the Netherlands, Portugal, Spain, Switzerland, and the United Kingdom (although, in the latter, only as an introduction). Like other members of its genus (Alytes), the male toad carries the eggs around entwined on his back and thighs until they are ready to hatch. Its natural habitats are temperate forests, dry forests, shrubland, rivers, freshwater lakes, freshwater marshes, temperate desert, arable land, pastureland and urban areas. It is threatened by habitat loss. Description The common midwife toad can grow to a length of but is usually rather smaller than this, the females generally being larger than the males. It is broad and stocky and has a large head with prominent eyes, the pupils being vertical slits. The skin is mostly smooth with a few small warts and granules and a row of large warts down either side. The parotoid glands are small and there are additional glands in the underarm and ankle regions. There are three tubercles on each metacarpal. The colour is quite variable, often being grey, olive or brown, sometimes speckled with small greenish or brown spots. The large warts are often reddish or yellow. The underside is pale grey often with spots of darker grey on the throat and chest. Distribution and habitat The common midwife toad is found in a number of countries in north-west Europe. It is common throughout France and is also found in southern Belgium and the Netherlands, Luxembourg, western Germany and northern and western Switzerland. There are some disconnected outlying populations in Portugal and northern Spain. In the Pyrenees it is found at altitudes of up to . It is usually found not far from water but sometimes wanders away, often living in sunny locations. These include hilly areas, cultivated land, quarries, rocky slopes, gravel pits, woods, parks and gardens. It is active at dusk and through the night, spending the day hidden in undergrowth, in crevices or under logs or stones in a place where it can keep damp. It can dig a burrow with its fore limbs in which to lie and spends the winter hibernating on land. Sequencing of 16S and COI genes has demonstrated that four of the introduced populations in Bedfordshire, England, share the same origin. However, due to limitations in the reference database, the researchers cannot be sure of the exact location of origin. Researchers have noted a number of limb deformities in the introduced populations found throughout the United Kingdom, which are likely linked to small founder population sizes. Systematics The common midwife toad (Alytes obstetricans) has four subspecies within its distribution: A. o. almogavarii, A. o. boscai, A. o. obstetricans, and A. o. pertinax. A. o. obstetricans is the subspecies with the largest distribution, spreading from the Iberian Peninsula northward into the rest of its range. The other three subspecies are local to the Iberian Peninsula. These subspecies formed in glacial refugia during the Plio-Pleistocene climatic fluctuations. Due to the genetic differences of these populations, their individual conservation is highly important. Recently, A. o. almogavarii has been recommended for recognition as an independent incipient species, Alytes almogavarii, as it has been shown to be moving towards a complete barrier to gene flow with the other subspecies. 
Behaviour When threatened, the midwife toad inflates, filling itself with air so as to appear as large as possible. It may also rear up on all four limbs, raise its rump and stand in a threatening posture with its head down and eyes shut. Reproduction takes place in spring and summer. The female seeks out a male and invites him to mate. Females are more prone to selecting larger males, a preference linked to fitness. He proceeds to hold her round the flanks and uses his toes to stimulate her cloaca. After about half an hour he squeezes her sides firmly, whereby she stretches her hind legs and ejects a mass of eggs embedded in strings of jelly. The male releases her and inseminates the egg mass with his sperm. A little later, he begins to pull and pummel the egg mass, teasing it out so that he can wrap the strings around his back legs. He can mate again while the eggs are twined round his limbs and can carry up to three clutches of eggs at a time, a total of about 150 eggs. He looks after them until they hatch, in 3 to 8 weeks. He keeps them moist by lying up in a damp place during the day and by going for a swim if there is risk of them drying out. He may secrete a substance through the skin that protects the eggs from infection. When the eggs are about to hatch, he detaches them in a calm stretch of water like a ditch, village pond, spring or drinking trough. There is evidence suggesting that this may include temporary water bodies, such as those found within flowerpot saucers in urban gardens. The eggs hatch into tadpoles, which feed and grow over the course of several months, develop limbs, lose their tails and eventually undergo metamorphosis into juvenile toads. They may overwinter as tadpoles, becoming exceptionally large in the process. Diet Common midwife toads feed mostly on insects and other arthropods, as well as carrion. Role in history of biology, sociology of science Arthur Koestler's 1971 book The Case of the Midwife Toad gave the species a role in new thinking on the development of scientific paradigms, based on the case of Paul Kammerer, who claimed to have shown Lamarckian inheritance in experiments with the toad.
Biology and health sciences
Frogs and toads
Animals
12374274
https://en.wikipedia.org/wiki/Aliquot%20sum
Aliquot sum
In number theory, the aliquot sum s(n) of a positive integer n is the sum of all proper divisors of n, that is, all divisors of n other than n itself. That is, s(n) = Σ_{d|n, d≠n} d. It can be used to characterize the prime numbers, perfect numbers, sociable numbers, deficient numbers, abundant numbers, and untouchable numbers, and to define the aliquot sequence of a number. Examples For example, the proper divisors of 12 (that is, the positive divisors of 12 that are not equal to 12) are 1, 2, 3, 4, and 6, so the aliquot sum of 12 is 16, i.e. (1 + 2 + 3 + 4 + 6 = 16). The values of s(n) for n = 1, 2, 3, ... are: 0, 1, 1, 3, 1, 6, 1, 7, 4, 8, 1, 16, 1, 10, 9, 15, 1, 21, 1, 22, 11, 14, 1, 36, 6, 16, 13, 28, 1, 42, 1, 31, 15, 20, 13, 55, 1, 22, 17, 50, 1, 54, 1, 40, 33, 26, 1, 76, 8, 43, ... Characterization of classes of numbers The aliquot sum function can be used to characterize several notable classes of numbers: 1 is the only number whose aliquot sum is 0. A number is prime if and only if its aliquot sum is 1. The aliquot sums of perfect, deficient, and abundant numbers are equal to, less than, and greater than the number itself respectively. The quasiperfect numbers (if such numbers exist) are the numbers n whose aliquot sums equal n + 1. The almost perfect numbers (which include the powers of 2, these being the only known such numbers so far) are the numbers n whose aliquot sums equal n − 1. The untouchable numbers are the numbers that are not the aliquot sum of any other number. Their study goes back at least to Abu Mansur al-Baghdadi (circa 1000 AD), who observed that both 2 and 5 are untouchable. Paul Erdős proved that their number is infinite. The conjecture that 5 is the only odd untouchable number remains unproven, but would follow from a form of Goldbach's conjecture together with the observation that, for a semiprime number pq, the aliquot sum is p + q + 1. Mathematicians have noted that one of Erdős' "favorite subjects of investigation" was the aliquot sum function. Iteration Iterating the aliquot sum function produces the aliquot sequence n, s(n), s(s(n)), ... of a nonnegative integer n (in this sequence, we define s(0) = 0). Sociable numbers are numbers whose aliquot sequence is a periodic sequence. Amicable numbers are sociable numbers whose aliquot sequence has period 2. It remains unknown whether these sequences always end with a prime number, a perfect number, or a periodic sequence of sociable numbers.
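A minimal sketch in Python of the definitions above (the function name is mine, not a standard library routine): it computes aliquot sums by trial division and reproduces the first values listed, the prime and perfect-number characterizations, and a short aliquot sequence.

```python
def aliquot_sum(n):
    """Sum of the proper divisors of n (all divisors of n other than n itself)."""
    if n <= 1:
        return 0
    total = 1           # 1 divides every n > 1
    d = 2
    while d * d <= n:   # trial division up to sqrt(n), adding divisor pairs
        if n % d == 0:
            total += d
            if d != n // d:
                total += n // d
        d += 1
    return total

print([aliquot_sum(n) for n in range(1, 13)])  # [0, 1, 1, 3, 1, 6, 1, 7, 4, 8, 1, 16]
print(aliquot_sum(7) == 1)     # 7 is prime, so its aliquot sum is 1
print(aliquot_sum(28) == 28)   # 28 is perfect: its aliquot sum equals the number itself

# Aliquot sequence of 10: iterate s until 0 is reached (10 -> 8 -> 7 -> 1 -> 0)
seq, n = [10], 10
while n != 0:
    n = aliquot_sum(n)
    seq.append(n)
print(seq)
```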
Mathematics
Sums and products
null
23195
https://en.wikipedia.org/wiki/Petroleum
Petroleum
Petroleum is a naturally occurring yellowish-black liquid mixture. It consists mainly of hydrocarbons, and is found in geological formations. The term petroleum refers both to naturally occurring unprocessed crude oil and to petroleum products that consist of refined crude oil. Conventional reserves of petroleum are primarily recovered by drilling, which is done after a study of the relevant structural geology, analysis of the sedimentary basin, and characterization of the petroleum reservoir. There are also unconventional reserves such as oil sands and oil shale, which are recovered by other means such as fracking. Once extracted, oil is refined and separated, most easily by distillation, into innumerable products for direct use or use in manufacturing. Products include fuels such as gasoline (petrol), diesel, kerosene and jet fuel; asphalt and lubricants; chemical reagents used to make plastics; solvents, textiles, refrigerants, paint, synthetic rubber, fertilizers, pesticides, pharmaceuticals, and thousands of others. Petroleum is used in manufacturing a vast variety of materials essential for modern life, and it is estimated that the world consumes about each day. Petroleum production played a key role in industrialization and economic development. Some countries, known as petrostates, gained significant economic and international power from their control of oil production and trade. Petroleum exploitation can be damaging to the environment and human health. Extraction, refining and burning of petroleum fuels all release large quantities of greenhouse gases, so petroleum is one of the major contributors to climate change. Other negative environmental effects include direct releases, such as oil spills, as well as air and water pollution at almost all stages of use. These environmental effects have direct and indirect health consequences for humans. Oil has also been a source of internal and inter-state conflict, leading to both state-led wars and other resource conflicts. Production of petroleum is estimated to reach peak oil before 2035 as global economies lower their dependence on petroleum as part of climate change mitigation and a transition towards renewable energy and electrification. Etymology The word petroleum comes from Medieval Latin (literally 'rock oil'), which comes from Latin petra 'rock' (from Greek ) and oleum 'oil' (from Greek ). The origin of the term stems from monasteries in southern Italy where it was in use by the end of the first millennium as an alternative to the older term "naphtha". After that, the term was used in numerous manuscripts and books, such as in the treatise De Natura Fossilium, published in 1546 by the German mineralogist Georg Bauer, also known as Georgius Agricola. After the advent of the oil industry, during the second half of the 19th century, the term came to refer commonly to the liquid form of hydrocarbons. History Early Petroleum, in one form or another, has been used since ancient times. More than 4300 years ago, bitumen was mentioned when the Sumerians used it to make boats. A tablet of the legend of the birth of Sargon of Akkad mentions a basket which was closed by straw and bitumen. More than 4000 years ago, according to Herodotus and Diodorus Siculus, asphalt was used in the construction of the walls and towers of Babylon; there were oil pits near Ardericca and Babylon, and a pitch spring on Zakynthos. Great quantities of it were found on the banks of the river Issus, one of the tributaries of the Euphrates. 
Ancient Persian tablets indicate the medicinal and lighting uses of petroleum in the upper levels of their society. The use of petroleum in ancient China dates back to more than 2000 years ago. The I Ching, one of the earliest Chinese writings, cites that oil in its raw state, without refining, was first discovered, extracted, and used in China in the first century BCE. In addition, the Chinese were the first to record the use of petroleum as fuel as early as the fourth century BCE. By 347 CE, oil was produced from bamboo-drilled wells in China. In the 7th century, petroleum was among the essential ingredients for Greek fire, an incendiary projectile weapon that was used by Byzantine Greeks against Arab ships, which were then attacking Constantinople. Crude oil was also distilled by Persian chemists, with clear descriptions given in Arabic handbooks such as those of Abu Bakr al-Razi (Rhazes). The streets of Baghdad were paved with tar, derived from petroleum that became accessible from natural fields in the region. In the 9th century, oil fields were exploited in the area around modern Baku, Azerbaijan. These fields were described by the Persian geographer Abu Bakr al-Razi in the 10th century, and by Marco Polo in the 13th century, who described the output of those wells as hundreds of shiploads. Arab and Persian chemists also distilled crude oil to produce flammable products for military purposes. Through Islamic Spain, distillation became available in Western Europe by the 12th century. It has also been present in Romania since the 13th century, being recorded as păcură. Sophisticated oil pits, deep, were dug by the Seneca people and other Iroquois in Western Pennsylvania as early as 1415–1450. The French General Louis-Joseph de Montcalm encountered Seneca using petroleum for ceremonial fires and as a healing lotion during a visit to Fort Duquesne in 1750. Early British explorers to Myanmar documented a flourishing oil extraction industry based in Yenangyaung that, in 1795, had hundreds of hand-dug wells under production. Merkwiller-Pechelbronn is said to be the first European site where petroleum has been explored and used. The still active Erdpechquelle, a spring where petroleum appears mixed with water has been used since 1498, notably for medical purposes. 19th century There was activity in various parts of the world in the mid-19th century. A group directed by Major Alexeyev of the Bakinskii Corps of Mining Engineers hand-drilled a well in the Baku region of Bibi-Heybat in 1846. There were engine-drilled wells in West Virginia in the same year as Drake's well. An early commercial well was hand dug in Poland in 1853, and another in nearby Romania in 1857. At around the same time the world's first, small, oil refinery was opened at Jasło in Poland (then Austria), with a larger one opened at Ploiești in Romania shortly after. Romania (then being a vassal of the Ottoman Empire) is the first country in the world to have had its annual crude oil output officially recorded in international statistics: 275 tonnes for 1857. In 1858, Georg Christian Konrad Hunäus found a significant amount of petroleum while drilling for lignite in Wietze, Germany. Wietze later provided about 80% of German consumption in the Wilhelminian Era. The production stopped in 1963, but Wietze has hosted a Petroleum Museum since 1970. Oil sands have been mined since the 18th century. In Wietze in lower Saxony, natural asphalt/bitumen has been explored since the 18th century. 
In both Pechelbronn and Wietze, the coal industry dominated the petroleum technologies. Chemist James Young in 1847 noticed a natural petroleum seepage in the coal mine at Riddings, Alfreton, Derbyshire, from which he distilled a light thin oil suitable for use as lamp oil, at the same time obtaining a more viscous oil suitable for lubricating machinery. In 1848, Young set up a small business refining crude oil. Young eventually succeeded, by distilling cannel coal at low heat, in creating a fluid resembling petroleum, which when treated in the same way as the seep oil gave similar products. Young found that by slow distillation he could obtain several useful liquids from it, one of which he named "paraffine oil" because at low temperatures it congealed into a substance resembling paraffin wax. The production of these oils and solid paraffin wax from coal formed the subject of his patent dated October 17, 1850. In 1850, Young & Meldrum and Edward William Binney entered into partnership under the title of E.W. Binney & Co. at Bathgate in West Lothian and E. Meldrum & Co. at Glasgow; their works at Bathgate were completed in 1851 and became the first truly commercial oil-works in the world with the first modern oil refinery. The world's first oil refinery was built in 1856 by Ignacy Łukasiewicz in Austria. His achievements also included the discovery of how to distill kerosene from seep oil, the invention of the modern kerosene lamp (1853), the introduction of the first modern street lamp in Europe (1853), and the construction of the world's first modern oil "mine" (1854) at Bóbrka, near Krosno (still operational as of 2020). The demand for petroleum as a fuel for lighting in North America and around the world quickly grew. The first oil well in the Americas was drilled in 1859 by Edwin Drake at what is now called the Drake Well in Cherrytree Township, Pennsylvania. There was also a company associated with it, and it sparked a major oil drilling boom. The first commercial oil well in Canada became operational in 1858 at Oil Springs, Ontario (then Canada West). Businessman James Miller Williams dug several wells between 1855 and 1858 before discovering a rich reserve of oil four metres below ground. Williams extracted 1.5 million litres of crude oil by 1860, refining much of it into kerosene lamp oil. Williams's well became commercially viable a year before Drake's Pennsylvania operation and could be argued to be the first commercial oil well in North America. The discovery at Oil Springs touched off an oil boom which brought hundreds of speculators and workers to the area. Advances in drilling continued into 1862 when local driller Shaw reached a depth of 62 metres using the spring-pole drilling method. On January 16, 1862, after an explosion of natural gas, Canada's first oil gusher came into production, shooting into the air at a recorded rate of per day. By the end of the 19th century the Russian Empire, particularly the Branobel company in Azerbaijan, had taken the lead in production. 20th century Access to oil was and still is a major factor in several military conflicts of the 20th century, including World War II, during which oil facilities were a major strategic asset and were extensively bombed. The German invasion of the Soviet Union included the goal of capturing the Baku oilfields, as it would provide much-needed oil supplies for the German military, which was suffering from blockades. Oil exploration in North America during the early 20th century later led to the U.S. 
becoming the leading producer by mid-century. As petroleum production in the U.S. peaked during the 1960s, the United States was surpassed by Saudi Arabia and the Soviet Union in total output. In 1973, Saudi Arabia and other Arab nations imposed an oil embargo against the United States, United Kingdom, Japan and other Western nations which supported Israel in the Yom Kippur War of October 1973. The embargo caused an oil crisis. This was followed by the 1979 oil crisis, which was caused by a drop in oil production in the wake of the Iranian Revolution and caused oil prices to more than double. 21st century The two oil price shocks had many short- and long-term effects on global politics and the global economy. They led to sustained reductions in demand as a result of substitution to other fuels, especially coal and nuclear, and improvements in energy efficiency, facilitated by government policies. High oil prices also induced investment in oil production by non-OPEC countries, including Prudhoe Bay in Alaska, the North Sea offshore fields of the United Kingdom and Norway, the Cantarell offshore field of Mexico, and oil sands in Canada. About 90 percent of vehicular fuel needs are met by oil. Petroleum also makes up 40 percent of total energy consumption in the United States, but is responsible for only one percent of electricity generation. Petroleum's worth as a portable, dense energy source powering the vast majority of vehicles and as the base of many industrial chemicals makes it one of the world's most important commodities. The top three oil-producing countries as of 2018 are the United States, Russia, and Saudi Arabia. In 2018, due in part to developments in hydraulic fracturing and horizontal drilling, the United States became the world's largest producer. About 80 percent of the world's readily accessible reserves are located in the Middle East, with 62.5 percent coming from the Arab five: Saudi Arabia, United Arab Emirates, Iraq, Qatar, and Kuwait. A large portion of the world's total oil exists as unconventional sources, such as bitumen in Athabasca oil sands and extra heavy oil in the Orinoco Belt. While significant volumes of oil are extracted from oil sands, particularly in Canada, logistical and technical hurdles remain, as oil extraction requires large amounts of heat and water, making its net energy content quite low relative to conventional crude oil. Thus, Canada's oil sands are not expected to provide more than a few million barrels per day in the foreseeable future. Composition Petroleum consists of a variety of liquid, gaseous, and solid components. Lighter hydrocarbons are the gases methane, ethane, propane and butane. Otherwise, the bulk of the liquid and solids are largely heavier organic compounds, often hydrocarbons (C and H only). The proportion of light hydrocarbons in the petroleum mixture varies among oil fields. An oil well produces predominantly crude oil. Because the pressure is lower at the surface than underground, some of the gas will come out of solution and be recovered (or burned) as associated gas or solution gas. A gas well produces predominantly natural gas. However, because the underground temperature is higher than at the surface, the gas may contain heavier hydrocarbons such as pentane, hexane, and heptane ("natural-gas condensate", often shortened to condensate.) Condensate resembles gasoline in appearance and is similar in composition to some volatile light crude oils. 
The hydrocarbons in crude oil are mostly alkanes, cycloalkanes and various aromatic hydrocarbons, while the other organic compounds contain nitrogen, oxygen, and sulfur, and traces of metals such as iron, nickel, copper and vanadium. Many oil reservoirs contain live bacteria. The exact molecular composition of crude oil varies widely from formation to formation but the proportion of chemical elements varies over fairly narrow limits as follows: Four different types of hydrocarbon appear in crude oil. The relative percentage of each varies from oil to oil, determining the properties of each oil. The alkanes from pentane (C5H12) to octane (C8H18) are refined into gasoline, the ones from nonane (C9H20) to hexadecane (C16H34) into diesel fuel, kerosene and jet fuel. Alkanes with more than 16 carbon atoms can be refined into fuel oil and lubricating oil. At the heavier end of the range, paraffin wax is an alkane with approximately 25 carbon atoms, while asphalt has 35 and up, although these are usually cracked in modern refineries into more valuable products. The lightest fraction, the so-called petroleum gases are subjected to diverse processing depending on cost. These gases are either flared off, sold as liquefied petroleum gas, or used to power the refinery's own burners. During the winter, butane (C4H10), is blended into the gasoline pool at high rates, because its high vapour pressure assists with cold starts. The aromatic hydrocarbons are unsaturated hydrocarbons that have one or more benzene rings. They tend to burn with a sooty flame, and many have a sweet aroma. Some are carcinogenic. These different components are separated by fractional distillation at an oil refinery to produce gasoline, jet fuel, kerosene, and other hydrocarbon fractions. The components in an oil sample can be determined by gas chromatography and mass spectrometry. Due to the large number of co-eluted hydrocarbons within oil, many cannot be resolved by traditional gas chromatography. This unresolved complex mixture (UCM) of hydrocarbons is particularly apparent when analysing weathered oils and extracts from tissues of organisms exposed to oil. Crude oil varies greatly in appearance depending on its composition. It is usually black or dark brown (although it may be yellowish, reddish, or even greenish). In the reservoir it is usually found in association with natural gas, which being lighter forms a "gas cap" over the petroleum, and saline water which, being heavier than most forms of crude oil, generally sinks beneath it. Crude oil may also be found in a semi-solid form mixed with sand and water, as in the Athabasca oil sands in Canada, where it is usually referred to as crude bitumen. In Canada, bitumen is considered a sticky, black, tar-like form of crude oil which is so thick and heavy that it must be heated or diluted before it will flow. Venezuela also has large amounts of oil in the Orinoco oil sands, although the hydrocarbons trapped in them are more fluid than in Canada and are usually called extra heavy oil. These oil sands resources are called unconventional oil to distinguish them from oil which can be extracted using traditional oil well methods. Between them, Canada and Venezuela contain an estimated of bitumen and extra-heavy oil, about twice the volume of the world's reserves of conventional oil. Formation Fossil petroleum Petroleum is a fossil fuel derived from fossilized organic materials, such as zooplankton and algae. 
Vast amounts of these remains settled to sea or lake bottoms where they were covered in stagnant water (water with no dissolved oxygen) or sediments such as mud and silt faster than they could decompose aerobically. Approximately 1 m below this sediment, water oxygen concentration was low, below 0.1 mg/L, and anoxic conditions existed. Temperatures also remained constant. As further layers settled into the sea or lake bed, intense heat and pressure built up in the lower regions. This process caused the organic matter to change, first into a waxy material known as kerogen, found in various oil shales around the world, and then with more heat into liquid and gaseous hydrocarbons via a process known as catagenesis. Formation of petroleum occurs from hydrocarbon pyrolysis in a variety of mainly endothermic reactions at high temperatures or pressures, or both. These phases are described in detail below. Anaerobic decay In the absence of plentiful oxygen, aerobic bacteria were prevented from decaying the organic matter after it was buried under a layer of sediment or water. However, anaerobic bacteria were able to reduce sulfates and nitrates among the matter to H2S and N2 respectively by using the matter as a source for other reactants. Due to such anaerobic bacteria, at first, this matter began to break apart mostly via hydrolysis: polysaccharides and proteins were hydrolyzed to simple sugars and amino acids respectively. These were further anaerobically oxidized at an accelerated rate by the enzymes of the bacteria: e.g., amino acids went through oxidative deamination to imino acids, which in turn reacted further to ammonia and α-keto acids. Monosaccharides in turn ultimately decayed to CO2 and methane. The anaerobic decay products of amino acids, monosaccharides, phenols and aldehydes combined into fulvic acids. Fats and waxes were not extensively hydrolyzed under these mild conditions. Kerogen formation Some phenolic compounds produced from previous reactions worked as bactericides and the actinomycetales order of bacteria also produced antibiotic compounds (e.g., streptomycin). Thus the action of anaerobic bacteria ceased at about 10 m below the water or sediment. The mixture at this depth contained fulvic acids, unreacted and partially reacted fats and waxes, slightly modified lignin, resins and other hydrocarbons. As more layers of organic matter settled into the sea or lake bed, intense heat and pressure built up in the lower regions. As a consequence, compounds of this mixture began to combine in poorly understood ways to kerogen. Combination happened in a similar fashion as phenol and formaldehyde molecules react to urea-formaldehyde resins, but kerogen formation occurred in a more complex manner due to a bigger variety of reactants. The total process of kerogen formation from the beginning of anaerobic decay is called diagenesis, a word that means a transformation of materials by dissolution and recombination of their constituents. Transformation of kerogen into fossil fuels Kerogen formation continued to a depth of about 1 km from the Earth's surface where temperatures may reach around 50 °C. Kerogen formation represents a halfway point between organic matter and fossil fuels: kerogen can be exposed to oxygen, oxidize and thus be lost, or it could be buried deeper inside the Earth's crust and be subjected to conditions which allow it to slowly transform into fossil fuels like petroleum. 
The latter happened through catagenesis in which the reactions were mostly radical rearrangements of kerogen. These reactions took thousands to millions of years and no external reactants were involved. Due to the radical nature of these reactions, kerogen reacted towards two classes of products: those with low H/C ratio (anthracene or products similar to it) and those with high H/C ratio (methane or products similar to it); i.e., carbon-rich or hydrogen-rich products. Because catagenesis was closed off from external reactants, the resulting composition of the fuel mixture was dependent on the composition of the kerogen via reaction stoichiometry. Three types of kerogen exist: type I (algal), II (liptinic) and III (humic), which were formed mainly from algae, plankton and woody plants (this term includes trees, shrubs and lianas) respectively. Catagenesis was pyrolytic despite the fact that it happened at relatively low temperatures (when compared to commercial pyrolysis plants) of 60 to several hundred °C. Pyrolysis was possible because of the long reaction times involved. Heat for catagenesis came from the decomposition of radioactive materials of the crust, especially 40K, 232Th, 235U and 238U. The heat varied with geothermal gradient and was typically 10–30 °C per km of depth from the Earth's surface. Unusual magma intrusions, however, could have created greater localized heating. Oil window (temperature range) Geologists often refer to the temperature range in which oil forms as an "oil window". Below the minimum temperature oil remains trapped in the form of kerogen. Above the maximum temperature the oil is converted to natural gas through the process of thermal cracking. Sometimes, oil formed at extreme depths may migrate and become trapped at a much shallower level. The Athabasca oil sands are one example of this. Abiogenic petroleum An alternative mechanism to the one described above was proposed by Russian scientists in the mid-1850s, the hypothesis of abiogenic petroleum origin (petroleum formed by inorganic means), but this is contradicted by geological and geochemical evidence. Abiogenic sources of oil have been found, but never in commercially profitable amounts. "The controversy isn't over whether abiogenic oil reserves exist," said Larry Nation of the American Association of Petroleum Geologists. "The controversy is over how much they contribute to Earth's overall reserves and how much time and effort geologists should devote to seeking them out." Reservoirs Three conditions must be present for oil reservoirs to form: A source rock rich in hydrocarbon material buried deeply enough for subterranean heat to cook it into oil, A porous and permeable reservoir rock where it can accumulate, A caprock (seal) or other mechanism to prevent the oil from escaping to the surface. Within these reservoirs, fluids will typically organize themselves like a three-layer cake with a layer of water below the oil layer and a layer of gas above it, although the different layers vary in size between reservoirs. Because most hydrocarbons are less dense than rock or water, they often migrate upward through adjacent rock layers until either reaching the surface or becoming trapped within porous rocks (known as reservoirs) by impermeable rocks above. However, the process is influenced by underground water flows, causing oil to migrate hundreds of kilometres horizontally or even short distances downward before becoming trapped in a reservoir. 
When hydrocarbons are concentrated in a trap, an oil field forms, from which the liquid can be extracted by drilling and pumping. The reactions that produce oil and natural gas are often modeled as first-order breakdown reactions, where hydrocarbons are broken down to oil and natural gas by a set of parallel reactions, and oil eventually breaks down to natural gas by another set of reactions. The latter set is regularly used in petrochemical plants and oil refineries. Petroleum has mostly been recovered by oil drilling (natural petroleum springs are rare). Drilling is carried out after studies of structural geology (at the reservoir scale), sedimentary basin analysis, and reservoir characterisation (mainly in terms of the porosity and permeability of geologic reservoir structures). Wells are drilled into oil reservoirs to extract the crude oil. "Natural lift" production methods that rely on the natural reservoir pressure to force the oil to the surface are usually sufficient for a while after reservoirs are first tapped. In some reservoirs, such as in the Middle East, the natural pressure is sufficient over a long time. The natural pressure in most reservoirs, however, eventually dissipates. Then the oil must be extracted using "artificial lift" means. Over time, these "primary" methods become less effective and "secondary" production methods may be used. A common secondary method is "waterflood" or injection of water into the reservoir to increase pressure and force the oil to the drilled shaft or "wellbore." Eventually "tertiary" or "enhanced" oil recovery methods may be used to increase the oil's flow characteristics by injecting steam, carbon dioxide and other gases or chemicals into the reservoir. In the United States, primary production methods account for less than 40 percent of the oil produced on a daily basis, secondary methods account for about half, and tertiary recovery the remaining 10 percent. Extracting oil (or "bitumen") from oil/tar sand and oil shale deposits requires mining the sand or shale and heating it in a vessel or retort, or using "in-situ" methods of injecting heated liquids into the deposit and then pumping the liquid back out saturated with oil. Unconventional oil reservoirs Oil-eating bacteria biodegrade oil that has escaped to the surface. Oil sands are reservoirs of partially biodegraded oil still in the process of escaping and being biodegraded, but they contain so much migrating oil that, although most of it has escaped, vast amounts are still present, more than can be found in conventional oil reservoirs. The lighter fractions of the crude oil are destroyed first, resulting in reservoirs containing an extremely heavy form of crude oil, called crude bitumen in Canada, or extra-heavy crude oil in Venezuela. These two countries have the world's largest deposits of oil sands. On the other hand, oil shales are source rocks that have not been exposed to heat or pressure long enough to convert their trapped hydrocarbons into crude oil. Technically speaking, oil shales are not always shales and do not contain oil, but are fine-grained sedimentary rocks containing an insoluble organic solid called kerogen. The kerogen in the rock can be converted into crude oil using heat and pressure to simulate natural processes. The method has been known for centuries and was patented in 1694 under British Crown Patent No. 330 covering "A way to extract and make great quantities of pitch, tar, and oil out of a sort of stone."
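The first-order breakdown scheme mentioned above, with kerogen converting to oil and gas in parallel and oil later cracking to gas, can be sketched numerically. The rate constants and time step below are arbitrary illustrative values, not measured geochemical parameters; this is only a minimal sketch of the kinetic model, not a reservoir or retort simulator.

```python
# Minimal sketch of the first-order breakdown scheme:
# kerogen -> oil (k1), kerogen -> gas (k2), oil -> gas (k3).
# Rate constants and time step are arbitrary illustrative values.

def breakdown(kerogen0=1.0, k1=0.03, k2=0.01, k3=0.005, dt=1.0, steps=200):
    """Integrate the coupled first-order rate equations with a simple Euler step."""
    kerogen, oil, gas = kerogen0, 0.0, 0.0
    history = []
    for step in range(steps + 1):
        history.append((step * dt, kerogen, oil, gas))
        d_kerogen = -(k1 + k2) * kerogen * dt
        d_oil = (k1 * kerogen - k3 * oil) * dt
        d_gas = (k2 * kerogen + k3 * oil) * dt
        kerogen, oil, gas = kerogen + d_kerogen, oil + d_oil, gas + d_gas
    return history

if __name__ == "__main__":
    for t, kerogen, oil, gas in breakdown()[::50]:
        print(f"t={t:5.0f}  kerogen={kerogen:.3f}  oil={oil:.3f}  gas={gas:.3f}")
```

Varying k1 relative to k2 in this sketch shifts the product mix toward oil or gas, loosely mirroring how the composition of the kerogen influences whether a source rock is oil-prone or gas-prone.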
Although oil shales are found in many countries, the United States has the world's largest deposits. Classification The petroleum industry generally classifies crude oil by the geographic location it is produced in (e.g., West Texas Intermediate, Brent, or Oman), its API gravity (an oil industry measure of density), and its sulfur content. Crude oil may be considered light if it has low density, heavy if it has high density, or medium if it has a density between that of light and heavy. Additionally, it may be referred to as sweet if it contains relatively little sulfur or sour if it contains substantial amounts of sulfur. The geographic location is important because it affects transportation costs to the refinery. Light crude oil is more desirable than heavy oil since it produces a higher yield of gasoline, while sweet oil commands a higher price than sour oil because it has fewer environmental problems and requires less refining to meet sulfur standards imposed on fuels in consuming countries. Each crude oil has unique molecular characteristics which are revealed by the use of crude oil assay analysis in petroleum laboratories. Barrels from an area in which the crude oil's molecular characteristics have been determined and the oil has been classified are used as pricing references throughout the world. Some of the common reference crudes are: West Texas Intermediate (WTI), a very high-quality, sweet, light oil delivered at Cushing, Oklahoma for North American oil Brent Blend, consisting of 15 oils from fields in the Brent and Ninian systems in the East Shetland Basin of the North Sea. The oil is landed at the Sullom Voe terminal in Shetland. Oil production from Europe, Africa and the Middle East flowing west tends to be priced off this oil, which forms a benchmark Dubai-Oman, used as a benchmark for Middle East sour crude oil flowing to the Asia-Pacific region Tapis (from Malaysia, used as a reference for light Far East oil) Minas (from Indonesia, used as a reference for heavy Far East oil) The OPEC Reference Basket, a weighted average of oil blends from various OPEC (Organization of the Petroleum Exporting Countries) countries Midway Sunset Heavy, by which heavy oil in California is priced Western Canadian Select, the benchmark crude oil for emerging heavy, high-TAN (acidic) crudes. There are declining amounts of these benchmark oils being produced each year, so other oils are more commonly what is actually delivered. While the reference price may be for West Texas Intermediate delivered at Cushing, the actual oil being traded may be a discounted Canadian heavy oil – Western Canadian Select – delivered at Hardisty, Alberta, and for a Brent Blend delivered at Shetland, it may be a discounted Russian Export Blend delivered at the port of Primorsk. Once extracted, oil is refined and separated, most easily by distillation, into numerous products for direct use or use in manufacturing, ranging from gasoline (petrol), diesel and kerosene to asphalt and chemical reagents (ethylene, propylene, butene, acrylic acid, para-xylene) used to make plastics, pesticides and pharmaceuticals. Use In terms of volume, most petroleum is converted into fuels for combustion engines. In terms of value, petroleum underpins the petrochemical industry, which includes many high-value products such as pharmaceuticals and plastics. Fuels and lubricants Petroleum is used mostly, by volume, for refining into fuel oil and gasoline, both important primary energy sources.
84% by volume of the hydrocarbons present in petroleum is converted into fuels, including gasoline, diesel, jet, heating, and other fuel oils, and liquefied petroleum gas. Due to its high energy density, easy transportability and relative abundance, oil has become the world's most important source of energy since the mid-1950s. Petroleum is also the raw material for many chemical products, including pharmaceuticals, solvents, fertilizers, pesticides, and plastics; the 16 percent not used for energy production is converted into these other materials. Petroleum is found in porous rock formations in the upper strata of some areas of the Earth's crust. There is also petroleum in oil sands (tar sands). Known oil reserves are typically estimated at 190 km3 (1.2 trillion (short scale) barrels) without oil sands, or 595 km3 (3.74 trillion barrels) with oil sands. Consumption currently amounts to about 4.9 km3 per year, yielding a remaining oil supply of only about 120 years, if current demand remains static. More recent studies, however, put the number at around 50 years. Closely related to fuels for combustion engines are lubricants, greases, and viscosity stabilizers, all of which are derived from petroleum. Chemicals Many pharmaceuticals are derived from petroleum, albeit via multistep processes. Modern medicine depends on petroleum as a source of building blocks, reagents, and solvents. Similarly, virtually all pesticides (insecticides, herbicides, etc.) are derived from petroleum. Pesticides have profoundly affected life expectancies by controlling disease vectors and by increasing yields of crops. Like pharmaceuticals, pesticides are in essence petrochemicals. Almost all plastics and synthetic polymers are derived from petroleum, which is the source of monomers. Alkenes (olefins) are one important class of these precursor molecules. Other derivatives Wax, used in the packaging of frozen foods, among others. Paraffin wax, derived from petroleum oil. Sulfur and its derivative sulfuric acid; hydrogen sulfide is a product of sulfur removal from petroleum fractions, and is oxidized to elemental sulfur and then to sulfuric acid. Bulk tar and asphalt. Petroleum coke, used in speciality carbon products or as solid fuel. Industry Transport In the 1950s, shipping costs made up 33 percent of the price of oil transported from the Persian Gulf to the United States, but due to the development of supertankers in the 1970s, the cost of shipping dropped to only 5 percent of the price of Persian oil in the US. Due to the increase in the value of crude oil during the last 30 years, the share of shipping in the final cost of the delivered commodity was less than 3% in 2010. Price Trade Crude oil is traded as futures on both the NYMEX and ICE exchanges. Futures contracts are agreements in which buyers and sellers agree to purchase and deliver specific amounts of physical crude oil on a given date in the future. A contract covers any multiple of 1000 barrels and can be purchased up to nine years into the future. Use by country Consumption statistics Consumption According to the US Energy Information Administration (EIA) estimate for 2021, the world consumes 97.26 million barrels of oil each day.
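The "about 120 years" figure quoted above is simply the static ratio of estimated reserves to annual consumption, using the volumes given earlier in this section. A minimal sketch of that arithmetic (it ignores demand growth, reserve revisions and recovery economics, so it only reproduces the quoted numbers rather than making a forecast):

```python
# Static reserves-to-consumption ratio using the figures quoted above.
# Ignores demand growth and reserve revisions; this is only the crude
# "remaining supply" arithmetic, not an actual forecast.

RESERVES_KM3 = {"without oil sands": 190.0, "with oil sands": 595.0}
CONSUMPTION_KM3_PER_YEAR = 4.9

for label, reserves in RESERVES_KM3.items():
    years = reserves / CONSUMPTION_KM3_PER_YEAR
    print(f"{label}: {reserves:.0f} km3 / {CONSUMPTION_KM3_PER_YEAR} km3 per year = {years:.0f} years")
```

With oil sands included, 595 km3 divided by 4.9 km3 per year gives roughly 121 years, matching the figure in the text; without oil sands the same arithmetic gives roughly 39 years.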
This table orders the amount of petroleum consumed in 2011 in thousand barrels (1,000 bbl) per day and in thousand cubic metres (1,000 m3) per day: Source: US Energy Information Administration Population Data: 1 peak production of oil already passed in this state 2 This country is not a major oil producer Production In petroleum industry parlance, production refers to the quantity of crude extracted from reserves, not the literal creation of the product. Exportation In order of net exports in 2011, 2009 and 2006 in thousand bbl/d and thousand m3/d: Source: US Energy Information Administration 1 peak production already passed in this state 2 Canadian statistics are complicated by the fact that it is both an importer and exporter of crude oil, and refines large amounts of oil for the U.S. market. It is the leading source of U.S. imports of oil and products, averaging in August 2007. Total world production/consumption (as of 2005) is approximately . Importation In order of net imports in 2011, 2009 and 2006 in thousand bbl/d and thousand m3/d: Source: US Energy Information Administration Non-producing consumers Countries whose oil production is 10% or less of their consumption. Source: CIA World Factbook Environmental effects Climate About a quarter of annual global greenhouse gas emissions is the carbon dioxide from burning petroleum (plus methane leaks from the industry). Along with the burning of coal, petroleum combustion is the largest contributor to the increase in atmospheric CO2. Atmospheric CO2 has risen over the last 150 years to current levels of over 415 ppmv, from the 180–300 ppmv of the prior 800 thousand years. The rise in Arctic temperature has reduced the minimum Arctic ice pack by almost half since satellite measurements started in 1979. Ocean acidification is the increase in the acidity of the Earth's oceans caused by the uptake of carbon dioxide (CO2) from the atmosphere. The saturation state of calcium carbonate decreases with the uptake of carbon dioxide in the ocean. This increase in acidity inhibits all marine life, having a greater effect on smaller organisms as well as shelled organisms (see scallops). Extraction Oil extraction is simply the removal of oil from the reservoir (oil pool). There are many methods of extracting oil from reservoirs, for example mechanical shaking, breaking of water-in-oil emulsions, and specialty chemicals called demulsifiers that separate the oil from water. Oil extraction is costly and often environmentally damaging. Offshore exploration and extraction of oil disturb the surrounding marine environment. Oil spills Crude oil and refined fuel spills from tanker ship accidents have damaged natural ecosystems and human livelihoods in Alaska, the Gulf of Mexico, the Galápagos Islands, France and many other places. The quantity of oil spilled during accidents has ranged from a few hundred tons to several hundred thousand tons (e.g., Deepwater Horizon oil spill, SS Atlantic Empress, Amoco Cadiz). Smaller spills have already proven to have a great impact on ecosystems, such as the Exxon Valdez oil spill. Oil spills at sea are generally much more damaging than those on land, since they can spread for hundreds of nautical miles in a thin oil slick which can cover beaches with a thin coating of oil. This can kill sea birds, mammals, shellfish, and other organisms it coats.
Oil spills on land are more readily containable if a makeshift earth dam can be rapidly bulldozed around the spill site before most of the oil escapes, and land animals can avoid the oil more easily. Control of oil spills is difficult, requires ad hoc methods, and often a large amount of manpower. The dropping of bombs and incendiary devices from aircraft on the wreck produced poor results; modern techniques would include pumping the oil from the wreck, as in the Prestige oil spill or the Erika oil spill. Though crude oil is predominantly composed of various hydrocarbons, certain nitrogen heterocyclic compounds, such as pyridine, picoline, and quinoline, are reported as contaminants associated with crude oil, as well as facilities processing oil shale or coal, and have also been found at legacy wood treatment sites. These compounds have a very high water solubility, and thus tend to dissolve and move with water. Certain naturally occurring bacteria, such as Micrococcus, Arthrobacter, and Rhodococcus have been shown to degrade these contaminants. Because petroleum is a naturally occurring substance, its presence in the environment does not need to be the result of human causes such as accidents and routine activities (seismic exploration, drilling, extraction, refining and combustion). Phenomena such as seeps and tar pits are examples of areas that petroleum affects without human involvement. Tarballs A tarball is a blob of crude oil (not to be confused with tar, which is a human-made product derived from pine trees or refined from petroleum) which has been weathered after floating in the ocean. Tarballs are an aquatic pollutant in most environments, although they can occur naturally, for example in the Santa Barbara Channel of California or in the Gulf of Mexico off Texas. Their concentration and features have been used to assess the extent of oil spills. Their composition can be used to identify their sources of origin, and tarballs themselves may be dispersed over long distances by deep sea currents. They are slowly decomposed by bacteria, including Chromobacterium violaceum, Cladosporium resinae, Bacillus submarinus, Micrococcus varians, Pseudomonas aeruginosa, Candida marina and Saccharomyces estuari. Whales James S. Robbins has argued that the advent of petroleum-refined kerosene saved some species of great whales from extinction by providing an inexpensive substitute for whale oil, thus eliminating the economic imperative for open-boat whaling, but others say that fossil fuels increased whaling, with most whales being killed in the 20th century. Alternatives In 2018 road transport used 49% of petroleum, aviation 8%, and uses other than energy 17%. Electric vehicles are the main alternative for road transport, and biojet fuel for aviation. Single-use plastics have a high carbon footprint and may pollute the sea, but as of 2022 the best alternatives are unclear. International relations Control of petroleum production has been a significant driver of international relations during much of the 20th and 21st centuries. Organizations like OPEC have played an outsized role in international politics. Some historians and commentators have called this the "Age of Oil". With the rise of renewable energy and efforts to address climate change, some commentators expect a realignment of international power away from petrostates. Corruption "Oil rents" have been described in the political literature as connected with corruption.
A 2011 study suggested that increases in oil rents increased corruption in countries with heavy government involvement in the production of oil. The study found that increases in oil rents "significantly deteriorates political rights". The investigators say that oil exploitation gave politicians "an incentive to extend civil liberties but reduce political rights in the presence of oil windfalls to evade redistribution and conflict". Conflict Petroleum production has been linked with conflict for many years, leading to thousands of deaths. Petroleum deposits are concentrated in relatively few countries around the world, mainly in Russia and parts of the Middle East. Conflicts may start when some countries refuse to cut oil production and other countries respond by increasing their own production, causing a trade war, as experienced during the 2020 Russia–Saudi Arabia oil price war. Other conflicts start because countries want petroleum resources, or over territory that holds them, as in the Iran–Iraq War. OPEC Future production Consumption in the twentieth and twenty-first centuries has been driven largely by automobile sector growth. The 1985–2003 oil glut even fueled the sales of low fuel economy vehicles in OECD countries. The 2008 economic crisis seems to have had some impact on the sales of such vehicles; still, in 2008 oil consumption showed a small increase. In 2016 Goldman Sachs predicted lower demand for oil due to concerns about emerging economies, especially China. The BRICS (Brazil, Russia, India, China, South Africa) countries might also kick in, as China briefly had the largest automobile market in December 2009. In the long term, uncertainties linger; OPEC believes that the OECD countries will push low-consumption policies at some point in the future; when that happens, it will curb oil sales, and both OPEC and the Energy Information Administration (EIA) kept lowering their 2020 consumption estimates during the past five years. A detailed review of International Energy Agency oil projections has revealed that revisions of world oil production, price and investments have been motivated by a combination of demand and supply factors. Altogether, non-OPEC conventional projections have been fairly stable over the last 15 years, while downward revisions were mainly allocated to OPEC. Upward revisions are primarily a result of US tight oil. Production will also face an increasingly complex situation; while OPEC countries still have large reserves at low production prices, newly found reservoirs often lead to higher prices; offshore giants such as Tupi, Guara and Tiber demand high investments and ever-increasing technological abilities. Subsalt reservoirs such as Tupi were unknown in the twentieth century, mainly because the industry was unable to probe them. Enhanced Oil Recovery (EOR) techniques (example: DaQing, China) will continue to play a major role in increasing the world's recoverable oil. The expected availability of petroleum resources has always been around 35 years or even less since the start of modern exploration. The oil constant, an insider pun in the German industry, refers to that effect. A growing number of divestment campaigns from major funds, pushed by newer generations who question the sustainability of petroleum, may hinder the financing of future oil prospecting and production.
Peak oil Peak oil is a term applied to the projection that future petroleum production, whether for individual oil wells, entire oil fields, whole countries, or worldwide production, will eventually peak and then decline at a similar rate to the rate of increase before the peak as these reserves are exhausted. The peak of oil discoveries was in 1965, and oil production per year has surpassed oil discoveries every year since 1980. However, this does not mean that potential oil production has surpassed oil demand. It is difficult to predict the oil peak in any given region, due to the lack of knowledge and/or transparency in the accounting of global oil reserves. Based on available production data, proponents have previously predicted the peak for the world to be in the years 1989, 1995, or 1995–2000. Some of these predictions date from before the recession of the early 1980s, and the consequent lowering in global consumption, the effect of which was to delay the date of any peak by several years. Just as the 1971 U.S. peak in oil production was only clearly recognized after the fact, a peak in world production will be difficult to discern until production clearly drops off. In 2020, according to BP's Energy Outlook 2020, peak oil had been reached, due to the changing energy landscape coupled with the economic toll of the COVID-19 pandemic. While there has been much focus historically on peak oil supply, the focus is increasingly shifting to peak demand as more countries seek to transition to renewable energy. The GeGaLo index of geopolitical gains and losses assesses how the geopolitical position of 156 countries may change if the world fully transitions to renewable energy resources. Former oil exporters are expected to lose power, while the positions of former oil importers and countries rich in renewable energy resources are expected to strengthen. Unconventional oil Unconventional oil is petroleum produced or extracted using techniques other than the conventional methods. The calculus for peak oil has changed with the introduction of unconventional production methods. In particular, the combination of horizontal drilling and hydraulic fracturing has resulted in a significant increase in production from previously uneconomic plays. Certain rock strata contain hydrocarbons but have low permeability and are not thick from a vertical perspective. Conventional vertical wells would be unable to economically retrieve these hydrocarbons. Horizontal drilling, extending horizontally through the strata, permits the well to access a much greater volume of the strata. Hydraulic fracturing creates greater permeability and increases hydrocarbon flow to the wellbore. Hydrocarbons on other worlds On Saturn's largest moon, Titan, lakes of liquid hydrocarbons comprising methane, ethane, propane and other constituents occur naturally. Data collected by the space probe Cassini–Huygens yield an estimate that the visible lakes and seas of Titan contain about 300 times the volume of Earth's proven oil reserves. Drilled samples taken from the surface of Mars in 2015 by the Curiosity rover's Mars Science Laboratory were found to contain organic molecules of benzene and propane in 3-billion-year-old rock in Gale Crater. In fiction
Technology
Energy
null
23204
https://en.wikipedia.org/wiki/Physical%20quantity
Physical quantity
A physical quantity (or simply quantity) is a property of a material or system that can be quantified by measurement. A physical quantity can be expressed as a value, which is the algebraic multiplication of a numerical value and a unit of measurement. For example, the physical quantity mass, symbol m, can be quantified as m = n kg, where n is the numerical value and kg is the unit symbol (for kilogram). Quantities that are vectors have, besides numerical value and unit, direction or orientation in space. Components Following ISO 80000-1, any value or magnitude of a physical quantity is expressed as a comparison to a unit of that quantity. The value of a physical quantity Z is expressed as the product of a numerical value {Z} (a pure number) and a unit [Z]: Z = {Z} × [Z]. For example, let Z be "2 metres"; then {Z} = 2 is the numerical value and [Z] = metre is the unit. Conversely, the numerical value expressed in an arbitrary unit can be obtained as {Z} = Z / [Z]. The multiplication sign is usually left out, just as it is left out between variables in the scientific notation of formulas. The convention used to express quantities is referred to as quantity calculus. In formulas, the unit [Z] can be treated as if it were a specific magnitude of a kind of physical dimension: see Dimensional analysis for more on this treatment. Symbols and nomenclature International recommendations for the use of symbols for quantities are set out in ISO/IEC 80000, the IUPAP red book and the IUPAC green book. For example, the recommended symbol for the physical quantity "mass" is m, and the recommended symbol for the quantity "electric charge" is Q. Typography Physical quantities are normally typeset in italics. Purely numerical quantities, even those denoted by letters, are usually printed in roman (upright) type, though sometimes in italics. Symbols for elementary functions (circular trigonometric, hyperbolic, logarithmic etc.), changes in a quantity like Δ in Δy or operators like d in dx, are also recommended to be printed in roman type. Examples: real numbers, such as 1; e, the base of natural logarithms; i, the imaginary unit; π, the ratio of a circle's circumference to its diameter, 3.14159265...; δx, Δy, dz, representing differences (finite or otherwise) in the quantities x, y and z; sin α, sinh γ, log x. Support Scalars A scalar is a physical quantity that has magnitude but no direction. Symbols for physical quantities are usually chosen to be a single letter of the Latin or Greek alphabet, and are printed in italic type. Vectors Vectors are physical quantities that possess both magnitude and direction and whose operations obey the axioms of a vector space. Symbols for physical quantities that are vectors are in bold type, underlined or with an arrow above. For example, if u is the speed of a particle, then the straightforward notations for its velocity are u in bold, u underlined, or u with an arrow above. Tensors Scalar and vector quantities are the simplest tensor quantities; more general tensors can be used to describe more general physical properties. For example, the Cauchy stress tensor possesses magnitude, direction, and orientation qualities. Dimensions, units, and kind Dimensions The notion of dimension of a physical quantity was introduced by Joseph Fourier in 1822. By convention, physical quantities are organized in a dimensional system built upon base quantities, each of which is regarded as having its own dimension. Unit There is often a choice of unit, though SI units are usually used in scientific contexts due to their ease of use, international familiarity and prescription.
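A minimal sketch of the quantity-calculus convention described above, in which a value is the product of a numerical value and a unit, and the numerical value in another unit is obtained by dividing by that unit. The small Quantity class and the table of length units are illustrative only, not part of any standard library; the conversion factors are the usual SI and imperial definitions.

```python
# Sketch of quantity calculus: Z = {Z} [Z], and {Z} = Z / [Z] for any chosen unit.
# The class and unit table are illustrative, not a standard library.

UNIT_IN_METRES = {"m": 1.0, "cm": 0.01, "ft": 0.3048}  # factors to the SI unit

class Quantity:
    def __init__(self, numerical_value, unit):
        self.numerical_value = numerical_value  # {Z}, a pure number
        self.unit = unit                        # [Z], the unit symbol

    def to(self, other_unit):
        """Re-express the same value with a different unit [Z]."""
        in_metres = self.numerical_value * UNIT_IN_METRES[self.unit]
        return Quantity(in_metres / UNIT_IN_METRES[other_unit], other_unit)

    def __repr__(self):
        return f"{self.numerical_value:g} {self.unit}"

Z = Quantity(2, "m")                          # Z = {Z}[Z] with {Z} = 2 and [Z] = metre
print(Z, "=", Z.to("cm"), "=", Z.to("ft"))    # 2 m = 200 cm = 6.56168 ft
```

The physical quantity itself is unchanged by the conversion; only the split between numerical value and unit differs, which is exactly the point of expressing a value as a number multiplied by a unit.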
For example, a quantity of mass might be represented by the symbol m, and could be expressed in the units kilograms (kg), pounds (lb), or daltons (Da). Kind Dimensional homogeneity is not necessarily sufficient for quantities to be comparable; for example, both kinematic viscosity and thermal diffusivity have dimension of square length per time (in units of m2/s). Quantities of the same kind share extra commonalities beyond their dimension and units allowing their comparison; for example, not all dimensionless quantities are of the same kind. Base and derived quantities Base quantities A system of quantities relates physical quantities, and due to this dependence, a limited number of quantities can serve as a basis in terms of which the dimensions of all the remaining quantities of the system can be defined. A set of mutually independent quantities may be chosen by convention to act as such a set, and these are called base quantities. The seven base quantities of the International System of Quantities (ISQ) and their corresponding SI units and dimensions are listed in the following table. Other conventions may have a different number of base units (e.g. the CGS and MKS systems of units). The angular quantities, plane angle and solid angle, are defined as derived dimensionless quantities in the SI. For some relations, their units radian and steradian can be written explicitly to emphasize the fact that the quantity involves plane or solid angles. General derived quantities Derived quantities are those whose definitions are based on other physical quantities (base quantities). Space Important applied base units for space and time are below. Area and volume are thus, of course, derived from length, but are included for completeness as they occur frequently in many derived quantities, in particular densities. Densities, flows, gradients, and moments Important and convenient derived quantities such as densities, fluxes, flows and currents are associated with many quantities. Sometimes different terms such as current density and flux density, rate, frequency and current, are used interchangeably in the same context; sometimes they are used uniquely. To clarify these effective template-derived quantities, we use q to stand for any quantity within some scope of context (not necessarily base quantities) and present in the table below some of the most commonly used symbols where applicable, their definitions, usage, SI units and SI dimensions – where [q] denotes the dimension of q. For time derivatives, specific, molar, and flux densities of quantities, there is no one symbol; nomenclature depends on the subject, though time derivatives can be generally written using overdot notation. For generality we use qm, qn, and F respectively. No symbol is necessarily required for the gradient of a scalar field, since only the nabla/del operator ∇ or grad needs to be written. For spatial density, current, current density and flux, the notations are common from one context to another, differing only by a change in subscripts. For current density, a unit vector in the direction of flow, i.e. tangent to a flowline, appears in the definition. Notice the dot product with the unit normal for a surface, since the amount of current passing through the surface is reduced when the current is not normal to the area. Only the current passing perpendicular to the surface contributes to the current passing through the surface; no current passes in the (tangential) plane of the surface. The calculus notations below can be used synonymously.
If X is an n-variable function X(x1, x2, ..., xn), then: Differential: the differential n-space volume element is dnr = dx1 dx2 ... dxn. Integral: the multiple integral of X over the n-space volume is ∫ X dnr = ∫ ... ∫ X(x1, x2, ..., xn) dx1 dx2 ... dxn.
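The remark above that only the component of the current density normal to a surface contributes to the current through it can be illustrated numerically: the contribution of a surface element is j · n dA, so a purely tangential current density contributes nothing. The vectors and the element area in the sketch below are arbitrary illustrative values.

```python
# Sketch of the surface-flux statement above: the current through a surface
# element is j . n dA, so only the normal component of the current density counts.
# Vectors and the element area are arbitrary illustrative values.

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

n = (0.0, 0.0, 1.0)          # unit normal of a small flat surface element
dA = 1e-4                    # element area in m2

cases = {
    "normal":     (0.0, 0.0, 5.0),   # current density along the normal (A/m2)
    "tangential": (5.0, 0.0, 0.0),   # current density in the plane of the surface
    "oblique":    (3.0, 0.0, 4.0),   # mixed case: only the z-component contributes
}

for label, j in cases.items():
    print(f"{label:10s}: dI = j.n dA = {dot(j, n) * dA:.1e} A")
```

The tangential case prints zero and the oblique case contributes only its normal component, which is the reduction described in the text when the current is not normal to the area.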
Physical sciences
Physics basics: General
Physics
23205
https://en.wikipedia.org/wiki/Physical%20constant
Physical constant
A physical constant, sometimes fundamental physical constant or universal constant, is a physical quantity that cannot be explained by a theory and therefore must be measured experimentally. It is distinct from a mathematical constant, which has a fixed numerical value, but does not directly involve any physical measurement. There are many physical constants in science, some of the most widely recognized being the speed of light in vacuum c, the gravitational constant G, the Planck constant h, the electric constant ε0, and the elementary charge e. Physical constants can take many dimensional forms: the speed of light signifies a maximum speed for any object and its dimension is length divided by time, while the proton-to-electron mass ratio is dimensionless. The term "fundamental physical constant" is sometimes used to refer to universal-but-dimensioned physical constants such as those mentioned above. Increasingly, however, physicists reserve the expression for the narrower case of dimensionless universal physical constants, such as the fine-structure constant α, which characterizes the strength of the electromagnetic interaction. Physical constants, as discussed here, should not be confused with empirical constants, which are coefficients or parameters assumed to be constant in a given context without being fundamental. Examples include the characteristic time, characteristic length, or characteristic number (dimensionless) of a given system, or material constants (e.g., Madelung constant, electrical resistivity, and heat capacity) of a particular material or substance. Characteristics Physical constants are parameters in a physical theory that cannot be explained by that theory. This may be due to the apparent fundamental nature of the constant or due to limitations in the theory. Consequently, physical constants must be measured experimentally. The set of parameters considered physical constants changes as physical models change, and how fundamental they appear can change. For example, c, the speed of light, was originally considered a property of light, a specific system. The discovery and verification of Maxwell's equations connected the same quantity with an entire system, electromagnetism. When the theory of special relativity emerged, the quantity came to be understood as the basis of causality. The speed of light is so fundamental it now defines the international unit of length. Relationship to units Numerical values Whereas the physical quantity indicated by a physical constant does not depend on the unit system used to express the quantity, the numerical values of dimensional physical constants do depend on choice of unit system. The term "physical constant" refers to the physical quantity, and not to the numerical value within any given system of units. For example, the speed of light is defined as having the numerical value of 299,792,458 when expressed in the SI unit metres per second, and as having the numerical value of 1 when expressed in the natural units Planck length per Planck time. While its numerical value can be defined at will by the choice of units, the speed of light itself is a single physical constant. International System of Units Since the 2019 revision, all of the units in the International System of Units have been defined in terms of fixed natural phenomena, including three fundamental constants: the speed of light in vacuum, c; the Planck constant, h; and the elementary charge, e.
As a result of the new definitions, an SI unit like the kilogram can be written in terms of fundamental constants and one experimentally measured constant, ΔνCs: 1 kg = (299792458)2/(6.62607015 × 10−34 × 9192631770) hΔνCs/c2. Natural units It is possible to combine dimensional universal physical constants to define fixed quantities of any desired dimension, and this property has been used to construct various systems of natural units of measurement. Depending on the choice and arrangement of constants used, the resulting natural units may be convenient to an area of study. For example, Planck units, constructed from c, G, ħ, and kB, give conveniently sized measurement units for use in studies of quantum gravity, and atomic units, constructed from ħ, me, e and 4πε0, give convenient units in atomic physics. The choice of constants used leads to widely varying quantities. Number of fundamental constants The number of fundamental physical constants depends on the physical theory accepted as "fundamental". Currently, this is the theory of general relativity for gravitation and the Standard Model for electromagnetic, weak and strong nuclear interactions and the matter fields. Between them, these theories account for a total of 19 independent fundamental constants. There is, however, no single "correct" way of enumerating them, as it is a matter of arbitrary choice which quantities are considered "fundamental" and which "derived". Uzan lists 22 "fundamental constants of our standard model" as follows: the gravitational constant G, the speed of light c, the Planck constant h, the 9 Yukawa couplings for the quarks and leptons (equivalent to specifying the rest mass of these elementary particles), 2 parameters of the Higgs field potential, 4 parameters for the quark mixing matrix, 3 coupling constants for the gauge groups SU(3) × SU(2) × U(1) (or equivalently, two coupling constants and the Weinberg angle), a phase for the quantum chromodynamics vacuum. The number of 19 independent fundamental physical constants is subject to change under possible extensions of the Standard Model, notably by the introduction of neutrino mass (equivalent to seven additional constants, i.e. 3 Yukawa couplings and 4 lepton mixing parameters). The discovery of variability in any of these constants would be equivalent to the discovery of "new physics". The question as to which constants are "fundamental" is neither straightforward nor meaningless, but a question of interpretation of the physical theory regarded as fundamental; as has been pointed out, not all physical constants are of the same importance, with some having a deeper role than others. One proposed classification scheme distinguishes three types of constants: A: physical properties of particular objects; B: characteristics of a class of physical phenomena; C: universal constants. The same physical constant may move from one category to another as the understanding of its role deepens; this has notably happened to the speed of light, which was a class A constant (characteristic of light) when it was first measured, but became a class B constant (characteristic of electromagnetic phenomena) with the development of classical electromagnetism, and finally a class C constant with the discovery of special relativity. Tests on time-independence By definition, fundamental physical constants are subject to measurement, so that their being constant (independent of both the time and position of the performance of the measurement) is necessarily an experimental result and subject to verification.
Paul Dirac in 1937 speculated that physical constants such as the gravitational constant or the fine-structure constant might be subject to change over time in proportion to the age of the universe. Experiments can in principle only put an upper bound on the relative change per year. For the fine-structure constant, this upper bound is comparatively low, at roughly 10−17 per year (as of 2008). The gravitational constant is much more difficult to measure with precision, and conflicting measurements in the 2000s inspired the controversial suggestion, in a 2015 paper, of a periodic variation of its value. However, while its value is not known to great precision, the possibility of observing type Ia supernovae which happened in the universe's remote past, paired with the assumption that the physics involved in these events is universal, allows for an upper bound of less than 10−10 per year for the gravitational constant over the last nine billion years. Similarly, an upper bound of the change in the proton-to-electron mass ratio has been placed at 10−7 over a period of 7 billion years (or 10−16 per year) in a 2012 study based on the observation of methanol in a distant galaxy. It is problematic to discuss the proposed rate of change (or lack thereof) of a single dimensional physical constant in isolation. The reason for this is that the choice of units is arbitrary, making the question of whether a constant is undergoing change an artefact of the choice (and definition) of the units. For example, in SI units, the speed of light was given a defined value in 1983. Thus, it was meaningful to experimentally measure the speed of light in SI units prior to 1983, but it is not so now. Similarly, with effect from May 2019, the Planck constant has a defined value, such that all SI base units are now defined in terms of fundamental physical constants. With this change, the international prototype of the kilogram is being retired as the last physical object used in the definition of any SI unit. Tests on the immutability of physical constants look at dimensionless quantities, i.e. ratios between quantities of like dimensions, in order to escape this problem. Changes in physical constants are not meaningful if they result in an observationally indistinguishable universe. For example, a "change" in the speed of light c would be meaningless if accompanied by a corresponding change in the elementary charge e so that the expression e2/(4πε0ħc) (the fine-structure constant) remained unchanged. Dimensionless physical constants Any ratio between physical constants of the same dimensions results in a dimensionless physical constant, for example, the proton-to-electron mass ratio. The fine-structure constant α is the best known dimensionless fundamental physical constant. It is the value of the elementary charge squared expressed in Planck units. This value has become a standard example when discussing the derivability or non-derivability of physical constants. Introduced by Arnold Sommerfeld, its value and uncertainty as determined at the time were consistent with 1/137. This motivated Arthur Eddington (1929) to construct an argument why its value might be 1/137 precisely, which related to the Eddington number, his estimate of the number of protons in the Universe. By the 1940s, it became clear that the value of the fine-structure constant deviates significantly from the precise value of 1/137, refuting Eddington's argument.
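The fine-structure constant just discussed can be computed directly as the dimensionless combination α = e2/(4πε0ħc). A minimal sketch using the exact values of e, h and c fixed by the 2019 SI redefinition and the CODATA 2018 value of the electric constant:

```python
# Sketch: the fine-structure constant as the dimensionless combination
# alpha = e^2 / (4 * pi * eps0 * hbar * c).
# e, h and c are the exact values fixed by the 2019 SI redefinition;
# eps0 is the CODATA 2018 measured value.

import math

e    = 1.602176634e-19       # elementary charge, C (exact)
h    = 6.62607015e-34        # Planck constant, J s (exact)
c    = 299792458.0           # speed of light, m/s (exact)
eps0 = 8.8541878128e-12      # electric constant, F/m (measured)

hbar = h / (2 * math.pi)
alpha = e**2 / (4 * math.pi * eps0 * hbar * c)

print(f"alpha   = {alpha:.10f}")     # about 0.0072973526
print(f"1/alpha = {1 / alpha:.6f}")  # about 137.035999
```

Because every factor of coulombs, joules, metres and seconds cancels, the same number is obtained in any consistent unit system, which is why dimensionless combinations like this one are the quantities whose constancy can be tested meaningfully.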
Fine-tuned universe Some physicists have explored the notion that if the dimensionless physical constants had sufficiently different values, our Universe would be so radically different that intelligent life would probably not have emerged, and that our Universe therefore seems to be fine-tuned for intelligent life. The anthropic principle states a logical truism: the fact of our existence as intelligent beings who can measure physical constants requires those constants to be such that beings like us can exist. There are a variety of interpretations of the constants' values, including that of a divine creator (the apparent fine-tuning is actual and intentional), or that the universe is one universe of many in a multiverse (e.g. the many-worlds interpretation of quantum mechanics), or even that, if information is an innate property of the universe and logically inseparable from consciousness, a universe without the capacity for conscious beings cannot exist. Table of physical constants The table below lists some frequently used constants and their CODATA recommended values. For a more extended list, refer to List of physical constants.
Physical sciences
Physical constants
Physics
23206
https://en.wikipedia.org/wiki/Parsley
Parsley
Parsley, or garden parsley (Petroselinum crispum), is a species of flowering plant in the family Apiaceae that is native to Greece, Morocco and the former Yugoslavia. It has been introduced and naturalized in Europe and elsewhere in the world with suitable climates, and is widely cultivated as an herb and a vegetable. It is believed to have been originally grown in Sardinia, and was cultivated in around the 3rd century BC. Linnaeus stated its wild habitat to be Sardinia, whence it was brought to England and apparently first cultivated in Britain in 1548, though literary evidence suggests parsley was used in England in the Middle Ages as early as the Anglo-Saxon period. Parsley is widely used in European, Middle Eastern, and American cuisine. Curly-leaf parsley is often used as a garnish. In central Europe, eastern Europe, and southern Europe, as well as in western Asia, many dishes are served with fresh green chopped parsley sprinkled on top. Flat-leaf parsley is similar, but is often preferred by chefs because it has a stronger flavor. Root parsley is very common in central, eastern, and southern European cuisines, where it is eaten as a snack, or as a vegetable in many soups, stews, and casseroles. Etymology The word "parsley" is a merger of Old English (which is identical to the contemporary German word for parsley: ) and the Old French . Both of these names are derived from Medieval Latin , from Latin , which is the latinization of the Greek , from and . Mycenaean Greek se-ri-no, in Linear B, is the earliest attested form of the word selinon. Description Garden parsley is a bright green, biennial plant in temperate climates, or an annual herb in subtropical and tropical areas. Where it grows as a biennial, in the first year, it forms a rosette of tripinnate leaves 10–25 cm long with numerous 1–3 cm leaflets, and a taproot used as a food store over the winter. In the second year, it grows a flowering stem to tall with sparser leaves and flat-topped 3–10 cm diameter umbels with numerous 2 mm diameter yellow to yellowish-green flowers. The seeds are ovoid, 2–3 mm long, with prominent style remnants at the apex. One of the compounds of the essential oil is apiole. The plant normally dies after seed maturation. Uses Culinary Parsley is widely used in Middle Eastern, Mediterranean, Brazilian, and American cuisine. Curly leaf parsley is used often as a garnish. Green parsley is used frequently as a garnish on potato dishes (boiled or mashed potatoes), on rice dishes (risotto or pilaf), on fish, fried chicken, lamb, goose, and steaks, as well as in meat or vegetable stews (including shrimp creole, beef bourguignon, goulash, or chicken paprikash). Parsley seeds are also used in cooking, imparting a stronger parsley flavor than leaves. Parsley, when consumed, is credited with neutralising odours associated with garlic in cooking. In central Europe, eastern Europe, and southern Europe, as well as in western Asia, many dishes are served with fresh green, chopped parsley sprinkled on top. In southern and central Europe, parsley is part of bouquet garni, a bundle of fresh herbs used as an ingredient in stocks, soups, and sauces. Freshly chopped green parsley is used as a topping for soups such as chicken soup, green salads, or salads such as salade Olivier, and on open sandwiches with cold cuts or pâtés. Persillade is a mixture of chopped garlic and chopped parsley in French cuisine. 
Parsley is the main ingredient in Italian salsa verde, which is a mixed condiment of parsley, capers, anchovies, garlic, and sometimes bread, soaked in vinegar. It is an Italian custom to serve it with bollito misto or fish. Gremolata, a mixture of parsley, garlic, and lemon zest, is a traditional accompaniment to the Italian veal stew, ossobuco alla milanese. Root parsley is very common in Central, Eastern, and Southern European cuisines, where it is used as a snack or a vegetable in many soups, stews, and casseroles, and as an ingredient for broth. In Brazil, freshly chopped parsley () and freshly chopped scallion () are the main ingredients in the herb seasoning called (literally "green aroma"), which is used as a key seasoning for major Brazilian dishes, including meat, chicken, fish, rice, beans, stews, soups, vegetables, salads, condiments, sauces, and stocks. It is sold in food markets as a bundle of both types of fresh herbs. In some Brazilian regions, chopped parsley may be replaced by chopped coriander (also called cilantro) in the mixture. Parsley is a key ingredient in several Middle Eastern salads such as Lebanese tabbouleh; it is also often mixed in with the chickpeas and/or fava beans while making falafel (which gives the inside of the falafel its green color). It is also a main component of the Iranian stew ghormeh sabzi. Parsley is a component of a standard Seder plate arrangement; it is eaten to symbolize the flourishing of the Jews after first arriving in Egypt. Composition Nutritional content Parsley is a source of flavonoids and antioxidants, especially luteolin, apigenin, folate, vitamin K, vitamin C, and vitamin A. Half a tablespoon (a gram) of dried parsley contains about 6.0 μg of lycopene and 10.7 μg of alpha carotene as well as 82.9 μg of lutein+zeaxanthin and 80.7 μg of beta carotene. Dried parsley can contain about 45 mg/gram apigenin. The apigenin content of fresh parsley is reportedly 215.5 mg/100 grams, which is much higher than the next highest food source, green celery hearts, which provide 19.1 mg/100 grams. Parsley essential oil is high in myristicin. Precautions Excessive consumption of parsley should be avoided by pregnant women. Normal food quantities are safe for pregnant women, but consuming excessively large amounts may have uterotonic effects. Cultivation Parsley grows best in moist, well-drained soil, with full sun. It grows best between , and usually is grown from seed. Germination is slow, taking four to six weeks, and it often is difficult because of furanocoumarins in its seed coat. Typically, plants grown for the leaf crop are spaced 10 cm apart, while those grown as a root crop are spaced 20 cm apart to allow for root development. Parsley attracts several species of wildlife. Some swallowtail butterflies use parsley as a host plant for their larvae; their caterpillars are black and green striped with yellow dots, and will feed on parsley for two weeks before turning into butterflies. Bees and other nectar-feeding insects also visit the flowers. Cultivars Parsley is subdivided into several cultivar groups. Often these are treated as botanical varieties, despite being cultivated selections, not of natural botanical origin. Leaf parsley The two main groups of parsley used as herbs are French, or curly leaf (P. crispum Crispum Group; syn. P. crispum var. crispum); and Italian, or flat leaf (P. crispum Neapolitanum Group; syn. P. crispum var. neapolitanum).
Flat-leaved parsley is preferred by some gardeners as it is easier to cultivate, being more tolerant of both rain and sunshine, and is said to have a stronger flavor—although this is disputed—while curly leaf parsley is preferred by others because of its more decorative appearance in garnishing. A third type, sometimes grown in southern Italy, has thick leaf stems resembling celery. Root parsley Another type of parsley is grown as a root vegetable, the Hamburg root parsley (P. crispum Radicosum Group, syn. P. crispum var. tuberosum). This type of parsley produces much thicker roots than types cultivated for their leaves. Although seldom used in Britain and the United States, root parsley is common in central and eastern European cuisine, where it is used in soups and stews, or simply eaten raw, as a snack (similar to carrots). Although root parsley looks similar to the parsnip, which is among its closest relatives in the family Apiaceae, its taste is quite different. Gallery
Biology and health sciences
Apiales
null
23209
https://en.wikipedia.org/wiki/Peppermint
Peppermint
Peppermint (Mentha × piperita) is a hybrid species of mint, a cross between watermint and spearmint. Indigenous to Europe and the Middle East, the plant is now widely spread and cultivated in many regions of the world. It is occasionally found in the wild with its parent species. Although the genus Mentha comprises more than 25 species, the one in most common use is peppermint. While Western peppermint is derived from Mentha × piperita, Chinese peppermint, or bohe, is derived from the fresh leaves of M. haplocalyx. M. × piperita and M. haplocalyx are both recognized as plant sources of menthol and menthone, and are among the oldest herbs used for both culinary and medicinal products. Botany Peppermint was first identified in Hertfordshire, England, by a Dr. Eales, a discovery which John Ray published in 1696 in the second edition of his book Synopsis Methodica Stirpium Britannicarum. He initially gave it the name Mentha spicis brevioribus et habitioribus, foliis Mentha fusca, sapore fervido piperis, and later in his 1704 volume Historia Plantarum he called it Mentha palustris or Peper–Mint. The plant was then added to the London Pharmacopoeia under the name Mentha piperitis sapore in 1721. It was given the name Mentha piperita in 1753 by Carl Linnaeus in his Species Plantarum Volume 2. Linnaeus treated peppermint as a species, but it is now universally agreed to be a hybrid between Mentha viridis and Mentha aquatica, with Mentha viridis itself also being a hybrid between Mentha sylvestris and Mentha rotundifolia. Peppermint is an herbaceous, rhizomatous, perennial plant that grows to be tall, with smooth stems, square in cross section. The rhizomes are wide-spreading and fleshy, and bear fibrous roots. The leaves can be long and broad. They are dark green with reddish veins, with an acute apex and coarsely toothed margins. The leaves and stems are usually slightly fuzzy. The flowers are purple, long, with a four-lobed corolla about diameter; they are produced in whorls (verticillasters) around the stem, forming thick, blunt spikes. Flowering season lasts from mid- to late summer. The chromosome number is variable, with 2n counts of 66, 72, 84, and 120 recorded. Peppermint is a fast-growing plant, spreading quickly once it has sprouted. Ecology Peppermint typically occurs in moist habitats, including stream sides and drainage ditches. Being a hybrid, it is usually sterile, producing no seeds and reproducing only vegetatively, spreading by its runners. Outside of its native range, areas where peppermint was formerly grown for oil often have an abundance of feral plants, and it is considered invasive in Australia, the Galápagos Islands, New Zealand, and the United States in the Great Lakes region, noted since 1843. Cultivation Peppermint generally grows best in moist, shaded locations, and expands by underground rhizomes. Young shoots are taken from old stocks and dibbled into the ground about 0.5 m (1.5 ft) apart. They grow quickly and cover the ground with runners if it is permanently moist. For the home gardener, it is often grown in containers to restrict rapid spreading. It grows best with a good supply of water, without being water-logged, and planted in areas with partial sun to shade. The leaves and flowering tops are used; they are collected as soon as the flowers begin to open and can be dried. The wild form of the plant is less suitable for this purpose, with cultivated plants having been selected for more and better oil content.
They may be allowed to lie and wilt a little before distillation, or they may be taken directly to the still. Cultivars Several cultivars have been selected for garden use: Mentha × piperita 'Candymint' has reddish stems. Mentha × piperita 'Chocolate Mint'. Its flowers open from the bottom up; its flavour is reminiscent of the flavour in Andes Chocolate Mints, a popular confection. Mentha × piperita 'Citrata' includes a number of varieties including Eau de Cologne mint, grapefruit mint, lemon mint, and orange mint. Its leaves are aromatic and hairless. Mentha × piperita 'Crispa' has wrinkled leaves. Mentha × piperita 'Lavender Mint' Mentha × piperita 'Lime Mint' has lime-scented foliage. Mentha × piperita 'Variegata' has mottled green and pale yellow leaves. Commercial cultivars may include: Dulgo pole Zefir Bulgarian population #2 Clone 11-6-22 Clone 80-121-33 Mitcham Digne 38 Mitcham Ribecourt 19 'Todd's Mitcham', a verticillium wilt-resistant cultivar produced from a breeding and test program of atomic gardening at Brookhaven National Laboratory from the mid-1950s 'Refined Murray', also verticillium-resistant 'Roberts Mitcham', also verticillium-resistant and also the product of mutation breeding Diseases Verticillium wilt is a major constraint in peppermint cultivation. 'Todd's Mitcham', 'Refined Murray', 'Roberts Mitcham' (see above), and a few other cultivars have some degree of resistance. Production In 2022, world production of peppermint was 51,081 tonnes, led by Morocco with 84% of the total and Argentina with 14% (table). In the United States, Oregon and Washington produce most of the country's peppermint, the leaves of which are processed for the essential oil to produce flavorings mainly for chewing gum and toothpaste. Chemical constituents Peppermint has a high menthol content. The essential oil also contains menthone and carboxyl esters, particularly menthyl acetate. Dried peppermint typically has 0.3–0.4% of volatile oil containing menthol (7–48%), menthone (20–46%), menthyl acetate (3–10%), menthofuran (1–17%), and 1,8-cineol (3–6%). Peppermint oil also contains small amounts of many additional compounds, including limonene, pulegone, caryophyllene, and pinene. Peppermint contains terpenoids and flavonoids such as eriocitrin, hesperidin, and kaempferol 7-O-rutinoside. Oil Peppermint oil has a high concentration of natural pesticides, mainly pulegone (found mainly in M. arvensis var. piperascens (cornmint, field mint, or Japanese mint), and to a lesser extent (6,530 ppm) in Mentha × piperita subsp. notho) and menthone. It is known to repel some pest insects, including mosquitos, and has uses in organic gardening. It is also widely used to repel rodents. The chemical composition of the essential oil from peppermint (Mentha × piperita L.) was analyzed by GC/FID and GC-MS. The main constituents were menthol (40.7%) and menthone (23.4%). Further components were (±)-menthyl acetate, 1,8-cineole, limonene, beta-pinene, and beta-caryophyllene. Research and health effects Peppermint oil is under preliminary research for its potential as a short-term treatment for irritable bowel syndrome, and has supposed uses in traditional medicine for minor ailments. Peppermint oil and leaves have a cooling effect when used topically for muscle pain, nerve pain, relief from itching, or as a fragrance. High oral doses of peppermint oil (500 mg) can cause mucosal irritation and mimic heartburn. 
Peppermint roots bioaccumulate radium, so the plant may be effective for phytoremediation of radioactively contaminated soil. Culinary and other uses Fresh or dried peppermint leaves are often used alone in peppermint tea or with other herbs in herbal teas (tisanes, infusions). Peppermint is used for flavouring ice cream, candy, fruit preserves, alcoholic beverages, chewing gum, toothpaste, and some shampoos, soaps, and skin care products. Menthol activates cold-sensitive TRPM8 receptors in the skin and mucosal tissues, and is the primary source of the cooling sensation that follows the topical application of peppermint oil. Peppermint oil is also used in construction and plumbing to test for the tightness of pipes and disclose leaks by its odor. Safety Medicinal uses of peppermint have not been approved as effective or safe by the US Food and Drug Administration. With caution that the concentration of the peppermint constituent pulegone should not exceed 1% (140 mg), peppermint preparations are considered safe by the European Medicines Agency when used in topical formulations for adult subjects. Diluted peppermint essential oil is safe for oral intake when only a few drops are used. Although peppermint is commonly available as a herbal supplement, no established, consistent manufacturing standards exist for it, and some peppermint products may be contaminated with toxic metals or other substituted compounds. Skin rashes, irritation, or allergic reactions may result from applying peppermint oil to the skin, and its use on the face or chest of young children may cause side effects if the oil menthol is inhaled. A common side effect from oral intake of peppermint oil or capsules is heartburn. Oral use of peppermint products may have adverse effects when used with iron supplements, cyclosporine, medicines for heart conditions or high blood pressure, or medicines to decrease stomach acid. Standardization ISO 676:1995—contains the information about the nomenclature of the variety and cultivars ISO 5563:1984—a specification for its dried leaves of Mentha piperita Linnaeus Peppermint oil—ISO 856:2006
Biology and health sciences
Herbs and spices
Plants
23212
https://en.wikipedia.org/wiki/Poales
Poales
The Poales are a large order of flowering plants in the monocotyledons, and include families of plants such as the grasses, bromeliads, rushes and sedges. Sixteen plant families are currently recognized by botanists to be part of Poales. Description The flowers are typically small, enclosed by bracts, and arranged in inflorescences (except in three species of the genus Mayaca, which possess very reduced, one-flowered inflorescences). The flowers of many species are wind pollinated; the seeds usually contain starch. Taxonomy The APG III system (2009) accepts the order within a monocot clade called commelinids, and recognizes 16 families within it. The earlier APG system (1998) adopted the same placement of the order, although it used the spelling "commelinoids". It did not include the Bromeliaceae and Mayacaceae, but had the additional families Prioniaceae (now included in Thurniaceae), Sparganiaceae (now in Typhaceae), and Hydatellaceae (now transferred out of the monocots; recently discovered to be an 'early-diverging' lineage of flowering plants). The morphology-based Cronquist system did not include an order named Poales, assigning these families to the orders Bromeliales, Cyperales, Hydatellales, Juncales, Restionales and Typhales. In early systems, an order including the grass family did not go by the name Poales but by a descriptive botanical name such as Graminales in the Engler system (update of 1964) and in the Hutchinson system (first edition, first volume, 1926), Glumiflorae in the Wettstein system (last revised 1935) or Glumaceae in the Bentham & Hooker system (third volume, 1883). Evolution and phylogeny The earliest fossils attributed to the Poales date to the Late Cretaceous period, though some studies (e.g., Bremer, 2002) suggest the origin of the group may extend to nearly 115 million years ago, likely in South America. The earliest known fossils include pollen and fruits. The phylogenetic position of Poales within the commelinids was difficult to resolve, but an analysis using complete chloroplast DNA found support for Poales as the sister group of Commelinales plus Zingiberales. Major lineages within the Poales have been referred to as the bromeliad, cyperid, xyrid, graminid, and restiid clades. A phylogenetic analysis resolved most relationships within the order but found weak support for the monophyly of the cyperid clade. The relationship between Centrolepidaceae and Restionaceae within the restiid clade remains unclear; the former may actually be embedded in the latter. Diversity The four most species-rich families in the order are: Poaceae: 12,070 species Cyperaceae: 5,500 species Bromeliaceae: 3,170 species Eriocaulaceae: 1,150 species Historic taxonomy Cyperales Cyperales was a name for an order of flowering plants. As used in the Engler system (update of 1964) and in the Wettstein system it consisted of only a single family. In the Cronquist system it is used for an order (placed in subclass Commelinidae) and circumscribed as (1981): order Cyperales family Cyperaceae family Poaceae (or Gramineae) The APG system now assigns the plants involved to the order Poales. Eriocaulales Eriocaulales is a botanical name for an order of flowering plants. The name was published by Takenoshin Nakai. In the Cronquist system the name was used for an order placed in the subclass Commelinidae. The order consisted of one family only (1981): order Eriocaulales family Eriocaulaceae The APG IV system now assigns these plants to the order Poales. 
Uses The Poales are the most economically important order of monocots and possibly the most important order of plants in general. Within the order, by far the most important family economically is the family of grasses (Poaceae, syn. Gramineae), which includes the starch staples barley, maize, millet, rice, and wheat as well as bamboos (mostly used structurally, like wood, but somewhat as vegetables), and a few "seasonings" like sugarcane and lemongrass. Graminoids, especially the grasses, are typically dominant in open (low moisture but not yet arid, or also fire climax) habitats like prairie/steppe and savannah and thus form a large proportion of the forage of grazing livestock. Possibly due to pastoral nostalgia or simply a desire for open areas for play, they dominate most Western yards as lawns, which consume vast sums of money in upkeep (artificial grazing—mowing—for aesthetics and to keep the allergenic flowers suppressed, irrigation, and fertilizer). Many Bromeliaceae are used as ornamental plants (and one, the pineapple, is internationally grown in the tropics for fruit). Many wetland species of sedges, rushes, grasses, and cattails are important habitat plants for waterfowl, are used in weaving chair seats, and (especially cattails) were important pre-agricultural food sources for man. Two sedges, chufa (Cyperus esculentus, also a significant weed) and water chestnut (Eleocharis dulcis) are still at least locally important wetland starchy root crops.
Biology and health sciences
Poales
Plants
23219
https://en.wikipedia.org/wiki/Ploidy
Ploidy
Ploidy () is the number of complete sets of chromosomes in a cell, and hence the number of possible alleles for autosomal and pseudoautosomal genes. Here sets of chromosomes refers to the number of maternal and paternal chromosome copies, respectively, in each homologous chromosome pair—the form in which chromosomes naturally exist. Somatic cells, tissues, and individual organisms can be described according to the number of sets of chromosomes present (the "ploidy level"): monoploid (1 set), diploid (2 sets), triploid (3 sets), tetraploid (4 sets), pentaploid (5 sets), hexaploid (6 sets), heptaploid or septaploid (7 sets), etc. The generic term polyploid is often used to describe cells with three or more sets of chromosomes. Virtually all sexually reproducing organisms are made up of somatic cells that are diploid or greater, but ploidy level may vary widely between different organisms, between different tissues within the same organism, and at different stages in an organism's life cycle. Half of all known plant genera contain polyploid species, and about two-thirds of all grasses are polyploid. Many animals are uniformly diploid, though polyploidy is common in invertebrates, reptiles, and amphibians. In some species, ploidy varies between individuals of the same species (as in the social insects), and in others entire tissues and organ systems may be polyploid despite the rest of the body being diploid (as in the mammalian liver). For many organisms, especially plants and fungi, changes in ploidy level between generations are major drivers of speciation. In mammals and birds, ploidy changes are typically fatal. There is, however, evidence of polyploidy in organisms now considered to be diploid, suggesting that polyploidy has contributed to evolutionary diversification in plants and animals through successive rounds of polyploidization and rediploidization. Humans are diploid organisms, normally carrying two complete sets of chromosomes in their somatic cells: one paternal and one maternal copy in each of the 23 homologous pairs of chromosomes that humans normally have. This results in two homologous copies of each chromosome, providing a full complement of 46 chromosomes. This total number of individual chromosomes (counting all complete sets) is called the chromosome number or chromosome complement. The number of chromosomes found in a single complete set of chromosomes is called the monoploid number (x). The haploid number (n) refers to the total number of chromosomes found in a gamete (a sperm or egg cell produced by meiosis in preparation for sexual reproduction). Under normal conditions, the haploid number is exactly half the total number of chromosomes present in the organism's somatic cells, with one paternal and one maternal copy in each chromosome pair. For diploid organisms, the monoploid number and haploid number are equal; in humans, both are equal to 23. When a human germ cell undergoes meiosis, the diploid 46-chromosome complement is split in half to form haploid gametes. After fusion of a male and a female gamete (each containing 1 set of 23 chromosomes) during fertilization, the resulting zygote again has the full complement of 46 chromosomes: 2 sets of 23 chromosomes. Euploidy describes having a number of chromosomes that is an exact multiple of the number of chromosomes in a normal gamete; aneuploidy describes having any other number. 
For example, a person with Turner syndrome may be missing one sex chromosome (X or Y), resulting in a (45,X) karyotype instead of the usual (46,XX) or (46,XY). This is a type of aneuploidy and cells from the person may be said to be aneuploid with a (diploid) chromosome complement of 45. Etymology The term ploidy is a back-formation from haploidy and diploidy. "Ploid" is a combination of Ancient Greek -πλόος (-plóos, "-fold") and -ειδής (-eidḗs), from εἶδος (eîdos, "form, likeness"). The principal meaning of the Greek word ᾰ̔πλόος (haplóos) is "single", from ἁ- (ha-, "one, same"). διπλόος (diplóos) means "duplex" or "two-fold". Diploid therefore means "duplex-shaped" (compare "humanoid", "human-shaped"). Polish-German botanist Eduard Strasburger coined the terms haploid and diploid in 1905. Some authors suggest that Strasburger based the terms on August Weismann's conception of the id (or germ plasm), hence haplo-id and diplo-id. The two terms were brought into the English language from German through William Henry Lang's 1908 translation of a 1906 textbook by Strasburger and colleagues. Types of ploidy Haploid and monoploid The term haploid is used with two distinct but related definitions. In the most generic sense, haploid refers to having the number of sets of chromosomes normally found in a gamete. Because two gametes necessarily combine during sexual reproduction to form a single zygote from which somatic cells are generated, healthy gametes always possess exactly half the number of sets of chromosomes found in the somatic cells, and therefore "haploid" in this sense refers to having exactly half the number of sets of chromosomes found in a somatic cell. By this definition, an organism whose gametic cells contain a single copy of each chromosome (one set of chromosomes) may be considered haploid while the somatic cells, containing two copies of each chromosome (two sets of chromosomes), are diploid. This scheme of diploid somatic cells and haploid gametes is widely used in the animal kingdom and is the simplest to illustrate in diagrams of genetics concepts. But this definition also allows for haploid gametes with more than one set of chromosomes. As given above, gametes are by definition haploid, regardless of the actual number of sets of chromosomes they contain. An organism whose somatic cells are tetraploid (four sets of chromosomes), for example, will produce gametes by meiosis that contain two sets of chromosomes. These gametes might still be called haploid even though they are numerically diploid. An alternative usage defines "haploid" as having a single copy of each chromosome – that is, one and only one set of chromosomes. In this case, the nucleus of a eukaryotic cell is said to be haploid only if it has a single set of chromosomes, each one not being part of a pair. By extension a cell may be called haploid if its nucleus has one set of chromosomes, and an organism may be called haploid if its body cells (somatic cells) have one set of chromosomes per cell. By this definition haploid therefore would not be used to refer to the gametes produced by the tetraploid organism in the example above, since these gametes are numerically diploid. The term monoploid is often used as a less ambiguous way to describe a single set of chromosomes; by this second definition, haploid and monoploid are identical and can be used interchangeably. Gametes (sperm and ova) are haploid cells. The haploid gametes produced by most organisms combine to form a zygote with n pairs of chromosomes, i.e. 
2n chromosomes in total. The chromosomes in each pair, one of which comes from the sperm and one from the egg, are said to be homologous. Cells and organisms with pairs of homologous chromosomes are called diploid. For example, most animals are diploid and produce haploid gametes. During meiosis, sex cell precursors have their number of chromosomes halved by randomly "choosing" one member of each pair of chromosomes, resulting in haploid gametes. Because homologous chromosomes usually differ genetically, gametes usually differ genetically from one another. All plants and many fungi and algae switch between a haploid and a diploid state, with one of the stages emphasized over the other. This is called alternation of generations. Most fungi and algae are haploid during the principal stage of their life cycle, as are some primitive plants like mosses. More recently evolved plants, like the gymnosperms and angiosperms, spend the majority of their life cycle in the diploid stage. Most animals are diploid, but male bees, wasps, and ants are haploid organisms because they develop from unfertilized, haploid eggs, while females (workers and queens) are diploid, making their system haplodiploid. In some cases there is evidence that the n chromosomes in a haploid set have resulted from duplications of an originally smaller set of chromosomes. This "base" number – the number of apparently originally unique chromosomes in a haploid set – is called the monoploid number, also known as basic or cardinal number, or fundamental number. As an example, the chromosomes of common wheat are believed to be derived from three different ancestral species, each of which had 7 chromosomes in its haploid gametes. The monoploid number is thus 7 and the haploid number is 3 × 7 = 21. In general n is a multiple of x. The somatic cells in a wheat plant have six sets of 7 chromosomes: three sets from the egg and three sets from the sperm which fused to form the plant, giving a total of 42 chromosomes. As a formula, for wheat 2n = 6x = 42, so that the haploid number n is 21 and the monoploid number x is 7. The gametes of common wheat are considered to be haploid, since they contain half the genetic information of somatic cells, but they are not monoploid, as they still contain three complete sets of chromosomes (n = 3x). In the case of wheat, the origin of its haploid number of 21 chromosomes from three sets of 7 chromosomes can be demonstrated. In many other organisms, although the number of chromosomes may have originated in this way, this is no longer clear, and the monoploid number is regarded as the same as the haploid number. Thus in humans, x = n = 23. Diploid Diploid cells have two homologous copies of each chromosome, usually one from the mother and one from the father. All or nearly all mammals are diploid organisms. The suspected tetraploid (possessing four-chromosome sets) plains viscacha rat (Tympanoctomys barrerae) and golden viscacha rat (Pipanacoctomys aureus) have been regarded as the only known exceptions (as of 2004). However, some genetic studies have rejected any polyploidism in mammals as unlikely, and suggest that amplification and dispersion of repetitive sequences best explain the large genome size of these two rodents. All normal diploid individuals have some small fraction of cells that display polyploidy. Human diploid cells have 46 chromosomes (the somatic number, 2n) and human haploid gametes (egg and sperm) have 23 chromosomes (n). 
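The arithmetic relating total chromosome number, ploidy level, haploid number (n), and monoploid number (x) described above can be restated compactly. The short Python sketch below is an illustration only: it simply recomputes the human and wheat figures already given in the text and applies the euploidy/aneuploidy definitions from above; none of the function names come from any cited source.

```python
# Illustrative only: recomputes the ploidy figures quoted in the surrounding text.

def haploid_number(total_chromosomes):
    # n: the number of chromosomes in a gamete, i.e. half the somatic count
    return total_chromosomes // 2

def monoploid_number(total_chromosomes, ploidy_level):
    # x: the number of chromosomes in one complete set
    return total_chromosomes // ploidy_level

# Humans are diploid: 2n = 46, so n = x = 23.
assert haploid_number(46) == 23 and monoploid_number(46, 2) == 23

# Common wheat is hexaploid: 2n = 6x = 42, so n = 21 but x = 7.
assert haploid_number(42) == 21 and monoploid_number(42, 6) == 7

def is_euploid(chromosome_count, monoploid=23):
    # Euploid karyotypes are exact multiples of the monoploid number;
    # any other count is aneuploid.
    return chromosome_count % monoploid == 0

# 46 (diploid) and 69 (triploid) are euploid; 45 (as in Turner syndrome)
# and 47 (as in trisomy 21) are aneuploid.
assert is_euploid(46) and is_euploid(69)
assert not is_euploid(45) and not is_euploid(47)
```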
Retroviruses that contain two copies of their RNA genome in each viral particle are also said to be diploid. Examples include human foamy virus, human T-lymphotropic virus, and HIV. Polyploidy Polyploidy is the state where all cells have multiple sets of chromosomes beyond the basic set, usually 3 or more. Specific terms are triploid (3 sets), tetraploid (4 sets), pentaploid (5 sets), hexaploid (6 sets), heptaploid or septaploid (7 sets), octoploid (8 sets), nonaploid (9 sets), decaploid (10 sets), undecaploid (11 sets), dodecaploid (12 sets), tridecaploid (13 sets), tetradecaploid (14 sets), etc. Some higher ploidies include hexadecaploid (16 sets), dotriacontaploid (32 sets), and tetrahexacontaploid (64 sets), though Greek terminology may be set aside for readability in cases of higher ploidy (such as "16-ploid"). Polytene chromosomes of plants and fruit flies can be 1024-ploid. Ploidy of systems such as the salivary gland, elaiosome, endosperm, and trophoblast can exceed this, up to 1048576-ploid in the silk glands of the commercial silkworm Bombyx mori. The chromosome sets may be from the same species or from closely related species. In the latter case, these are known as allopolyploids (or amphidiploids, which are allopolyploids that behave as if they were normal diploids). Allopolyploids are formed from the hybridization of two separate species. In plants, this probably most often occurs from the pairing of meiotically unreduced gametes, and not by diploid–diploid hybridization followed by chromosome doubling. The so-called Brassica triangle is an example of allopolyploidy, where three different parent species have hybridized in all possible pair combinations to produce three new species. Polyploidy occurs commonly in plants, but rarely in animals. Even in diploid organisms, many somatic cells are polyploid due to a process called endoreduplication, where duplication of the genome occurs without mitosis (cell division). The extreme in polyploidy occurs in the fern genus Ophioglossum, the adder's-tongues, in which polyploidy results in chromosome counts in the hundreds, or, in at least one case, well over one thousand. It is possible for polyploid organisms to revert to lower ploidy by haploidisation. In bacteria and archaea Polyploidy is a characteristic of the bacterium Deinococcus radiodurans and of the archaeon Halobacterium salinarum. These two species are highly resistant to ionizing radiation and desiccation, conditions that induce DNA double-strand breaks. This resistance appears to be due to efficient homologous recombinational repair. Variable or indefinite ploidy Depending on growth conditions, prokaryotes such as bacteria may have a chromosome copy number of 1 to 4, and that number is commonly fractional, counting portions of the chromosome partly replicated at a given time. This is because under exponential growth conditions the cells are able to replicate their DNA faster than they can divide. In ciliates, the macronucleus is called ampliploid, because only part of the genome is amplified. Mixoploidy Mixoploidy is the case where two cell lines, one diploid and one polyploid, coexist within the same organism. Though polyploidy in humans is not viable, mixoploidy has been found in live adults and children. There are two types: diploid-triploid mixoploidy, in which some cells have 46 chromosomes and some have 69, and diploid-tetraploid mixoploidy, in which some cells have 46 and some have 92 chromosomes. It is a major topic of cytology. 
Dihaploidy and polyhaploidy Dihaploid and polyhaploid cells are formed by haploidisation of polyploids, i.e., by halving the chromosome constitution. Dihaploids (which are diploid) are important for selective breeding of tetraploid crop plants (notably potatoes), because selection is faster with diploids than with tetraploids. Tetraploids can be reconstituted from the diploids, for example by somatic fusion. The term "dihaploid" was coined by Bender to combine in one word the number of genome copies (diploid) and their origin (haploid). The term is well established in this original sense, but it has also been used for doubled monoploids or doubled haploids, which are homozygous and used for genetic research. Euploidy and aneuploidy Euploidy (Greek eu, "true" or "even") is the state of a cell or organism having one or more than one set of the same set of chromosomes, possibly excluding the sex-determining chromosomes. For example, most human cells have 2 of each of the 23 homologous monoploid chromosomes, for a total of 46 chromosomes. A human cell with one extra set of the 23 normal chromosomes (functionally triploid) would be considered euploid. Euploid karyotypes would consequentially be a multiple of the haploid number, which in humans is 23. Aneuploidy is the state where one or more individual chromosomes of a normal set are absent or present in more than their usual number of copies (excluding the absence or presence of complete sets, which is considered euploidy). Unlike euploidy, aneuploid karyotypes will not be a multiple of the haploid number. In humans, examples of aneuploidy include having a single extra chromosome (as in Down syndrome, where affected individuals have three copies of chromosome 21) or missing a chromosome (as in Turner syndrome, where affected individuals have only one sex chromosome). Aneuploid karyotypes are given names with the suffix -somy (rather than -ploidy, used for euploid karyotypes), such as trisomy and monosomy. Homoploid Homoploid means "at the same ploidy level", i.e. having the same number of homologous chromosomes. For example, homoploid hybridization is hybridization where the offspring have the same ploidy level as the two parental species. This contrasts with a common situation in plants where chromosome doubling accompanies or occurs soon after hybridization. Similarly, homoploid speciation contrasts with polyploid speciation. Zygoidy and azygoidy Zygoidy is the state in which the chromosomes are paired and can undergo meiosis. The zygoid state of a species may be diploid or polyploid. In the azygoid state the chromosomes are unpaired. It may be the natural state of some asexual species or may occur after meiosis. In diploid organisms the azygoid state is monoploid. (See below for dihaploidy.) Special cases More than one nucleus per cell In the strictest sense, ploidy refers to the number of sets of chromosomes in a single nucleus rather than in the cell as a whole. Because in most situations there is only one nucleus per cell, it is commonplace to speak of the ploidy of a cell, but in cases in which there is more than one nucleus per cell, more specific definitions are required when ploidy is discussed. Authors may at times report the total combined ploidy of all nuclei present within the cell membrane of a syncytium, though usually the ploidy of each nucleus is described individually. 
For example, a fungal dikaryon with two separate haploid nuclei is distinguished from a diploid cell in which the chromosomes share a nucleus and can be shuffled together. Ancestral ploidy levels It is possible on rare occasions for ploidy to increase in the germline, which can result in polyploid offspring and ultimately polyploid species. This is an important evolutionary mechanism in both plants and animals and is known as a primary driver of speciation. As a result, it may become desirable to distinguish between the ploidy of a species or variety as it presently breeds and that of an ancestor. The number of chromosomes in the ancestral (non-homologous) set is called the monoploid number (x), and is distinct from the haploid number (n) in the organism as it now reproduces. Common wheat (Triticum aestivum) is an organism in which x and n differ. Each plant has a total of six sets of chromosomes (with two sets likely having been obtained from each of three different diploid species that are its distant ancestors). The somatic cells are hexaploid, 2n = 6x = 42 (where the monoploid number x = 7 and the haploid number n = 21). The gametes are haploid for their own species, but triploid, with three sets of chromosomes, by comparison to a probable evolutionary ancestor, einkorn wheat. Tetraploidy (four sets of chromosomes, 2n = 4x) is common in many plant species, and also occurs in amphibians, reptiles, and insects. For example, species of Xenopus (African toads) form a ploidy series, featuring diploid (X. tropicalis, 2n=20), tetraploid (X. laevis, 4n=36), octaploid (X. wittei, 8n=72), and dodecaploid (X. ruwenzoriensis, 12n=108) species. Over evolutionary time scales in which chromosomal polymorphisms accumulate, these changes become less apparent by karyotype – for example, humans are generally regarded as diploid, but two rounds of whole genome duplication in early vertebrate ancestors (the 2R hypothesis) have since been confirmed. Haplodiploidy Ploidy can also vary between individuals of the same species or at different stages of the life cycle. In some insects it differs by caste. In humans, only the gametes are haploid, but in many of the social insects, including ants, bees, and wasps, males develop from unfertilized eggs, making them haploid for their entire lives, even as adults. In the Australian bulldog ant, Myrmecia pilosula, a haplodiploid species, haploid individuals have a single chromosome and diploid individuals have two chromosomes. In Entamoeba, the ploidy level varies from 4n to 40n in a single population. Alternation of generations occurs in most plants, with individuals "alternating" ploidy level between different stages of their sexual life cycle. Tissue-specific polyploidy In large multicellular organisms, variations in ploidy level between different tissues, organs, or cell lineages are common. Because the chromosome number is generally reduced only by the specialized process of meiosis, the somatic cells of the body inherit and maintain the chromosome number of the zygote by mitosis. However, in many situations somatic cells double their copy number by means of endoreduplication as an aspect of cellular differentiation. For example, the hearts of two-year-old human children contain 85% diploid and 15% tetraploid nuclei, but by 12 years of age the proportions become approximately equal, and adults examined contained 27% diploid, 71% tetraploid and 2% octaploid nuclei. 
Adaptive and ecological significance of variation in ploidy There is continued study and debate regarding the fitness advantages or disadvantages conferred by different ploidy levels. A study comparing the karyotypes of endangered or invasive plants with those of their relatives found that being polyploid as opposed to diploid is associated with a 14% lower risk of being endangered, and a 20% greater chance of being invasive. Polyploidy may be associated with increased vigor and adaptability. Some studies suggest that selection is more likely to favor diploidy in host species and haploidy in parasite species. However, polyploidization is associated with an increase in transposable element content and relaxed purifying selection on recessive deleterious alleles. When a germ cell with an uneven number of chromosomes undergoes meiosis, the chromosomes cannot be evenly divided between the daughter cells, resulting in aneuploid gametes. Triploid organisms, for instance, are usually sterile. Because of this, triploidy is commonly exploited in agriculture to produce seedless fruit such as bananas and watermelons. If the fertilization of human gametes results in three sets of chromosomes, the condition is called triploid syndrome. In unicellular organisms the ploidy nutrient limitation hypothesis suggests that nutrient limitation should encourage haploidy in preference to higher ploidies. This hypothesis is due to the higher surface-to-volume ratio of haploids, which eases nutrient uptake, thereby increasing the internal nutrient-to-demand ratio. Mable 2001 finds Saccharomyces cerevisiae to be somewhat inconsistent with this hypothesis however, as haploid growth is faster than diploid under high nutrient conditions. The NLH is also tested in haploid, diploid, and polyploid fungi by Gerstein et al. 2017. This result is also more complex: On the one hand, under phosphorus and other nutrient limitation, lower ploidy is selected as expected. However under normal nutrient levels or under limitation of only nitrogen, higher ploidy was selected. Thus the NLH and more generally, the idea that haploidy is selected by harsher conditions is cast into doubt by these results. Older WGDs have also been investigated. Only as recently as 2015 was the ancient whole genome duplication in Baker's yeast proven to be allopolyploid, by Marcet-Houben and Gabaldón 2015. It still remains to be explained why there are not more polyploid events in fungi, and the place of neopolyploidy and mesopolyploidy in fungal history. Less efficient natural selection in diploid compared to haploid tissue The concept that those genes of an organism that are expressed exclusively in the diploid stage are under less efficient natural selection than those genes expressed in the haploid stage is referred to as the “masking theory”. Evidence in support of this masking theory has been reported in studies of the single-celled yeast Saccharomyces cerevisiae. In further support of the masking theory, evidence of strong purifying selection in haploid tissue-specific genes has been reported for the plant Scots Pine. Glossary of ploidy numbers The common potato (Solanum tuberosum) is an example of a tetraploid organism, carrying four sets of chromosomes. During sexual reproduction, each potato plant inherits two sets of 12 chromosomes from the pollen parent, and two sets of 12 chromosomes from the ovule parent. The four sets combined provide a full complement of 48 chromosomes. The haploid number (half of 48) is 24. 
The monoploid number equals the total chromosome number divided by the ploidy level of the somatic cells: 48 chromosomes in total divided by a ploidy level of 4 equals a monoploid number of 12. Hence, the monoploid number (12) and haploid number (24) are distinct in this example. However, commercial potato crops (as well as many other crop plants) are commonly propagated vegetatively (by asexual reproduction through mitosis), in which case new individuals are produced from a single parent, without the involvement of gametes and fertilization, and all the offspring are genetically identical to each other and to the parent, including in chromosome number. The parents of these vegetative clones may still be capable of producing haploid gametes in preparation for sexual reproduction, but these gametes are not used to create the vegetative offspring by this route. Specific examples
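As a specific worked example, the potato arithmetic set out just above can be written as equations; this is only a restatement of the figures already given in the text, not additional data:

\[
2n = 4x = 48, \qquad n = \tfrac{48}{2} = 24, \qquad x = \tfrac{48}{4} = 12,
\]

so the haploid number (24) and the monoploid number (12) differ, whereas for a diploid organism the two coincide, just as n = x = 23 in humans.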
Biology and health sciences
Genetics
Biology
23226
https://en.wikipedia.org/wiki/Permian
Permian
The Permian is a geologic period and stratigraphic system which spans 47 million years from the end of the Carboniferous Period 298.9 million years ago (Mya) to the beginning of the Triassic Period 251.902 Mya. It is the sixth and last period of the Paleozoic Era; the following Triassic Period belongs to the Mesozoic Era. The concept of the Permian was introduced in 1841 by geologist Sir Roderick Murchison, who named it after the region of Perm in Russia. The Permian witnessed the diversification of the two groups of amniotes, the synapsids and the sauropsids (reptiles). The world at the time was dominated by the supercontinent Pangaea, which had formed due to the collision of Euramerica and Gondwana during the Carboniferous. Pangaea was surrounded by the superocean Panthalassa. The Carboniferous rainforest collapse left behind vast regions of desert within the continental interior. Amniotes, which could better cope with these drier conditions, rose to dominance in place of their amphibian ancestors. Various authors recognise at least three, and possibly four, extinction events in the Permian. The end of the Early Permian (Cisuralian) saw a major faunal turnover, with most lineages of primitive "pelycosaur" synapsids becoming extinct, being replaced by more advanced therapsids. The end of the Capitanian Stage of the Permian was marked by the major Capitanian mass extinction event, associated with the eruption of the Emeishan Traps. The Permian (along with the Paleozoic) ended with the Permian–Triassic extinction event (colloquially known as the Great Dying), the largest mass extinction in Earth's history and the last of the three or four crises that occurred in the Permian, in which nearly 81% of marine species and 70% of terrestrial species died out, associated with the eruption of the Siberian Traps. It took well into the Triassic for life to recover from this catastrophe; on land, ecosystems took 30 million years to recover. Etymology and history Prior to the introduction of the term Permian, rocks of equivalent age had been named the Rotliegend and Zechstein in Germany and the New Red Sandstone in Great Britain. The term Permian was introduced into geology in 1841 by Sir Roderick Impey Murchison, president of the Geological Society of London, after extensive Russian explorations undertaken with Édouard de Verneuil in the vicinity of the Ural Mountains in the years 1840 and 1841. Murchison identified "vast series of beds of marl, schist, limestone, sandstone and conglomerate" that succeeded Carboniferous strata in the region. Murchison, in collaboration with Russian geologists, named the period after the surrounding Russian region of Perm, which takes its name from the medieval kingdom of Permia that occupied the same area hundreds of years prior, and which is now located in the Perm Krai administrative region. Between 1853 and 1867, Jules Marcou recognised Permian strata in a large area of North America from the Mississippi River to the Colorado River and proposed the name Dyassic, from Dyas and Trias, though Murchison rejected this in 1871. The Permian system was controversial for over a century after its original naming, with the United States Geological Survey until 1941 considering the Permian a subsystem of the Carboniferous, equivalent to the Mississippian and Pennsylvanian. Geology The Permian Period is divided into three epochs, from oldest to youngest: the Cisuralian, Guadalupian, and Lopingian. 
Geologists divide the rocks of the Permian into a stratigraphic set of smaller units called stages, each formed during corresponding time intervals called ages. Stages can be defined globally or regionally. For global stratigraphic correlation, the International Commission on Stratigraphy (ICS) ratifies global stages based on a Global Boundary Stratotype Section and Point (GSSP) from a single formation (a stratotype) identifying the lower boundary of the stage. The ages of the Permian, from youngest to oldest, are the Changhsingian, Wuchiapingian, Capitanian, Wordian, Roadian, Kungurian, Artinskian, Sakmarian, and Asselian. For most of the 20th century, the Permian was divided into the Early and Late Permian, with the Kungurian being the last stage of the Early Permian. Glenister and colleagues in 1992 proposed a tripartite scheme, advocating that the Roadian-Capitanian was distinct from the rest of the Late Permian, and should be regarded as a separate epoch. The tripartite split was adopted after a formal proposal by Glenister et al. (1999). Historically, most marine biostratigraphy of the Permian was based on ammonoids; however, ammonoid localities are rare in Permian stratigraphic sections, and species characterise relatively long periods of time. All GSSPs for the Permian are based around the first appearance datum of specific species of conodont, an enigmatic group of jawless chordates with hard tooth-like oral elements. Conodonts are used as index fossils for most of the Palaeozoic and the Triassic. Cisuralian The Cisuralian Series is named after the strata exposed on the western slopes of the Ural Mountains in Russia and Kazakhstan. The name was proposed by J. B. Waterhouse in 1982 to comprise the Asselian, Sakmarian, and Artinskian stages. The Kungurian was later added to conform to the Russian "Lower Permian". Albert Auguste Cochon de Lapparent in 1900 had proposed the "Uralian Series", but the subsequent inconsistent usage of this term meant that it was later abandoned. The Asselian was named by the Russian stratigrapher V.E. Ruzhenchev in 1954, after the Assel River in the southern Ural Mountains. The GSSP for the base of the Asselian is located in the Aidaralash River valley near Aqtöbe, Kazakhstan, which was ratified in 1996. The beginning of the stage is defined by the first appearance of Streptognathodus postfusus. The Sakmarian is named in reference to the Sakmara River in the southern Urals, and was coined by Alexander Karpinsky in 1874. The GSSP for the base of the Sakmarian is located at the Usolka section in the southern Urals, which was ratified in 2018. The GSSP is defined by the first appearance of Sweetognathus binodosus. The Artinskian was named after the city of Arti in Sverdlovsk Oblast, Russia. It was named by Karpinsky in 1874. The Artinskian currently lacks a defined GSSP. The proposed definition for the base of the Artinskian is the first appearance of Sweetognathus aff. S. whitei. The Kungurian takes its name from Kungur, a city in Perm Krai. The stage was introduced by Alexandr Antonovich Stukenberg in 1890. The Kungurian currently lacks a defined GSSP. Recent proposals have suggested the appearance of Neostreptognathodus pnevi as the lower boundary. Guadalupian The Guadalupian Series is named after the Guadalupe Mountains in Texas and New Mexico, where extensive marine sequences of this age are exposed. It was named by George Herbert Girty in 1902. The Roadian was named in 1968 in reference to the Road Canyon Member of the Word Formation in Texas. 
The GSSP for the base of the Roadian is located 42.7 m above the base of the Cutoff Formation in Stratotype Canyon, Guadalupe Mountains, Texas, and was ratified in 2001. The beginning of the stage is defined by the first appearance of Jinogondolella nankingensis. The Wordian was named in reference to the Word Formation by Johan August Udden in 1916; Glenister and Furnish (1961) were the first to use it as a chronostratigraphic term, as a substage of the Guadalupian Stage. The GSSP for the base of the Wordian is located in Guadalupe Pass, Texas, within the sediments of the Getaway Limestone Member of the Cherry Canyon Formation, which was ratified in 2001. The base of the Wordian is defined by the first appearance of the conodont Jinogondolella aserrata. The Capitanian is named after the Capitan Reef in the Guadalupe Mountains of Texas, named by George Burr Richardson in 1904, and first used in a chronostratigraphic sense by Glenister and Furnish in 1961 as a substage of the Guadalupian Stage. The Capitanian was ratified as an international stage by the ICS in 2001. The GSSP for the base of the Capitanian is located at Nipple Hill in the southeast Guadalupe Mountains of Texas, and was ratified in 2001; the beginning of the stage is defined by the first appearance of Jinogondolella postserrata. Lopingian The Lopingian was first introduced by Amadeus William Grabau in 1923 as the "Loping Series" after Leping, Jiangxi, China. Originally used as a lithostratigraphic unit, the Lopingian was raised to a series by T.K. Huang in 1932, including all Permian deposits in South China that overlie the Maokou Limestone. In 1995, a vote by the Subcommission on Permian Stratigraphy of the ICS adopted the Lopingian as an international standard chronostratigraphic unit. The Wuchiapingian and Changhsingian were first introduced in 1962 by J. Z. Sheng as the "Wuchiaping Formation" and "Changhsing Formation" within the Lopingian series. The GSSP for the base of the Wuchiapingian is located at Penglaitan, Guangxi, China, and was ratified in 2004. The boundary is defined by the first appearance of Clarkina postbitteri postbitteri. The Changhsingian was originally derived from the Changxing Limestone, a geological unit first named by Grabau in 1923 and ultimately deriving from Changxing County, Zhejiang. The GSSP for the base of the Changhsingian is located 88 cm above the base of the Changxing Limestone in the Meishan D section, Zhejiang, China, and was ratified in 2005; the boundary is defined by the first appearance of Clarkina wangi. The GSSP for the base of the Triassic is located at the base of Bed 27c at the Meishan D section, and was ratified in 2001. The GSSP is defined by the first appearance of the conodont Hindeodus parvus. Regional stages The Russian Tatarian Stage includes the Lopingian, Capitanian and part of the Wordian, while the underlying Kazanian includes the rest of the Wordian as well as the Roadian. In North America, the Permian is divided into the Wolfcampian (which includes the Nealian and the Lenoxian stages); the Leonardian (Hessian and Cathedralian stages); the Guadalupian; and the Ochoan, corresponding to the Lopingian. Paleogeography During the Permian, all the Earth's major landmasses were collected into a single supercontinent known as Pangaea, with the microcontinental terranes of Cathaysia to the east. 
Pangaea straddled the equator and extended toward the poles, with a corresponding effect on ocean currents in the single great ocean ("Panthalassa", the "universal sea"), and the Paleo-Tethys Ocean, a large ocean that existed between Asia and Gondwana. The Cimmeria continent rifted away from Gondwana and drifted north to Laurasia, causing the Paleo-Tethys Ocean to shrink. A new ocean was growing on its southern end, the Neotethys Ocean, an ocean that would dominate much of the Mesozoic Era. A magmatic arc, containing Hainan on its southwesternmost end, began to form as Panthalassa subducted under the southeastern South China. The Central Pangean Mountains, which began forming due to the collision of Laurasia and Gondwana during the Carboniferous, reached their maximum height during the early Permian around 295 million years ago, comparable to the present Himalayas, but became heavily eroded as the Permian progressed. The Kazakhstania block collided with Baltica during the Cisuralian, while the North China Craton, the South China Block and Indochina fused to each other and Pangea by the end of the Permian. The Zechstein Sea, a hypersaline epicontinental sea, existed in what is now northwestern Europe. Large continental landmass interiors experience climates with extreme variations of heat and cold ("continental climate") and monsoon conditions with highly seasonal rainfall patterns. Deserts seem to have been widespread on Pangaea. Such dry conditions favored gymnosperms, plants with seeds enclosed in a protective cover, over plants such as ferns that disperse spores in a wetter environment. The first modern trees (conifers, ginkgos and cycads) appeared in the Permian. Three general areas are especially noted for their extensive Permian deposits—the Ural Mountains (where Perm itself is located), China, and the southwest of North America, including the Texas red beds. The Permian Basin in the U.S. states of Texas and New Mexico is so named because it has one of the thickest deposits of Permian rocks in the world. Paleoceanography Sea levels dropped slightly during the earliest Permian (Asselian). The sea level was stable at several tens of metres above present during the Early Permian, but there was a sharp drop beginning during the Roadian, culminating in the lowest sea level of the entire Palaeozoic at around present sea level during the Wuchiapingian, followed by a slight rise during the Changhsingian. Climate The Permian was cool in comparison to most other geologic time periods, with modest pole to Equator temperature gradients. At the start of the Permian, the Earth was still in the Late Paleozoic icehouse (LPIA), which began in the latest Devonian and spanned the entire Carboniferous period, with its most intense phase occurring during the latter part of the Pennsylvanian epoch. A significant trend of increasing aridification can be observed over the course of the Cisuralian. Early Permian aridification was most notable in Pangaean localities at near-equatorial latitudes. Sea levels also rose notably in the Early Permian as the LPIA slowly waned. At the Carboniferous-Permian boundary, a warming event occurred. In addition to becoming warmer, the climate became notably more arid at the end of the Carboniferous and beginning of the Permian. Nonetheless, temperatures continued to cool during most of the Asselian and Sakmarian, during which the LPIA peaked. 
By 287 million years ago, temperatures warmed and the South Pole ice cap retreated in what is known as the Artinskian Warming Event (AWE), though glaciers remained present in the uplands of eastern Australia, and perhaps also the mountainous regions of far northern Siberia. Southern Africa also retained glaciers during the late Cisuralian in upland environments. The AWE also witnessed aridification of a particularly great magnitude. In the late Kungurian, cooling resumed, resulting in a cool glacial interval that lasted into the early Capitanian, though average temperatures were still much higher than during the beginning of the Cisuralian. Another cool period began around the middle Capitanian. This cool period, lasting for 3–4 Myr, was known as the Kamura Event. It was interrupted by the Emeishan Thermal Excursion in the late part of the Capitanian, around 260 million years ago, corresponding to the eruption of the Emeishan Traps. This interval of rapid climate change was responsible for the Capitanian mass extinction event. During the early Wuchiapingian, following the emplacement of the Emeishan Traps, global temperatures declined as carbon dioxide was weathered out of the atmosphere by the large igneous province's emplaced basalts. The late Wuchiapingian saw the finale of the Late Palaeozoic Ice Age, when the last Australian glaciers melted. The end of the Permian is marked by a temperature excursion, much larger than the Emeishan Thermal Excursion, at the Permian-Triassic boundary, corresponding to the eruption of the Siberian Traps, which released more than 5 teratonnes of CO2, more than doubling the atmospheric carbon dioxide concentration. A −2‰ δ18O excursion signifies the extreme magnitude of this climatic shift. This extremely rapid interval of greenhouse gas release caused the Permian-Triassic mass extinction, as well as ushering in an extreme hothouse that persisted for several million years into the next geologic period, the Triassic. The Permian climate was also extremely seasonal and characterised by megamonsoons, which produced high aridity and extreme seasonality in Pangaea's interiors. Precipitation along the western margins of the Palaeo-Tethys Ocean was very high. Evidence for the megamonsoon includes the presence of megamonsoonal rainforests in the Qiangtang Basin of Tibet, enormous seasonal variation in sedimentation, bioturbation, and ichnofossil deposition recorded in sedimentary facies in the Sydney Basin, and palaeoclimatic models of the Earth's climate based on the behaviour of modern weather patterns showing that such a megamonsoon would occur given the continental arrangement of the Permian. The aforementioned increasing equatorial aridity was likely driven by the development and intensification of this Pangaean megamonsoon. 
Only three families of trilobite are known from the Permian, Proetidae, Brachymetopidae and Phillipsiidae. Diversity, origination and extinction rates during the Early Permian were low. Trilobites underwent a diversification during the Kungurian-Wordian, the last in their evolutionary history, before declining during the Late Permian. By the Changhsingian, only a handful (4–6) genera remained. Corals exhibited a decline in diversity over the course of the Middle and Late Permian. Terrestrial biota Terrestrial life in the Permian included diverse plants, fungi, arthropods, and various types of tetrapods. The period saw a massive desert covering the interior of Pangaea. The warm zone spread in the northern hemisphere, where extensive dry desert appeared. The rocks formed at that time were stained red by iron oxides, the result of intense heating by the sun of a surface devoid of vegetation cover. A number of older types of plants and animals died out or became marginal elements. The Permian began with the Carboniferous flora still flourishing. About the middle of the Permian a major transition in vegetation began. The swamp-loving lycopod trees of the Carboniferous, such as Lepidodendron and Sigillaria, were progressively replaced in the continental interior by the more advanced seed ferns and early conifers as a result of the Carboniferous rainforest collapse. At the close of the Permian, lycopod and equisete swamps reminiscent of Carboniferous flora survived only in Cathaysia, a series of equatorial islands in the Paleo-Tethys Ocean that later would become South China. The Permian saw the radiation of many important conifer groups, including the ancestors of many present-day families. Rich forests were present in many areas, with a diverse mix of plant groups. The southern continent saw extensive seed fern forests of the Glossopteris flora. Oxygen levels were probably high there. The ginkgos and cycads also appeared during this period. Insects Insects, which had first appeared and become abundant during the preceding Carboniferous, experienced a dramatic increase in diversification during the Early Permian. Towards the end of the Permian, there was a substantial drop in both origination and extinction rates. The dominant insects during the Permian Period were early representatives of Paleoptera, Polyneoptera, and Paraneoptera. Palaeodictyopteroidea, which had represented the dominant group of insects during the Carboniferous, declined during the Permian. This is likely due to competition by Hemiptera, due to their similar mouthparts and therefore ecology. Primitive relatives of damselflies and dragonflies (Meganisoptera), which include the largest flying insects of all time, also declined during the Permian. Holometabola, the largest group of modern insects, also diversified during this time. "Grylloblattidans", an extinct group of winged insects thought to be related to modern ice crawlers, reached their apex of diversity during the Permian, representing up to a third of all insects at some localities. Mecoptera (sometimes known as scorpionflies) first appeared during the Early Permian, going on to become diverse during the Late Permian. Some Permian mecopterans, like Mesopsychidae have long proboscis that suggest they may have pollinated gymnosperms. The earliest known beetles appeared at the beginning of the Permian. Early beetles such as members of Permocupedidae were likely xylophagous, feeding on decaying wood. 
Several lineages such as Schizophoridae expanded into aquatic habitats by the Late Permian. Members of the modern orders Archostemata and Adephaga are known from the Late Permian. Complex wood boring traces found in the Late Permian of China suggest that members of Polyphaga, the most diverse group of modern beetles, were also present by the Late Permian. Tetrapods The terrestrial fossil record of the Permian is patchy and temporally discontinuous. Early Permian records are dominated by equatorial Europe and North America, while those of the Middle and Late Permian are dominated by temperate Karoo Supergroup sediments of South Africa and the Ural region of European Russia. Early Permian terrestrial faunas of North America and Europe were dominated by primitive pelycosaur synapsids including the herbivorous edaphosaurids, and carnivorous sphenacodontids, diadectids and amphibians. Early Permian reptiles, such as acleistorhinids, were mostly small insectivores. Amniotes Synapsids (the group that would later include mammals) thrived and diversified greatly during the Cisuralian. Permian synapsids included some large members such as Dimetrodon. The special adaptations of synapsids enabled them to flourish in the drier climate of the Permian and they grew to dominate the vertebrates. A faunal turnover occurred around the transition between the Cisuralian and Guadalupian, with the decline of amphibians and the replacement of pelycosaurs (a paraphyletic group) with more advanced therapsids, although the decline of early synapsid clades was apparently a slow event that lasted about 20 Ma, from the Sakmarian to the end of the Kungurian. Predator-prey interactions among terrestrial synapsids became more dynamic. If terrestrial deposition ended around the end of the Cisuralian in North America and began in Russia during the early Guadalupian, a continuous record of the transition is not preserved. Uncertain dating has led to suggestions that there is a global hiatus in the terrestrial fossil record during the late Kungurian and early Roadian, referred to as "Olson's Gap" that obscures the nature of the transition. Other proposals have suggested that the North American and Russian records overlap, with the latest terrestrial North American deposition occurring during the Roadian, suggesting that there was an extinction event, dubbed "Olson's Extinction". The Middle Permian faunas of South Africa and Russia are dominated by therapsids, most abundantly by the diverse Dinocephalia. Dinocephalians become extinct at the end of the Middle Permian, during the Capitanian mass extinction event. Late Permian faunas are dominated by advanced therapsids such as the predatory sabertoothed gorgonopsians and herbivorous beaked dicynodonts, alongside large herbivorous pareiasaur parareptiles. The Archosauromorpha, the group of reptiles that would give rise to the pseudosuchians, dinosaurs, and pterosaurs in the following Triassic, first appeared and diversified during the Late Permian, including the first appearance of the Archosauriformes during the latest Permian. Cynodonts, the group of therapsids ancestral to modern mammals, first appeared and gained a worldwide distribution during the Late Permian. Another group of therapsids, the therocephalians (such as Lycosuchus), arose in the Middle Permian. There were no flying vertebrates, though the extinct lizard-like reptile family Weigeltisauridae from the Late Permian had extendable wings like modern gliding lizards, and are the oldest known gliding vertebrates. 
Amphibians Permian stem-amniotes consisted of lepospondyli and batrachosaurs, according to some phylogenies; according to others, stem-amniotes are represented only by diadectomorphs. Temnospondyls reached a peak of diversity in the Cisuralian, with a substantial decline during the Guadalupian-Lopingian following Olson's extinction, with the family diversity dropping below Carboniferous levels. Embolomeres, a group of aquatic, crocodile-like limbed vertebrates considered reptiliomorphs under some phylogenies, previously had their last records in the Cisuralian but are now known to have persisted into the Lopingian in China. Modern amphibians (lissamphibians) are suggested to have originated during the Permian, descending from a lineage of dissorophoid temnospondyls or lepospondyls. Fish The diversity of fish during the Permian is relatively low compared to the following Triassic. The dominant group of bony fishes during the Permian were the "Paleopterygii", a paraphyletic grouping of Actinopterygii that lie outside of Neopterygii. The earliest unequivocal members of Neopterygii appear during the Early Triassic, but a Permian origin is suspected. The diversity of coelacanths is relatively low throughout the Permian in comparison to other marine fishes, though there is an increase in diversity during the terminal Permian (Changhsingian), corresponding with the highest diversity in their evolutionary history during the Early Triassic. Diversity of freshwater fish faunas was generally low and dominated by lungfish and "Paleopterygians". The last common ancestor of all living lungfish is thought to have existed during the Early Permian. Though the fossil record is fragmentary, lungfish appear to have undergone an evolutionary diversification and size increase in freshwater habitats during the Early Permian, but subsequently declined during the middle and late Permian. Conodonts experienced the lowest diversity of their entire evolutionary history during the Permian. Permian chondrichthyan faunas are poorly known. Members of the chondrichthyan clade Holocephali, which contains living chimaeras, reached their apex of diversity during the Carboniferous-Permian, the most famous Permian representative being the "buzz-saw shark" Helicoprion, known for the unusual spiral-shaped tooth whorl in its lower jaw. Hybodonts, a group of shark-like chondrichthyans, were widespread and abundant members of marine and freshwater faunas throughout the Permian. Xenacanthiformes, another extinct group of shark-like chondrichthyans, were common in freshwater habitats, and represented the apex predators of freshwater ecosystems. Flora Four floristic provinces are recognised in the Permian: the Angaran, Euramerican, Gondwanan, and Cathaysian realms. The Carboniferous Rainforest Collapse would result in the replacement of lycopsid-dominated forests with tree-fern dominated ones during the late Carboniferous in Euramerica, and result in the differentiation of the Cathaysian floras from those of Euramerica. The Gondwanan floristic region was dominated by Glossopteridales, a group of woody gymnosperm plants, for most of the Permian, extending to high southern latitudes. The ecology of the most prominent glossopterid, Glossopteris, has been compared to that of bald cypress, living in mires with waterlogged soils. The tree-like calamites, distant relatives of modern horsetails, lived in coal swamps and grew in bamboo-like vertical thickets. 
A mostly complete specimen of Arthropitys from the Early Permian Chemnitz petrified forest of Germany demonstrates that they had complex branching patterns similar to modern angiosperm trees. By the Late Permian, tall, thin forests had become widespread across the globe, as evidenced by the global distribution of weigeltisaurids. The oldest likely record of Ginkgoales (the group containing Ginkgo and its close relatives) is Trichopitys heteromorpha from the earliest Permian of France. The oldest known fossils definitively assignable to modern cycads are known from the Late Permian. In Cathaysia, where a wet, tropical, frost-free climate prevailed, the Noeggerathiales, an extinct group of tree fern-like progymnosperms, were a common component of the flora. The earliest Permian (~298 million years ago) Cathaysian Wuda Tuff flora, representing a coal swamp community, has an upper canopy consisting of the lycopsid tree Sigillaria, with a lower canopy consisting of marattialean tree ferns and Noeggerathiales. Early conifers appeared in the Late Carboniferous, represented by primitive walchian conifers, but were replaced with more derived voltzialeans during the Permian. Permian conifers were very similar morphologically to their modern counterparts, and were adapted to stressed dry or seasonally dry climatic conditions. The increasing aridity, especially at low latitudes, facilitated the spread of conifers and their increasing prevalence throughout terrestrial ecosystems. Bennettitales, which would go on to become widespread in the Mesozoic, first appeared during the Cisuralian in China. Lyginopterids, which had declined in the late Pennsylvanian and subsequently have a patchy fossil record, survived into the Late Permian in Cathaysia and equatorial east Gondwana. Permian–Triassic extinction event The Permian ended with the most extensive extinction event recorded in paleontology: the Permian–Triassic extinction event. Between 90 and 95% of marine species became extinct, as well as 70% of all land organisms. It is also the only known mass extinction of insects. Recovery from the Permian–Triassic extinction event was protracted; on land, ecosystems took 30 million years to recover. Trilobites, which had thrived since Cambrian times, finally became extinct before the end of the Permian. Nautiloids, a subclass of cephalopods, surprisingly survived this occurrence. There is evidence that magma, in the form of flood basalt, poured onto the Earth's surface in what is now called the Siberian Traps, for thousands of years, contributing to the environmental stress that led to mass extinction. The reduced coastal habitat and highly increased aridity probably also contributed. Based on the amount of lava estimated to have been produced during this period, the worst-case scenario is the release of enough carbon dioxide from the eruptions to raise world temperatures five degrees Celsius. Another hypothesis involves ocean venting of hydrogen sulfide gas. Portions of the deep ocean would periodically lose all of their dissolved oxygen, allowing bacteria that live without oxygen to flourish and produce hydrogen sulfide gas. If enough hydrogen sulfide accumulates in an anoxic zone, the gas can rise into the atmosphere. Oxidizing gases in the atmosphere would normally destroy the toxic gas, but the hydrogen sulfide would eventually consume all of the available oxidizing gases. Hydrogen sulfide levels might have increased dramatically over a few hundred years. 
Models of such an event indicate that the gas would destroy ozone in the upper atmosphere allowing ultraviolet radiation to kill off species that had survived the toxic gas. There are species that can metabolize hydrogen sulfide. Another hypothesis builds on the flood basalt eruption theory. An increase in temperature of five degrees Celsius would not be enough to explain the death of 95% of life. But such warming could slowly raise ocean temperatures until frozen methane reservoirs below the ocean floor near coastlines melted, expelling enough methane (among the most potent greenhouse gases) into the atmosphere to raise world temperatures an additional five degrees Celsius. The frozen methane hypothesis helps explain the increase in carbon-12 levels found midway in the Permian–Triassic boundary layer. It also helps explain why the first phase of the layer's extinctions was land-based, the second was marine-based (and starting right after the increase in C-12 levels), and the third land-based again.
https://en.wikipedia.org/wiki/Pisces%20%28constellation%29
Pisces (constellation)
Pisces is a constellation of the zodiac. Its vast bulk – and its main asterism, viewed in most European cultures since Greco-Roman antiquity as a distant pair of fishes connected by one cord each that join at an apex – are in the Northern celestial hemisphere. Its old astronomical symbol is (♓︎). Its name is Latin for "fishes". It is between Aquarius, of similar size, to the southwest and Aries, which is smaller, to the east. The ecliptic and the celestial equator intersect within this constellation and in Virgo. At the March equinox, the Sun crosses the celestial equator at approximately this point in the sky. The point of zero right ascension and zero declination therefore lies within the boundaries of Pisces. Features The March equinox is currently located in Pisces, due south of Psc, and, due to precession, slowly drifting due west, just below the western fish towards Aquarius. Stars Although Pisces is a large constellation, there are only two stars brighter than magnitude 4 in Pisces. It is also the second dimmest of the zodiac constellations. Alrescha ("the cord"), otherwise Alpha Piscium (α Psc), 309.8 light-years, class A2, magnitude 3.62, is a variable binary star. Fumalsamakah ("mouth of the fish"), otherwise Beta Piscium (β Psc), 492 light-years, class B6Ve, magnitude 4.48. Delta Piscium (δ Psc), 305 light-years, class K5III, magnitude 4.44. Like other stars near the ecliptic, Delta Piscium is subject to lunar occultations. Epsilon Piscium (ε Psc), 190 light-years, class K0III, magnitude 4.27. Has a candidate exoplanet. Revati ("rich"), otherwise Zeta Piscium (ζ Psc), 148 light-years, class A7IV, magnitude 5.21. Quintuple star system. Alpherg ("emptying"), otherwise Eta Piscium (η Psc), 349 light-years, class G7 IIIa, magnitude 3.62. It is a Gamma Cassiopeiae variable with a weak magnetic field. Torcular ("thread"), otherwise Omicron Piscium (ο Psc), 258 light-years, class K0III, magnitude 4.2. It is an evolved red giant star on the horizontal branch. Omega Piscium (ω Psc), 106 light-years, class F4IV, magnitude 4.03. It is an F-type star that is either a subgiant or on the main sequence. Gamma Piscium (γ Psc), 138 light-years, magnitude 3.70. The star hosts an exoplanet discovered in 2021. Its spectral type is G8 III. Van Maanen's Star is the closest known solitary white dwarf to the Sun, with a dim apparent magnitude. It is located about 2° to the south of the star Delta Piscium, with a relatively high proper motion of 2.978″ annually along a position angle of 155.538°. It is too faint to be seen with the naked eye. Like other white dwarfs, it is a very dense star: its mass has been estimated to be about 67% of the Sun's, yet it has only 1% of the Sun's radius. The outer atmosphere has a temperature of approximately 6,110 K, which is relatively cool for a white dwarf. As all white dwarfs steadily radiate away their heat over time, this temperature can be used to estimate its age, thought to be around 3 billion years. It was originally thought to be an F-type star before the properties of white dwarfs were known. Because its stars are so dim, the constellation is essentially invisible in or near any major city owing to light pollution. Deep-sky objects M74 is a loosely wound (type Sc) spiral galaxy in Pisces, found at a distance of 30 million light years (redshift 0.0022). It has many clusters of young stars and the associated nebulae, showing extensive regions of star formation. 
It was discovered by Pierre Méchain, a French astronomer, in 1780. A type II-P supernova was discovered in the outer regions of M74 by Robert Evans in June 2003; the star that underwent the supernova was later identified as a red supergiant with a mass of 8 solar masses. It is the brightest member of the M74 Group. NGC 488 is an isolated face-on prototypical spiral galaxy. Two supernovae have been observed in the galaxy. NGC 520 is a pair of colliding galaxies located 105 million light-years away. CL 0024+1654 is a massive galaxy cluster that lenses the galaxy behind it, creating arc-shaped images of the background galaxy. The cluster is primarily made up of yellow elliptical and spiral galaxies, at a distance of 3.6 billion light-years from Earth (redshift 0.4), half as far away as the background galaxy, which is at a distance of 5.7 billion light-years (redshift 1.67). History and mythology Pisces originates from some composition of the Babylonian constellations Šinunutu4 "the great swallow" in current western Pisces, and Anunitum the "Lady of the Heaven", at the place of the northern fish. In the first-millennium BC texts known as the Astronomical Diaries, part of the constellation was also called DU.NU.NU (Rikis-nu.mi, "the fish cord or ribbon"). Greco-Roman period Pisces is associated with the Greek legend that Aphrodite and her son Eros either shape-shifted into forms of fishes to escape, or were rescued by two fishes. In the Greek version according to Hyginus, Aphrodite and Eros while visiting Syria fled from the monster Typhon by leaping into the Euphrates River and transforming into fishes (Poeticon astronomicon 2.30, citing Diognetus Erythraeus). The Roman variant of the story has Venus and Cupid (counterparts for Aphrodite and Eros) carried away from this danger on the backs of two fishes (Ovid Fasti 2.457ff). There is also a different origin tale that Hyginus preserved in another work. According to this, an egg rolled into the Euphrates, and some fishes nudged this to shore, after which the doves sat on the egg until Aphrodite (thereafter called the Syrian Goddess) hatched out of it. The fishes were then rewarded by being placed in the skies as a constellation (Fabulae 197). This story is also recorded by the Third Vatican Mythographer. Modern period In 1690, the astronomer Johannes Hevelius in his Firmamentum Sobiescianum regarded the constellation Pisces as being composed of four subdivisions: Piscis Boreus (the North Fish): σ – 68 – 65 – 67 – ψ1 – ψ2 – ψ3 – χ – φ – υ – 91 – τ – 82 – 78 Psc. Linum Boreum (the North Cord): χ – ρ,94 – VX(97) – η – π – ο – α Psc. Linum Austrinum (the South Cord): α – ξ – ν – μ – ζ – ε – δ – 41 – 35 – ω Psc. Piscis Austrinus (the South Fish): ω – ι – θ – 7 – β – 5 – κ,9 – λ – TX(19) Psc. "Piscis Austrinus" more often refers to a separate constellation in its own right. In 1754, the botanist and author John Hill proposed to sever a southern zone of Pisces as Testudo (the Turtle). 24 – 27 – YY(30) – 33 – 29 Psc., It would host a natural but quite faint asterism in which the star 20 Psc is the head of the turtle. While Admiral Smyth mentioned the proposal, it was largely neglected by other astronomers, and it is now obsolete. Western folklore The Fishes are in the German lore of Antenteh, who owned just a tub and a crude cabin when he met two magical fish. They offered him a wish, which he refused. However, his wife begged him to return to the fish and ask for a beautifully furnished home. This wish was granted, but her desires were not satisfied. 
She then asked to be a queen and have a palace, but when she asked to become a goddess, the fish became angry and took the palace and home, leaving the couple with the tub and cabin once again. The tub is sometimes recognized as the Great Square of Pegasus. In non-Western astronomy The stars of Pisces were incorporated into several constellations in Chinese astronomy. Wai-ping ("Outer Enclosure") was a fence that kept a pig farmer from falling into the marshes and kept the pigs where they belonged. It was represented by Alpha, Delta, Epsilon, Zeta, Mu, Nu, and Xi Piscium. The marshes were represented by the four stars designated Phi Ceti. The northern fish of Pisces was a part of the House of the Sandal, Koui-siou.
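The distances and redshifts quoted for the galaxies in the Deep-sky objects section above can be loosely cross-checked with Hubble's law at low redshift. The sketch below is a minimal illustration, not a statement from the article: the Hubble constant of 70 km/s/Mpc and the unit conversions are assumptions, and the linear approximation only holds for small redshifts, so the figure quoted for CL 0024+1654 (redshift 0.4) requires a full cosmological calculation instead.

```python
# Rough low-redshift distance check using Hubble's law, d ~ c*z / H0.
# H0 = 70 km/s/Mpc is an assumed, illustrative value.

C_KM_S = 299_792.458          # speed of light in km/s
H0 = 70.0                     # assumed Hubble constant in km/s/Mpc
LY_PER_MPC = 3.2616e6         # light-years per megaparsec

def hubble_distance_mly(z: float) -> float:
    """Approximate distance in millions of light-years, valid only for z << 1."""
    d_mpc = C_KM_S * z / H0
    return d_mpc * LY_PER_MPC / 1e6

print(f"M74 (z = 0.0022): ~{hubble_distance_mly(0.0022):.0f} million light-years")
# ~31 million light-years, consistent with the ~30 Mly quoted above for M74.
```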
https://en.wikipedia.org/wiki/Polaris
Polaris
Polaris is a star in the northern circumpolar constellation of Ursa Minor. It is designated α Ursae Minoris (Latinized to Alpha Ursae Minoris) and is commonly called the North Star or Pole Star. With an apparent magnitude that fluctuates around 1.98, it is the brightest star in the constellation and is readily visible to the naked eye at night. The position of the star lies less than 1° away from the north celestial pole, making it the current northern pole star. The stable position of the star in the Northern Sky makes it useful for navigation. As the closest Cepheid variable its distance is used as part of the cosmic distance ladder. The revised Hipparcos stellar parallax gives a distance to Polaris of about , while the successor mission Gaia gives a distance of about . Calculations by other methods vary widely. Although appearing to the naked eye as a single point of light, Polaris is a triple star system, composed of the primary, a yellow supergiant designated Polaris Aa, in orbit with a smaller companion, Polaris Ab; the pair is in a wider orbit with Polaris B. The outer pair AB were discovered in August 1779 by William Herschel, where the 'A' refers to what is now known to be the Aa/Ab pair. Stellar system Polaris Aa is an evolved yellow supergiant of spectral type F7Ib with 5.4 solar masses (). It is the first classical Cepheid to have a mass determined from its orbit. The two smaller companions are Polaris B, a F3 main-sequence star orbiting at a distance of (AU), and Polaris Ab (or P), a very close F6 main-sequence star with a mass of . Polaris B can be resolved with a modest telescope. William Herschel discovered the star in August 1779 using a reflecting telescope of his own, one of the best telescopes of the time. In January 2006, NASA released images, from the Hubble telescope, that showed the three members of the Polaris ternary system. The variable radial velocity of Polaris A was reported by W. W. Campbell in 1899, which suggested this star is a binary system. Since Polaris A is a known cepheid variable, J. H. Moore in 1927 demonstrated that the changes in velocity along the line of sight were due to a combination of the four-day pulsation period combined with a much longer orbital period and a large eccentricity of around 0.6. Moore published preliminary orbital elements of the system in 1929, giving an orbital period of about 29.7 years with an eccentricity of 0.63. This period was confirmed by proper motion studies performed by B. P. Gerasimovič in 1939. As part of her doctoral thesis, in 1955 E. Roemer used radial velocity data to derive an orbital period of 30.46 y for the Polaris A system, with an eccentricity of 0.64. K. W. Kamper in 1996 produced refined elements with a period of and an eccentricity of . In 2019, a study by R. I. Anderson gave a period of with an eccentricity of . There were once thought to be two more widely separated components—Polaris C and Polaris D—but these have been shown not to be physically associated with the Polaris system. Observation Variability Polaris Aa, the supergiant primary component, is a low-amplitude Population I classical Cepheid variable, although it was once thought to be a type II Cepheid due to its high galactic latitude. Cepheids constitute an important standard candle for determining distance, so Polaris, as the closest such star, is heavily studied. The variability of Polaris had been suspected since 1852; this variation was confirmed by Ejnar Hertzsprung in 1911. 
The range of brightness of Polaris is given as 1.86–2.13, but the amplitude has changed since discovery. Prior to 1963, the amplitude was over 0.1 magnitude and was very gradually decreasing. After 1966, it very rapidly decreased until it was less than 0.05 magnitude; since then, it has erratically varied near that range. It has been reported that the amplitude is now increasing again, a reversal not seen in any other Cepheid. The period, roughly 4 days, has also changed over time. It has steadily increased by around 4.5 seconds per year except for a hiatus in 1963–1965. This was originally thought to be due to secular redward evolution (a long-term drift toward cooler, redder surface temperatures) across the Cepheid instability strip, but it may be due to interference between the primary and the first-overtone pulsation modes. Authors disagree on whether Polaris is a fundamental or first-overtone pulsator and on whether it is crossing the instability strip for the first time or not. The temperature of Polaris varies by only a small amount during its pulsations, but the amount of this variation is variable and unpredictable. The erratic changes of temperature and the amplitude of temperature changes during each cycle, from less than 50 K to at least 170 K, may be related to the orbit with Polaris Ab. Research reported in Science suggests that Polaris is 2.5 times brighter today than when Ptolemy observed it, changing from third to second magnitude. Astronomer Edward Guinan considers this to be a remarkable change and is on record as saying that "if they are real, these changes are 100 times larger than [those] predicted by current theories of stellar evolution". In 2024, researchers led by Nancy Evans at the Harvard & Smithsonian measured the orbit of Polaris's smaller companion more accurately using the CHARA Array. During this observation campaign they also succeeded in imaging features on Polaris's surface; large bright and dark patches appeared in close-up images, changing over time. Further, the diameter of Polaris has been re-measured to , using the Gaia distance of light-years, and its mass was determined at . Role as pole star Because Polaris lies nearly in a direct line with the Earth's rotational axis "above" the North Pole (the north celestial pole), Polaris stands almost motionless in the sky, and all the stars of the northern sky appear to rotate around it. Therefore, it makes an excellent fixed point from which to draw measurements for celestial navigation and for astrometry. The elevation of the star above the horizon gives the approximate latitude of the observer. In 2018 Polaris was 0.66° (39.6 arcminutes) away from the pole of rotation (1.4 times the apparent diameter of the Moon) and so revolves around the pole in a small circle 1.3° in diameter. It will be closest to the pole (about 0.45 degree, or 27 arcminutes) soon after the year 2100. Because it is so close to the celestial north pole, its right ascension is changing rapidly due to the precession of Earth's axis, going from 2.5h in AD 2000 to 6h in AD 2100. Twice in each sidereal day Polaris's azimuth is true north; the rest of the time it is displaced eastward or westward, and the bearing must be corrected using tables or a rule of thumb. The best approximation is made using the leading edge of the "Big Dipper" asterism in the constellation Ursa Major. 
The leading edge (defined by the stars Dubhe and Merak) is referenced to a clock face, and the true azimuth of Polaris worked out for different latitudes. The apparent motion of Polaris towards and, in the future, away from the celestial pole, is due to the precession of the equinoxes. The celestial pole will move away from α UMi after the 21st century, passing close by Gamma Cephei by about the 41st century, moving towards Deneb by about the 91st century. The celestial pole was close to Thuban around 2750 BC, and during classical antiquity it was slightly closer to Kochab (β UMi) than to Polaris, although still about from either star. It was about the same angular distance from β UMi as to α UMi by the end of late antiquity. The Greek navigator Pytheas in ca. 320 BC described the celestial pole as devoid of stars. However, as one of the brighter stars close to the celestial pole, Polaris was used for navigation at least from late antiquity, and described as ἀεί φανής (aei phanēs) "always visible" by Stobaeus (5th century), also termed Λύχνος (Lychnos) akin to a burner or lamp and would reasonably be described as stella polaris from about the High Middle Ages and onwards, both in Greek and Latin. On his first trans-Atlantic voyage in 1492, Christopher Columbus had to correct for the "circle described by the pole star about the pole". In Shakespeare's play Julius Caesar, written around 1599, Caesar describes himself as being "as constant as the northern star", though in Caesar's time there was no constant northern star. Despite its relative brightness, it is not, as is popularly believed, the brightest star in the sky. Polaris was referenced in Nathaniel Bowditch's 1802 book, American Practical Navigator, where it is listed as one of the navigational stars. Names The modern name Polaris is shortened from Neo-Latin stella polaris "polar star", coined in the Renaissance when the star had approached the celestial pole to within a few degrees. Gemma Frisius, writing in 1547, referred to it as stella illa quae polaris dicitur ("that star which is called 'polar'"), placing it 3° 8' from the celestial pole. In 2016, the International Astronomical Union organized a Working Group on Star Names (WGSN) to catalog and standardize proper names for stars. The WGSN's first bulletin of July 2016 included a table of the first two batches of names approved by the WGSN; which included Polaris for the star α Ursae Minoris Aa. In antiquity, Polaris was not yet the closest naked-eye star to the celestial pole, and the entire constellation of Ursa Minor was used for navigation rather than any single star. Polaris moved close enough to the pole to be the closest naked-eye star, even though still at a distance of several degrees, in the early medieval period, and numerous names referring to this characteristic as polar star have been in use since the medieval period. In Old English, it was known as scip-steorra ("ship-star") . In the Old English rune poem, the T-rune is apparently associated with "a circumpolar constellation", or the planet Mars. In the Hindu Puranas, it became personified under the name Dhruva ("immovable, fixed"). In the later medieval period, it became associated with the Marian title of Stella Maris "Star of the Sea" (so in Bartholomaeus Anglicus, c. 1270s), due to an earlier transcription error. An older English name, attested since the 14th century, is lodestar "guiding star", cognate with the Old Norse leiðarstjarna, Middle High German leitsterne. 
The ancient name of the constellation Ursa Minor, Cynosura (from the Greek "the dog's tail"), became associated with the pole star in particular by the early modern period. An explicit identification of Mary as stella maris with the polar star (Stella Polaris), as well as the use of Cynosura as a name of the star, is evident in the title Cynosura seu Mariana Stella Polaris (i.e. "Cynosure, or the Marian Polar Star"), a collection of Marian poetry published by Nicolaus Lucensis (Niccolo Barsotti de Lucca) in 1655. Its name in traditional pre-Islamic Arab astronomy was al-Judayy الجدي ("the kid", in the sense of a juvenile goat ["le Chevreau"] in Description des Etoiles fixes), and that name was used in medieval Islamic astronomy as well. In those times, it was not yet as close to the north celestial pole as it is now, and used to rotate around the pole. It was invoked as a symbol of steadfastness in poetry, as "steadfast star" by Spenser. Shakespeare's sonnet 116 is an example of the symbolism of the north star as a guiding principle: "[Love] is the star to every wandering bark / Whose worth's unknown, although his height be taken." In Julius Caesar, he has Caesar explain his refusal to grant a pardon by saying, "I am as constant as the northern star/Of whose true-fixed and resting quality/There is no fellow in the firmament./The skies are painted with unnumbered sparks,/They are all fire and every one doth shine,/But there's but one in all doth hold his place;/So in the world" (III, i, 65–71). Of course, Polaris will not "constantly" remain as the north star due to precession, but this is only noticeable over centuries. In Inuit astronomy, Polaris is known as Nuutuittuq (syllabics: ). In traditional Lakota star knowledge, Polaris is named "Wičháȟpi Owáŋžila". This translates to "The Star that Sits Still". This name comes from a Lakota story in which he married Tȟapȟúŋ Šá Wíŋ, "Red Cheeked Woman". However, she fell from the heavens, and in his grief Wičháȟpi Owáŋžila stared down from "waŋkátu" (the above land) forever. The Plains Cree call the star in Nehiyawewin: acâhkos êkâ kâ-âhcît "the star that does not move" (syllabics: ). In Mi'kmawi'simk the star is named Tatapn. In the ancient Finnish worldview, the North Star has also been called taivaannapa and naulatähti ("the nailstar") because it seems to be attached to the firmament or even to act as a fastener for the sky when other stars orbit it. Since the starry sky seemed to rotate around it, the firmament is thought of as a wheel, with the star as the pivot on its axis. The names derived from it were sky pin and world pin. Distance Many recent papers calculate the distance to Polaris at about 433 light-years (133 parsecs), based on parallax measurements from the Hipparcos astrometry satellite. Older distance estimates were often slightly less, and research based on high resolution spectral analysis suggests it may be up to 110 light years closer (323 ly/99 pc). Polaris is the closest Cepheid variable to Earth so its physical parameters are of critical importance to the whole astronomical distance scale. It is also the only one with a dynamically measured mass. The Hipparcos spacecraft used stellar parallax to take measurements from 1989 and 1993 with the accuracy of 0.97 milliarcseconds (970 microarcseconds), and it obtained accurate measurements for stellar distances up to 1,000 pc away. The Hipparcos data was examined again with more advanced error correction and statistical techniques. 
Despite the advantages of Hipparcos astrometry, the uncertainty in its Polaris data has been pointed out and some researchers have questioned the accuracy of Hipparcos when measuring binary Cepheids like Polaris. The Hipparcos reduction specifically for Polaris has been re-examined and reaffirmed but there is still not widespread agreement about the distance. The next major step in high precision parallax measurements comes from Gaia, a space astrometry mission launched in 2013 and intended to measure stellar parallax to within 25 microarcseconds (μas). Although it was originally planned to limit Gaia's observations to stars fainter than magnitude 5.7, tests carried out during the commissioning phase indicated that Gaia could autonomously identify stars as bright as magnitude 3. When Gaia entered regular scientific operations in July 2014, it was configured to routinely process stars in the magnitude range 3 – 20. Beyond that limit, special procedures are used to download raw scanning data for the remaining 230 stars brighter than magnitude 3; methods to reduce and analyse these data are being developed; and it is expected that there will be "complete sky coverage at the bright end" with standard errors of "a few dozen μas". Gaia Data Release 2 does not include a parallax for Polaris, but a distance inferred from it is (445.5 ly) for Polaris B, somewhat further than most previous estimates and several times more accurate. This was further improved to (447.6 ly) with the publication of the Gaia Data Release 3 catalog on 13 June 2022, which superseded Gaia Data Release 2. In popular culture Polaris is depicted in the flag and coat of arms of the Canadian Inuit territory of Nunavut, the flags of the U.S. states of Alaska and Minnesota, and the flag of the U.S. city of Duluth, Minnesota.
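As a back-of-the-envelope illustration of how the Hipparcos and Gaia figures in the Distance section translate a parallax into a distance, the sketch below applies the standard relation that the distance in parsecs is the reciprocal of the parallax in arcseconds. The 7.29 mas input is an assumed, illustrative value chosen to land near the Gaia Data Release 3 distance quoted above for Polaris B; it is not a number taken from the catalog.

```python
# Minimal sketch of the parallax-to-distance conversion used by Hipparcos and Gaia:
# distance [pc] = 1 / parallax [arcsec]. The 7.29 mas value is an assumption.

LY_PER_PC = 3.26156  # light-years per parsec

def parallax_to_distance(parallax_mas: float) -> tuple[float, float]:
    """Return (distance in parsecs, distance in light-years) for a parallax in mas."""
    parsecs = 1.0 / (parallax_mas / 1000.0)
    return parsecs, parsecs * LY_PER_PC

pc, ly = parallax_to_distance(7.29)
print(f"parallax 7.29 mas -> {pc:.1f} pc = {ly:.0f} light-years")  # ~137 pc, ~447 ly
```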
https://en.wikipedia.org/wiki/Parabola
Parabola
In mathematics, a parabola is a plane curve which is mirror-symmetrical and is approximately U-shaped. It fits several superficially different mathematical descriptions, which can all be proved to define exactly the same curves. One description of a parabola involves a point (the focus) and a line (the directrix). The focus does not lie on the directrix. The parabola is the locus of points in that plane that are equidistant from the directrix and the focus. Another description of a parabola is as a conic section, created from the intersection of a right circular conical surface and a plane parallel to another plane that is tangential to the conical surface. The graph of a quadratic function (with ) is a parabola with its axis parallel to the -axis. Conversely, every such parabola is the graph of a quadratic function. The line perpendicular to the directrix and passing through the focus (that is, the line that splits the parabola through the middle) is called the "axis of symmetry". The point where the parabola intersects its axis of symmetry is called the "vertex" and is the point where the parabola is most sharply curved. The distance between the vertex and the focus, measured along the axis of symmetry, is the "focal length". The "latus rectum" is the chord of the parabola that is parallel to the directrix and passes through the focus. Parabolas can open up, down, left, right, or in some other arbitrary direction. Any parabola can be repositioned and rescaled to fit exactly on any other parabola—that is, all parabolas are geometrically similar. Parabolas have the property that, if they are made of material that reflects light, then light that travels parallel to the axis of symmetry of a parabola and strikes its concave side is reflected to its focus, regardless of where on the parabola the reflection occurs. Conversely, light that originates from a point source at the focus is reflected into a parallel ("collimated") beam, leaving the parabola parallel to the axis of symmetry. The same effects occur with sound and other waves. This reflective property is the basis of many practical uses of parabolas. The parabola has many important applications, from a parabolic antenna or parabolic microphone to automobile headlight reflectors and the design of ballistic missiles. It is frequently used in physics, engineering, and many other areas. History The earliest known work on conic sections was by Menaechmus in the 4th century BC. He discovered a way to solve the problem of doubling the cube using parabolas. (The solution, however, does not meet the requirements of compass-and-straightedge construction.) The area enclosed by a parabola and a line segment, the so-called "parabola segment", was computed by Archimedes by the method of exhaustion in the 3rd century BC, in his The Quadrature of the Parabola. The name "parabola" is due to Apollonius, who discovered many properties of conic sections. It means "application", referring to "application of areas" concept, that has a connection with this curve, as Apollonius had proved. The focus–directrix property of the parabola and other conic sections was mentioned in the works of Pappus. Galileo showed that the path of a projectile follows a parabola, a consequence of uniform acceleration due to gravity. The idea that a parabolic reflector could produce an image was already well known before the invention of the reflecting telescope. 
Designs were proposed in the early to mid-17th century by many mathematicians, including René Descartes, Marin Mersenne, and James Gregory. When Isaac Newton built the first reflecting telescope in 1668, he skipped using a parabolic mirror because of the difficulty of fabrication, opting for a spherical mirror. Parabolic mirrors are used in most modern reflecting telescopes and in satellite dishes and radar receivers. Definition as a locus of points A parabola can be defined geometrically as a set of points (locus of points) in the Euclidean plane: The midpoint of the perpendicular from the focus onto the directrix is called the vertex, and the line is the axis of symmetry of the parabola. In a Cartesian coordinate system Axis of symmetry parallel to the y axis If one introduces Cartesian coordinates, such that and the directrix has the equation , one obtains for a point from the equation . Solving for yields This parabola is U-shaped (opening to the top). The horizontal chord through the focus (see picture in opening section) is called the latus rectum; one half of it is the semi-latus rectum. The latus rectum is parallel to the directrix. The semi-latus rectum is designated by the letter . From the picture one obtains The latus rectum is defined similarly for the other two conics – the ellipse and the hyperbola. The latus rectum is the line drawn through a focus of a conic section parallel to the directrix and terminated both ways by the curve. For any case, is the radius of the osculating circle at the vertex. For a parabola, the semi-latus rectum, , is the distance of the focus from the directrix. Using the parameter , the equation of the parabola can be rewritten as More generally, if the vertex is , the focus , and the directrix , one obtains the equation Remarks: In the case of the parabola has a downward opening. The presumption that the axis is parallel to the y axis allows one to consider a parabola as the graph of a polynomial of degree 2, and conversely: the graph of an arbitrary polynomial of degree 2 is a parabola (see next section). If one exchanges and , one obtains equations of the form . These parabolas open to the left (if ) or to the right (if ). General position If the focus is , and the directrix , then one obtains the equation (the left side of the equation uses the Hesse normal form of a line to calculate the distance ). For a parametric equation of a parabola in general position see . The implicit equation of a parabola is defined by an irreducible polynomial of degree two: such that or, equivalently, such that is the square of a linear polynomial. As a graph of a function The previous section shows that any parabola with the origin as vertex and the y axis as axis of symmetry can be considered as the graph of a function For the parabolas are opening to the top, and for are opening to the bottom (see picture). From the section above one obtains: The focus is , the focal length , the semi-latus rectum is , the vertex is , the directrix has the equation , the tangent at point has the equation . For the parabola is the unit parabola with equation . Its focus is , the semi-latus rectum , and the directrix has the equation . The general function of degree 2 is Completing the square yields which is the equation of a parabola with the axis (parallel to the y axis), the focal length , the semi-latus rectum , the vertex , the focus , the directrix , the point of the parabola intersecting the y axis has coordinates , the tangent at a point on the y axis has the equation . 
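The relations listed above for the parabola as the graph of a quadratic function can be bundled into a short routine. The sketch below is illustrative rather than part of the article: it assumes the usual orientation (axis parallel to the y axis) and uses the standard formulas for the vertex, the signed focal parameter 1/(4a), the focus, the directrix and the semi-latus rectum; the function name and example coefficients are invented for the demonstration.

```python
# For y = a*x**2 + b*x + c (a != 0), compute vertex, focal length, focus,
# directrix and semi-latus rectum, following the completing-the-square result above.

def parabola_elements(a: float, b: float, c: float) -> dict:
    if a == 0:
        raise ValueError("a must be nonzero for a parabola")
    xv = -b / (2 * a)                 # vertex x-coordinate
    yv = c - b * b / (4 * a)          # vertex y-coordinate
    f_signed = 1 / (4 * a)            # signed focal parameter (negative if opening down)
    return {
        "vertex": (xv, yv),
        "focal_length": abs(f_signed),
        "focus": (xv, yv + f_signed),
        "directrix_y": yv - f_signed,
        "semi_latus_rectum": 1 / (2 * abs(a)),
    }

# Example: y = x**2 has vertex (0, 0), focus (0, 0.25) and directrix y = -0.25.
print(parabola_elements(1.0, 0.0, 0.0))
```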
Similarity to the unit parabola Two objects in the Euclidean plane are similar if one can be transformed to the other by a similarity, that is, an arbitrary composition of rigid motions (translations and rotations) and uniform scalings. A parabola with vertex can be transformed by the translation to one with the origin as vertex. A suitable rotation around the origin can then transform the parabola to one that has the axis as axis of symmetry. Hence the parabola can be transformed by a rigid motion to a parabola with an equation . Such a parabola can then be transformed by the uniform scaling into the unit parabola with equation . Thus, any parabola can be mapped to the unit parabola by a similarity. A synthetic approach, using similar triangles, can also be used to establish this result. The general result is that two conic sections (necessarily of the same type) are similar if and only if they have the same eccentricity. Therefore, only circles (all having eccentricity 0) share this property with parabolas (all having eccentricity 1), while general ellipses and hyperbolas do not. There are other simple affine transformations that map the parabola onto the unit parabola, such as . But this mapping is not a similarity, and only shows that all parabolas are affinely equivalent (see ). As a special conic section The pencil of conic sections with the x axis as axis of symmetry, one vertex at the origin (0, 0) and the same semi-latus rectum can be represented by the equation with the eccentricity. For the conic is a circle (osculating circle of the pencil), for an ellipse, for the parabola with equation for a hyperbola (see picture). In polar coordinates If , the parabola with equation (opening to the right) has the polar representation where . Its vertex is , and its focus is . If one shifts the origin into the focus, that is, , one obtains the equation Remark 1: Inverting this polar form shows that a parabola is the inverse of a cardioid. Remark 2: The second polar form is a special case of a pencil of conics with focus (see picture): ( is the eccentricity). Conic section and quadratic form Diagram, description, and definitions The diagram represents a cone with its axis . The point A is its apex. An inclined cross-section of the cone, shown in pink, is inclined from the axis by the same angle , as the side of the cone. According to the definition of a parabola as a conic section, the boundary of this pink cross-section EPD is a parabola. A cross-section perpendicular to the axis of the cone passes through the vertex P of the parabola. This cross-section is circular, but appears elliptical when viewed obliquely, as is shown in the diagram. Its centre is V, and is a diameter. We will call its radius . Another perpendicular to the axis, circular cross-section of the cone is farther from the apex A than the one just described. It has a chord , which joins the points where the parabola intersects the circle. Another chord is the perpendicular bisector of and is consequently a diameter of the circle. These two chords and the parabola's axis of symmetry all intersect at the point M. All the labelled points, except D and E, are coplanar. They are in the plane of symmetry of the whole figure. This includes the point F, which is not mentioned above. It is defined and discussed below, in . Let us call the length of and of , and the length of  . 
Derivation of quadratic equation The lengths of and are: Using the intersecting chords theorem on the chords and , we get Substituting: Rearranging: For any given cone and parabola, and are constants, but and are variables that depend on the arbitrary height at which the horizontal cross-section BECD is made. This last equation shows the relationship between these variables. They can be interpreted as Cartesian coordinates of the points D and E, in a system in the pink plane with P as its origin. Since is squared in the equation, the fact that D and E are on opposite sides of the axis is unimportant. If the horizontal cross-section moves up or down, toward or away from the apex of the cone, D and E move along the parabola, always maintaining the relationship between and shown in the equation. The parabolic curve is therefore the locus of points where the equation is satisfied, which makes it a Cartesian graph of the quadratic function in the equation. Focal length It is proved in a preceding section that if a parabola has its vertex at the origin, and if it opens in the positive direction, then its equation is , where is its focal length. Comparing this with the last equation above shows that the focal length of the parabola in the cone is . Position of the focus In the diagram above, the point V is the foot of the perpendicular from the vertex of the parabola to the axis of the cone. The point F is the foot of the perpendicular from the point V to the plane of the parabola. By symmetry, F is on the axis of symmetry of the parabola. Angle VPF is complementary to , and angle PVF is complementary to angle VPF, therefore angle PVF is . Since the length of is , the distance of F from the vertex of the parabola is . It is shown above that this distance equals the focal length of the parabola, which is the distance from the vertex to the focus. The focus and the point F are therefore equally distant from the vertex, along the same line, which implies that they are the same point. Therefore, the point F, defined above, is the focus of the parabola. This discussion started from the definition of a parabola as a conic section, but it has now led to a description as a graph of a quadratic function. This shows that these two descriptions are equivalent. They both define curves of exactly the same shape. Alternative proof with Dandelin spheres An alternative proof can be done using Dandelin spheres. It works without calculation and uses elementary geometric considerations only (see the derivation below). The intersection of an upright cone by a plane , whose inclination from vertical is the same as a generatrix (a.k.a. generator line, a line containing the apex and a point on the cone surface) of the cone, is a parabola (red curve in the diagram). This generatrix is the only generatrix of the cone that is parallel to plane . Otherwise, if there are two generatrices parallel to the intersecting plane, the intersection curve will be a hyperbola (or degenerate hyperbola, if the two generatrices are in the intersecting plane). If there is no generatrix parallel to the intersecting plane, the intersection curve will be an ellipse or a circle (or a point). Let plane be the plane that contains the vertical axis of the cone and line . The inclination of plane from vertical is the same as line means that, viewing from the side (that is, the plane is perpendicular to plane ), . 
In order to prove the directrix property of a parabola (see above), one uses a Dandelin sphere , which is a sphere that touches the cone along a circle and plane at point . The plane containing the circle intersects with plane at line . There is a mirror symmetry in the system consisting of plane , Dandelin sphere and the cone (the plane of symmetry is ). Since the plane containing the circle is perpendicular to plane , and , their intersection line must also be perpendicular to plane . Since line is in plane , . It turns out that is the focus of the parabola, and is the directrix of the parabola. Let be an arbitrary point of the intersection curve. The generatrix of the cone containing intersects circle at point . The line segments and are tangential to the sphere , and hence are of equal length. Generatrix intersects the circle at point . The line segments and are tangential to the sphere , and hence are of equal length. Let line be the line parallel to and passing through point . Since , and point is in plane , line must be in plane . Since , we know that as well. Let point be the foot of the perpendicular from point to line , that is, is a segment of line , and hence . From intercept theorem and we know that . Since , we know that , which means that the distance from to the focus is equal to the distance from to the directrix . Proof of the reflective property The reflective property states that if a parabola can reflect light, then light that enters it travelling parallel to the axis of symmetry is reflected toward the focus. This is derived from geometrical optics, based on the assumption that light travels in rays. Consider the parabola . Since all parabolas are similar, this simple case represents all others. Construction and definitions The point E is an arbitrary point on the parabola. The focus is F, the vertex is A (the origin), and the line is the axis of symmetry. The line is parallel to the axis of symmetry, intersects the axis at D and intersects the directrix at C. The point B is the midpoint of the line segment . Deductions The vertex A is equidistant from the focus F and from the directrix. Since C is on the directrix, the coordinates of F and C are equal in absolute value and opposite in sign. B is the midpoint of . Its coordinate is half that of D, that is, . The slope of the line is the quotient of the lengths of and , which is . But is also the slope (first derivative) of the parabola at E. Therefore, the line is the tangent to the parabola at E. The distances and are equal because E is on the parabola, F is the focus and C is on the directrix. Therefore, since B is the midpoint of , triangles △FEB and △CEB are congruent (three sides), which implies that the angles marked are congruent. (The angle above E is vertically opposite angle ∠BEC.) This means that a ray of light that enters the parabola and arrives at E travelling parallel to the axis of symmetry will be reflected by the line so it travels along the line , as shown in red in the diagram (assuming that the lines can somehow reflect light). Since is the tangent to the parabola at E, the same reflection will be done by an infinitesimal arc of the parabola at E. Therefore, light that enters the parabola and arrives at E travelling parallel to the axis of symmetry of the parabola is reflected by the parabola toward its focus. This conclusion about reflected light applies to all points on the parabola, as is shown on the left side of the diagram. This is the reflective property. 
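The reflective property just proved can also be checked numerically. The sketch below assumes the parabola y = x²/(4f), reflects an axis-parallel ray about the tangent at an arbitrary point, and verifies that the reflected direction points at the focus; the helper names and sample values are illustrative and not part of the geometric argument above.

```python
# Numerical spot-check of the reflective property for y = x**2 / (4f):
# an axis-parallel ray reflected at a point E of the parabola heads for the focus (0, f).
import math

def reflected_direction_hits_focus(f: float, x0: float) -> bool:
    E = (x0, x0 * x0 / (4 * f))                  # point of incidence on the parabola
    m = x0 / (2 * f)                             # slope of the tangent at E
    n = (-m, 1.0)                                # a normal vector to the tangent
    n = (n[0] / math.hypot(*n), n[1] / math.hypot(*n))
    d = (0.0, -1.0)                              # incoming ray, parallel to the axis
    dot = d[0] * n[0] + d[1] * n[1]
    r = (d[0] - 2 * dot * n[0], d[1] - 2 * dot * n[1])   # reflected direction
    to_focus = (0.0 - E[0], f - E[1])
    cross = r[0] * to_focus[1] - r[1] * to_focus[0]      # zero if the two are parallel
    return abs(cross) < 1e-9

print(all(reflected_direction_hits_focus(1.0, x) for x in (-3.0, -0.5, 0.7, 2.0)))  # True
```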
Other consequences There are other theorems that can be deduced simply from the above argument. Tangent bisection property The above proof and the accompanying diagram show that the tangent bisects the angle ∠FEC. In other words, the tangent to the parabola at any point bisects the angle between the lines joining the point to the focus and perpendicularly to the directrix. Intersection of a tangent and perpendicular from focus Since triangles △FBE and △CBE are congruent, is perpendicular to the tangent . Since B is on the axis, which is the tangent to the parabola at its vertex, it follows that the point of intersection between any tangent to a parabola and the perpendicular from the focus to that tangent lies on the line that is tangential to the parabola at its vertex. See animated diagram and pedal curve. Reflection of light striking the convex side If light travels along the line , it moves parallel to the axis of symmetry and strikes the convex side of the parabola at E. It is clear from the above diagram that this light will be reflected directly away from the focus, along an extension of the segment . Alternative proofs The above proofs of the reflective and tangent bisection properties use a line of calculus. Here a geometric proof is presented. In this diagram, F is the focus of the parabola, and T and U lie on its directrix. P is an arbitrary point on the parabola. is perpendicular to the directrix, and the line bisects angle ∠FPT. Q is another point on the parabola, with perpendicular to the directrix. We know that  =  and  = . Clearly,  > , so  > . All points on the bisector are equidistant from F and T, but Q is closer to F than to T. This means that Q is to the left of , that is, on the same side of it as the focus. The same would be true if Q were located anywhere else on the parabola (except at the point P), so the entire parabola, except the point P, is on the focus side of . Therefore, is the tangent to the parabola at P. Since it bisects the angle ∠FPT, this proves the tangent bisection property. The logic of the last paragraph can be applied to modify the above proof of the reflective property. It effectively proves the line to be the tangent to the parabola at E if the angles are equal. The reflective property follows as shown previously. Pin and string construction The definition of a parabola by its focus and directrix can be used for drawing it with help of pins and strings: Choose the focus and the directrix of the parabola. Take a triangle of a set square and prepare a string with length (see diagram). Pin one end of the string at point of the triangle and the other one to the focus . Position the triangle such that the second edge of the right angle is free to slide along the directrix. Take a pen and hold the string tight to the triangle. While moving the triangle along the directrix, the pen draws an arc of a parabola, because of (see definition of a parabola). Properties related to Pascal's theorem A parabola can be considered as the affine part of a non-degenerated projective conic with a point on the line of infinity , which is the tangent at . The 5-, 4- and 3- point degenerations of Pascal's theorem are properties of a conic dealing with at least one tangent. If one considers this tangent as the line at infinity and its point of contact as the point at infinity of the y axis, one obtains three statements for a parabola. The following properties of a parabola deal only with terms connect, intersect, parallel, which are invariants of similarities. 
So, it is sufficient to prove any property for the unit parabola with equation . 4-points property Any parabola can be described in a suitable coordinate system by an equation . Proof: straightforward calculation for the unit parabola . Application: The 4-points property of a parabola can be used for the construction of point , while and are given. Remark: the 4-points property of a parabola is an affine version of the 5-point degeneration of Pascal's theorem. 3-points–1-tangent property Let be three points of the parabola with equation and the intersection of the secant line with the line and the intersection of the secant line with the line (see picture). Then the tangent at point is parallel to the line . (The lines and are parallel to the axis of the parabola.) Proof: can be performed for the unit parabola . A short calculation shows: line has slope which is the slope of the tangent at point . Application: The 3-points-1-tangent-property of a parabola can be used for the construction of the tangent at point , while are given. Remark: The 3-points-1-tangent-property of a parabola is an affine version of the 4-point-degeneration of Pascal's theorem. 2-points–2-tangents property Let be two points of the parabola with equation , and the intersection of the tangent at point with the line , and the intersection of the tangent at point with the line (see picture). Then the secant is parallel to the line . (The lines and are parallel to the axis of the parabola.) Proof: straight forward calculation for the unit parabola . Application: The 2-points–2-tangents property can be used for the construction of the tangent of a parabola at point , if and the tangent at are given. Remark 1: The 2-points–2-tangents property of a parabola is an affine version of the 3-point degeneration of Pascal's theorem. Remark 2: The 2-points–2-tangents property should not be confused with the following property of a parabola, which also deals with 2 points and 2 tangents, but is not related to Pascal's theorem. Axis direction The statements above presume the knowledge of the axis direction of the parabola, in order to construct the points . The following property determines the points by two given points and their tangents only, and the result is that the line is parallel to the axis of the parabola. Let be two points of the parabola , and be their tangents; be the intersection of the tangents , be the intersection of the parallel line to through with the parallel line to through (see picture). Then the line is parallel to the axis of the parabola and has the equation Proof: can be done (like the properties above) for the unit parabola . Application: This property can be used to determine the direction of the axis of a parabola, if two points and their tangents are given. An alternative way is to determine the midpoints of two parallel chords, see section on parallel chords. Remark: This property is an affine version of the theorem of two perspective triangles of a non-degenerate conic. Related: Chord has two additional properties: Its slope is the arithmetic average of the slopes of tangents and . It is parallel to the tangent at the intersection of with the parabola. Steiner generation Parabola Steiner established the following procedure for the construction of a non-degenerate conic (see Steiner conic): This procedure can be used for a simple construction of points on the parabola : Consider the pencil at the vertex and the set of lines that are parallel to the y axis. Let be a point on the parabola, and , . 
The line segment is divided into n equally spaced segments, and this division is projected (in the direction ) onto the line segment (see figure). This projection gives rise to a projective mapping from pencil onto the pencil . The intersection of the line and the i-th parallel to the y axis is a point on the parabola. Proof: straightforward calculation. Remark: Steiner's generation is also available for ellipses and hyperbolas. Dual parabola A dual parabola consists of the set of tangents of an ordinary parabola. The Steiner generation of a conic can be applied to the generation of a dual conic by changing the meanings of points and lines: In order to generate elements of a dual parabola, one starts with three points not on a line, divides the line sections and each into equally spaced line segments and adds numbers as shown in the picture. Then the lines are tangents of a parabola, hence elements of a dual parabola. The parabola is a Bézier curve of degree 2 with the control points . The proof is a consequence of the de Casteljau algorithm for a Bézier curve of degree 2. Inscribed angles and the 3-point form A parabola with equation is uniquely determined by three points with different x coordinates. The usual procedure to determine the coefficients is to insert the point coordinates into the equation. The result is a linear system of three equations, which can be solved by Gaussian elimination or Cramer's rule, for example. An alternative way uses the inscribed angle theorem for parabolas. In the following, the angle of two lines will be measured by the difference of the slopes of the line with respect to the directrix of the parabola. That is, for a parabola of equation the angle between two lines of equations is measured by Analogous to the inscribed angle theorem for circles, one has the inscribed angle theorem for parabolas: (Proof: straightforward calculation: If the points are on a parabola, one may translate the coordinates for having the equation , then one has if the points are on the parabola.) A consequence is that the equation (in ) of the parabola determined by 3 points with different coordinates is (if two coordinates are equal, there is no parabola with directrix parallel to the axis, which passes through the points) Multiplying by the denominators that depend on one obtains the more standard form Pole–polar relation In a suitable coordinate system any parabola can be described by an equation . The equation of the tangent at a point is One obtains the function on the set of points of the parabola onto the set of tangents. Obviously, this function can be extended onto the set of all points of to a bijection between the points of and the lines with equations . The inverse mapping is This relation is called the pole–polar relation of the parabola, where the point is the pole, and the corresponding line its polar. By calculation, one checks the following properties of the pole–polar relation of the parabola: For a point (pole) on the parabola, the polar is the tangent at this point (see picture: ). For a pole outside the parabola the intersection points of its polar with the parabola are the touching points of the two tangents passing (see picture: ). For a point within the parabola the polar has no point with the parabola in common (see picture: and ). The intersection point of two polar lines (for example, ) is the pole of the connecting line of their poles (in example: ). Focus and directrix of the parabola are a pole–polar pair. 
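Taking the unit parabola y = x² as a concrete case of the pole–polar relation just described, the sketch below computes the polar of a pole and confirms numerically that, for a pole lying outside the parabola, the polar meets the parabola exactly at the points where the two tangents through the pole touch it. The choice of the unit parabola and the function names are illustrative assumptions, not quotations from the article.

```python
# Pole-polar relation for the unit parabola y = x**2: the polar of (x0, y0)
# is the line y = 2*x0*x - y0. For an outside pole, the polar's intersections
# with the parabola coincide with the contact points of the tangents from the pole.
import math

def polar_line(x0, y0):
    """Return (slope, intercept) of the polar of pole (x0, y0) with respect to y = x**2."""
    return 2 * x0, -y0

def tangent_contacts(x0, y0):
    """x-coordinates where tangents through (x0, y0) touch y = x**2 (pole outside)."""
    disc = x0 * x0 - y0
    if disc < 0:
        raise ValueError("pole lies inside the parabola: no real tangents")
    return x0 - math.sqrt(disc), x0 + math.sqrt(disc)

def polar_intersections(x0, y0):
    """x-coordinates where the polar of (x0, y0) meets y = x**2."""
    m, t = polar_line(x0, y0)
    disc = m * m + 4 * t
    return (m - math.sqrt(disc)) / 2, (m + math.sqrt(disc)) / 2

pole = (1.5, -2.0)
print(tangent_contacts(*pole))     # identical pair of contact points...
print(polar_intersections(*pole))  # ...confirming the property numerically
```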
Remark: Pole–polar relations also exist for ellipses and hyperbolas. Tangent properties Two tangent properties related to the latus rectum Let the line of symmetry intersect the parabola at point Q, and denote the focus as point F and its distance from point Q as . Let the perpendicular to the line of symmetry, through the focus, intersect the parabola at a point T. Then (1) the distance from F to T is , and (2) a tangent to the parabola at point T intersects the line of symmetry at a 45° angle. Orthoptic property If two tangents to a parabola are perpendicular to each other, then they intersect on the directrix. Conversely, two tangents that intersect on the directrix are perpendicular. In other words, at any point on the directrix the whole parabola subtends a right angle. Lambert's theorem Let three tangents to a parabola form a triangle. Then Lambert's theorem states that the focus of the parabola lies on the circumcircle of the triangle. Tsukerman's converse to Lambert's theorem states that, given three lines that bound a triangle, if two of the lines are tangent to a parabola whose focus lies on the circumcircle of the triangle, then the third line is also tangent to the parabola. Facts related to chords and arcs Focal length calculated from parameters of a chord Suppose a chord crosses a parabola perpendicular to its axis of symmetry. Let the length of the chord between the points where it intersects the parabola be and the distance from the vertex of the parabola to the chord, measured along the axis of symmetry, be . The focal length, , of the parabola is given by Area enclosed between a parabola and a chord The area enclosed between a parabola and a chord (see diagram) is two-thirds of the area of a parallelogram that surrounds it. One side of the parallelogram is the chord, and the opposite side is a tangent to the parabola. The slope of the other parallel sides is irrelevant to the area. Often, as here, they are drawn parallel with the parabola's axis of symmetry, but this is arbitrary. A theorem equivalent to this one, but different in details, was derived by Archimedes in the 3rd century BCE. He used the areas of triangles, rather than that of the parallelogram. See The Quadrature of the Parabola. If the chord has length and is perpendicular to the parabola's axis of symmetry, and if the perpendicular distance from the parabola's vertex to the chord is , the parallelogram is a rectangle, with sides of and . The area of the parabolic segment enclosed by the parabola and the chord is therefore This formula can be compared with the area of a triangle: . In general, the enclosed area can be calculated as follows. First, locate the point on the parabola where its slope equals that of the chord. This can be done with calculus, or by using a line that is parallel to the axis of symmetry of the parabola and passes through the midpoint of the chord. The required point is where this line intersects the parabola. Then, using the formula given in Distance from a point to a line, calculate the perpendicular distance from this point to the chord. Multiply this by the length of the chord to get the area of the parallelogram, then by 2/3 to get the required enclosed area. Corollary concerning midpoints and endpoints of chords A corollary of the above discussion is that if a parabola has several parallel chords, their midpoints all lie on a line parallel to the axis of symmetry. 
If tangents to the parabola are drawn through the endpoints of any of these chords, the two tangents intersect on this same line parallel to the axis of symmetry (see Axis-direction of a parabola). Arc length If a point X is located on a parabola with focal length , and if is the perpendicular distance from X to the axis of symmetry of the parabola, then the lengths of arcs of the parabola that terminate at X can be calculated from and as follows, assuming they are all expressed in the same units. This quantity is the length of the arc between X and the vertex of the parabola. The length of the arc between X and the symmetrically opposite point on the other side of the parabola is . The perpendicular distance can be given a positive or negative sign to indicate on which side of the axis of symmetry X is situated. Reversing the sign of reverses the signs of and without changing their absolute values. If these quantities are signed, the length of the arc between any two points on the parabola is always shown by the difference between their values of . The calculation can be simplified by using the properties of logarithms: This can be useful, for example, in calculating the size of the material needed to make a parabolic reflector or parabolic trough. This calculation can be used for a parabola in any orientation. It is not restricted to the situation where the axis of symmetry is parallel to the y axis. A geometrical construction to find a sector area S is the focus, and V is the principal vertex of the parabola VG. Draw VX perpendicular to SV. Take any point B on VG and drop a perpendicular BQ from B to VX. Draw perpendicular ST intersecting BQ, extended if necessary, at T. At B draw the perpendicular BJ, intersecting VX at J. For the parabola, the segment VBV, the area enclosed by the chord VB and the arc VB, is equal to ∆VBQ / 3, also . The area of the parabolic sector . Since triangles TSB and QBJ are similar, Therefore, the area of the parabolic sector and can be found from the length of VJ, as found above. A circle through S, V and B also passes through J. Conversely, if a point, B on the parabola VG is to be found so that the area of the sector SVB is equal to a specified value, determine the point J on VX and construct a circle through S, V and J. Since SJ is the diameter, the center of the circle is at its midpoint, and it lies on the perpendicular bisector of SV, a distance of one half VJ from SV. The required point B is where this circle intersects the parabola. If a body traces the path of the parabola due to an inverse square force directed towards S, the area SVB increases at a constant rate as point B moves forward. It follows that J moves at constant speed along VX as B moves along the parabola. If the speed of the body at the vertex where it is moving perpendicularly to SV is v, then the speed of J is equal to . The construction can be extended simply to include the case where neither radius coincides with the axis SV as follows. Let A be a fixed point on VG between V and B, and point H be the intersection on VX with the perpendicular to SA at A. From the above, the area of the parabolic sector . Conversely, if it is required to find the point B for a particular area SAB, find point J from HJ and point B as before. By Book 1, Proposition 16, Corollary 6 of Newton's Principia, the speed of a body moving along a parabola with a force directed towards the focus is inversely proportional to the square root of the radius. 
If the speed at A is v, then at the vertex V it is , and point J moves at a constant speed of . The above construction was devised by Isaac Newton and can be found in Book 1 of Philosophiæ Naturalis Principia Mathematica as Proposition 30. Focal length and radius of curvature at the vertex The focal length of a parabola is half of its radius of curvature at its vertex. Proof Consider a point on a circle of radius and with center at the point . The circle passes through the origin. If the point is near the origin, the Pythagorean theorem shows that But if is extremely close to the origin, since the axis is a tangent to the circle, is very small compared with , so is negligible compared with the other terms. Therefore, extremely close to the origin Compare this with the parabola which has its vertex at the origin, opens upward, and has focal length (see preceding sections of this article). Equations and are equivalent if . Therefore, this is the condition for the circle and parabola to coincide at and extremely close to the origin. The radius of curvature at the origin, which is the vertex of the parabola, is twice the focal length. Corollary A concave mirror that is a small segment of a sphere behaves approximately like a parabolic mirror, focusing parallel light to a point midway between the centre and the surface of the sphere. As the affine image of the unit parabola Another definition of a parabola uses affine transformations: Parametric representation An affine transformation of the Euclidean plane has the form , where is a regular matrix (determinant is not 0), and is an arbitrary vector. If are the column vectors of the matrix , the unit parabola is mapped onto the parabola where is a point of the parabola, is a tangent vector at point , is parallel to the axis of the parabola (axis of symmetry through the vertex). Vertex In general, the two vectors are not perpendicular, and is not the vertex, unless the affine transformation is a similarity. The tangent vector at the point is . At the vertex the tangent vector is orthogonal to . Hence the parameter of the vertex is the solution of the equation which is and the vertex is Focal length and focus The focal length can be determined by a suitable parameter transformation (which does not change the geometric shape of the parabola). The focal length is Hence the focus of the parabola is Implicit representation Solving the parametric representation for by Cramer's rule and using , one gets the implicit representation Parabola in space The definition of a parabola in this section gives a parametric representation of an arbitrary parabola, even in space, if one allows to be vectors in space. As quadratic Bézier curve A quadratic Bézier curve is a curve defined by three points , and , called its control points: This curve is an arc of a parabola (see ). Numerical integration In one method of numerical integration one replaces the graph of a function by arcs of parabolas and integrates the parabola arcs. A parabola is determined by three points. The formula for one arc is The method is called Simpson's rule. As plane section of quadric The following quadrics contain parabolas as plane sections: elliptical cone, parabolic cylinder, elliptical paraboloid, hyperbolic paraboloid, hyperboloid of one sheet, hyperboloid of two sheets. As trisectrix A parabola can be used as a trisectrix, that is it allows the exact trisection of an arbitrary angle with straightedge and compass. 
This is not in contradiction to the impossibility of an angle trisection with compass-and-straightedge constructions alone, as the use of parabolas is not allowed in the classic rules for compass-and-straightedge constructions. To trisect , place its leg on the x axis such that the vertex is in the coordinate system's origin. The coordinate system also contains the parabola . The unit circle with radius 1 around the origin intersects the angle's other leg , and from this point of intersection draw the perpendicular onto the y axis. The parallel to y axis through the midpoint of that perpendicular and the tangent on the unit circle in intersect in . The circle around with radius intersects the parabola at . The perpendicular from onto the x axis intersects the unit circle at , and is exactly one third of . The correctness of this construction can be seen by showing that the x coordinate of is . Solving the equation system given by the circle around and the parabola leads to the cubic equation . The triple-angle formula then shows that is indeed a solution of that cubic equation. This trisection goes back to René Descartes, who described it in his book (1637). Generalizations If one replaces the real numbers by an arbitrary field, many geometric properties of the parabola are still valid: A line intersects in at most two points. At any point the line is the tangent. Essentially new phenomena arise, if the field has characteristic 2 (that is, ): the tangents are all parallel. In algebraic geometry, the parabola is generalized by the rational normal curves, which have coordinates ; the standard parabola is the case , and the case is known as the twisted cubic. A further generalization is given by the Veronese variety, when there is more than one input variable. In the theory of quadratic forms, the parabola is the graph of the quadratic form (or other scalings), while the elliptic paraboloid is the graph of the positive-definite quadratic form (or scalings), and the hyperbolic paraboloid is the graph of the indefinite quadratic form . Generalizations to more variables yield further such objects. The curves for other values of are traditionally referred to as the higher parabolas and were originally treated implicitly, in the form for and both positive integers, in which form they are seen to be algebraic curves. These correspond to the explicit formula for a positive fractional power of . Negative fractional powers correspond to the implicit equation and are traditionally referred to as higher hyperbolas. Analytically, can also be raised to an irrational power (for positive values of ); the analytic properties are analogous to when is raised to rational powers, but the resulting curve is no longer algebraic and cannot be analyzed by algebraic geometry. In the physical world In nature, approximations of parabolas and paraboloids are found in many diverse situations. The best-known instance of the parabola in the history of physics is the trajectory of a particle or body in motion under the influence of a uniform gravitational field without air resistance (for instance, a ball flying through the air, neglecting air friction). The parabolic trajectory of projectiles was discovered experimentally in the early 17th century by Galileo, who performed experiments with balls rolling on inclined planes. He also later proved this mathematically in his book Dialogue Concerning Two New Sciences. 
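That such a trajectory is indeed a parabola can be made explicit by eliminating time from the equations of motion; the following is a brief sketch under the stated idealization (launch speed v, launch angle θ, constant gravitational acceleration g, air resistance neglected; these symbols are introduced here only for the derivation).

```latex
% Projectile motion without air resistance (assumed idealization): eliminating
% the time parameter shows that the path is a parabola.
\[
  x(t) = v\,t\cos\theta, \qquad y(t) = v\,t\sin\theta - \tfrac{1}{2} g t^2 .
\]
\[
  t = \frac{x}{v\cos\theta}
  \;\Longrightarrow\;
  y = x\tan\theta - \frac{g}{2 v^2 \cos^2\theta}\, x^2 ,
\]
% which is quadratic in x: a downward-opening parabola with a vertical axis.
```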
For objects extended in space, such as a diver jumping from a diving board, the object itself follows a complex motion as it rotates, but the center of mass of the object nevertheless moves along a parabola. As in all cases in the physical world, the trajectory is always an approximation of a parabola. The presence of air resistance, for example, always distorts the shape, although at low speeds, the shape is a good approximation of a parabola. At higher speeds, such as in ballistics, the shape is highly distorted and does not resemble a parabola. Another hypothetical situation in which parabolas might arise, according to the theories of physics described in the 17th and 18th centuries by Sir Isaac Newton, is in two-body orbits, for example, the path of a small planetoid or other object under the influence of the gravitation of the Sun. Parabolic orbits do not occur in nature; simple orbits most commonly resemble hyperbolas or ellipses. The parabolic orbit is the degenerate intermediate case between those two types of ideal orbit. An object following a parabolic orbit would travel at the exact escape velocity of the object it orbits; objects in elliptical or hyperbolic orbits travel at less or greater than escape velocity, respectively. Long-period comets travel close to the Sun's escape velocity while they are moving through the inner Solar system, so their paths are nearly parabolic. Approximations of parabolas are also found in the shape of the main cables on a simple suspension bridge. The curve of the chains of a suspension bridge is always an intermediate curve between a parabola and a catenary, but in practice the curve is generally nearer to a parabola due to the weight of the load (i.e. the road) being much larger than the cables themselves, and in calculations the second-degree polynomial formula of a parabola is used. Under the influence of a uniform load (such as a horizontal suspended deck), the otherwise catenary-shaped cable is deformed toward a parabola (see ). Unlike an inelastic chain, a freely hanging spring of zero unstressed length takes the shape of a parabola. Suspension-bridge cables are, ideally, purely in tension, without having to carry other forces, for example, bending. Similarly, the structures of parabolic arches are purely in compression. Paraboloids arise in several physical situations as well. The best-known instance is the parabolic reflector, which is a mirror or similar reflective device that concentrates light or other forms of electromagnetic radiation to a common focal point, or conversely, collimates light from a point source at the focus into a parallel beam. The principle of the parabolic reflector may have been discovered in the 3rd century BC by the geometer Archimedes, who, according to a dubious legend, constructed parabolic mirrors to defend Syracuse against the Roman fleet, by concentrating the sun's rays to set fire to the decks of the Roman ships. The principle was applied to telescopes in the 17th century. Today, paraboloid reflectors can be commonly observed throughout much of the world in microwave and satellite-dish receiving and transmitting antennas. In parabolic microphones, a parabolic reflector is used to focus sound onto a microphone, giving it highly directional performance. Paraboloids are also observed in the surface of a liquid confined to a container and rotated around the central axis. In this case, the centrifugal force causes the liquid to climb the walls of the container, forming a parabolic surface. 
This is the principle behind the liquid-mirror telescope. Aircraft used to create a weightless state for purposes of experimentation, such as NASA's "Vomit Comet", follow a vertically parabolic trajectory for brief periods in order to trace the course of an object in free fall, which produces the same effect as zero gravity for most purposes.
Mathematics
Geometry
null
23234
https://en.wikipedia.org/wiki/Paleozoic
Paleozoic
The Paleozoic ( , , ; or Palaeozoic) Era is the first of three geological eras of the Phanerozoic Eon. Beginning 538.8 million years ago (Ma), it succeeds the Neoproterozoic (the last era of the Proterozoic Eon) and ends 251.9 Ma at the start of the Mesozoic Era. The Paleozoic is subdivided into six geologic periods (from oldest to youngest), Cambrian, Ordovician, Silurian, Devonian, Carboniferous and Permian. Some geological timescales divide the Paleozoic informally into early and late sub-eras: the Early Paleozoic consisting of the Cambrian, Ordovician and Silurian; the Late Paleozoic consisting of the Devonian, Carboniferous and Permian. The name Paleozoic was first used by Adam Sedgwick (1785–1873) in 1838 to describe the Cambrian and Ordovician periods. It was redefined by John Phillips (1800–1874) in 1840 to cover the Cambrian to Permian periods. It is derived from the Greek palaiós (παλαιός, "old") and zōḗ (ζωή, "life") meaning "ancient life". The Paleozoic was a time of dramatic geological, climatic, and evolutionary change. The Cambrian witnessed the most rapid and widespread diversification of life in Earth's history, known as the Cambrian explosion, in which most modern phyla first appeared. Arthropods, molluscs, fish, amphibians, reptiles, and synapsids all evolved during the Paleozoic. Life began in the ocean but eventually transitioned onto land, and by the late Paleozoic, great forests of primitive plants covered the continents, many of which formed the coal beds of Europe and eastern North America. Towards the end of the era, large, sophisticated synapsids and diapsids were dominant and the first modern plants (conifers) appeared. The Paleozoic Era ended with the largest extinction event of the Phanerozoic Eon, the Permian–Triassic extinction event. The effects of this catastrophe were so devastating that it took life on land 30 million years into the Mesozoic Era to recover. Recovery of life in the sea may have been much faster. Boundaries The base of the Paleozoic is one of the major divisions in geological time representing the divide between the Proterozoic and Phanerozoic eons, the Paleozoic and Neoproterozoic eras and the Ediacaran and Cambrian periods. When Adam Sedgwick named the Paleozoic in 1835, he defined the base as the first appearance of complex life in the rock record as shown by the presence of trilobite-dominated fauna. Since then evidence of complex life in older rock sequences has increased and by the second half of the 20th century, the first appearance of small shelly fauna (SSF), also known as early skeletal fossils, were considered markers for the base of the Paleozoic. However, whilst SSF are well preserved in carbonate sediments, the majority of Ediacaran to Cambrian rock sequences are composed of siliciclastic rocks where skeletal fossils are rarely preserved. This led the International Commission on Stratigraphy (ICS) to use trace fossils as an indicator of complex life. Unlike later in the fossil record, Cambrian trace fossils are preserved in a wide range of sediments and environments, which aids correlation between different sites around the world. Trace fossils reflect the complexity of the body plan of the organism that made them. Ediacaran trace fossils are simple, sub-horizontal feeding traces. As more complex organisms evolved, their more complex behaviour was reflected in greater diversity and complexity of the trace fossils they left behind. 
After two decades of deliberation, the ICS chose Fortune Head, Burin Peninsula, Newfoundland as the basal Cambrian Global Stratotype Section and Point (GSSP) at the base of the Treptichnus pedum assemblage of trace fossils and immediately above the last occurrence of the Ediacaran problematica fossils Harlaniella podolica and Palaeopsacichnus. The base of the Phanerozoic, Paleozoic and Cambrian is dated at 538.8+/-0.2 Ma and now lies below both the first appearance of trilobites and SSF. The boundary between the Paleozoic and Mesozoic eras and the Permian and Triassic periods is marked by the first occurrence of the conodont Hindeodus parvus. This is the first biostratigraphic event found worldwide that is associated with the beginning of the recovery following the end-Permian mass extinctions and environmental changes. In non-marine strata, the equivalent level is marked by the disappearance of the Permian Dicynodon tetrapods. This means events previously considered to mark the Permian-Triassic boundary, such as the eruption of the Siberian Traps flood basalts, the onset of greenhouse climate, ocean anoxia and acidification and the resulting mass extinction are now regarded as being of latest Permian in age. The GSSP is near Meishan, Zhejiang Province, southern China. Radiometric dating of volcanic clay layers just above and below the boundary confine its age to a narrow range of 251.902+/-0.024 Ma. Geology The beginning of the Paleozoic Era witnessed the breakup of the supercontinent of Pannotia and ended while the supercontinent Pangaea was assembling. The breakup of Pannotia began with the opening of the Iapetus Ocean and other Cambrian seas and coincided with a dramatic rise in sea level. Paleoclimatic studies and evidence of glaciers indicate that Central Africa was most likely in the polar regions during the early Paleozoic. The breakup of Pannotia was followed by the assembly of the huge continent Gondwana (). By the mid-Paleozoic, the collision of North America and Europe produced the Acadian-Caledonian uplifts, and a subducting plate uplifted eastern Australia. By the late Paleozoic, continental collisions formed the supercontinent of Pangaea and created great mountain chains, including the Appalachians, Caledonides, Ural Mountains, and mountains of Tasmania. Cambrian Period The Cambrian spanned from 539–485 million years ago and is the first period of the Paleozoic Era of the Phanerozoic. The Cambrian marked a boom in evolution in an event known as the Cambrian explosion in which the largest number of creatures evolved in any single period of the history of the Earth. Creatures like algae evolved, but the most ubiquitous of that period were the armored arthropods, like trilobites. Almost all marine phyla evolved in this period. During this time, the supercontinent Pannotia begins to break up, most of which later became the supercontinent Gondwana. Ordovician Period The Ordovician spanned from 485–444 million years ago. The Ordovician was a time in Earth's history in which many of the biological classes still prevalent today evolved, such as primitive fish, cephalopods, and coral. The most common forms of life, however, were trilobites, snails and shellfish. The first arthropods went ashore to colonize the empty continent of Gondwana. By the end of the Ordovician, Gondwana was at the south pole, early North America had collided with Europe, closing the intervening ocean. 
Glaciation of Africa resulted in a major drop in sea level, killing off all life that had established along coastal Gondwana. Glaciation may have caused the Ordovician–Silurian extinction events, in which 60% of marine invertebrates and 25% of families became extinct, and is considered the first Phanerozoic mass extinction event, and the second deadliest. Silurian Period The Silurian spanned from 444–419 million years ago. The Silurian saw the rejuvenation of life as the Earth recovered from the previous glaciation. This period saw the mass evolution of fish, as jawless fish became more numerous, jawed fish evolved, and the first freshwater fish evolved, though arthropods, such as sea scorpions, were still apex predators. Fully terrestrial life evolved, including early arachnids, fungi, and centipedes. The evolution of vascular plants (Cooksonia) allowed plants to gain a foothold on land. These early plants were the forerunners of all plant life on land. During this time, there were four continents: Gondwana (Africa, South America, Australia, Antarctica, Siberia), Laurentia (North America), Baltica (Northern Europe), and Avalonia (Western Europe). The recent rise in sea levels allowed many new species to thrive in water. Devonian Period The Devonian spanned from 419–359 million years ago. Also known as "The Age of the Fish", the Devonian featured a huge diversification of fish, including armored fish like Dunkleosteus and lobe-finned fish which eventually evolved into the first tetrapods. On land, plant groups diversified rapidly in an event known as the Devonian explosion when plants made lignin, leading to taller growth and vascular tissue; the first trees and seeds evolved. These new habitats led to greater arthropod diversification. The first amphibians appeared and fish occupied the top of the food chain. Earth's second Phanerozoic mass extinction event (a group of several smaller extinction events), the Late Devonian extinction, ended 70% of existing species. Carboniferous Period The Carboniferous is named after the large coal deposits laid down during the period. It spanned from 359–299 million years ago. During this time, average global temperatures were exceedingly high; the early Carboniferous averaged at about 20 degrees Celsius (but cooled to 10 °C during the Middle Carboniferous). An important evolutionary development of the time was the evolution of amniotic eggs, which allowed amphibians to move farther inland and remain the dominant vertebrates for the duration of this period. Also, the first reptiles and synapsids evolved in the swamps. Throughout the Carboniferous, there was a cooling trend, which led to the Permo-Carboniferous glaciation or the Carboniferous Rainforest Collapse. Gondwana was glaciated as much of it was situated around the south pole. Permian Period The Permian spanned from 299–252 million years ago and was the last period of the Paleozoic Era. At the beginning of this period, all continents joined together to form the supercontinent Pangaea, which was encircled by one ocean called Panthalassa. The land mass was very dry during this time, with harsh seasons, as the climate of the interior of Pangaea was not regulated by large bodies of water. Diapsids and synapsids flourished in the new dry climate. Creatures such as Dimetrodon and Edaphosaurus ruled the new continent. The first conifers evolved, and dominated the terrestrial landscape. Near the end of the Permian, however, Pangaea grew drier. 
The interior was desert, and new taxa such as Scutosaurus and Gorgonopsids filled it. Eventually they disappeared, along with 95% of all life on Earth, in a cataclysm known as "The Great Dying", the third and most severe Phanerozoic mass extinction. Climate The early Cambrian climate was probably moderate at first, becoming warmer over the course of the Cambrian, as the second-greatest sustained sea level rise in the Phanerozoic got underway. However, as if to offset this trend, Gondwana moved south, so that, in Ordovician time, most of West Gondwana (Africa and South America) lay directly over the South Pole. The early Paleozoic climate was strongly zonal, with the result that the "climate", in an abstract sense, became warmer, but the living space of most organisms of the time – the continental shelf marine environment – became steadily colder. However, Baltica (Northern Europe and Russia) and Laurentia (eastern North America and Greenland) remained in the tropical zone, while China and Australia lay in waters which were at least temperate. The early Paleozoic ended, rather abruptly, with the short, but apparently severe, late Ordovician ice age. This cold spell caused the second-greatest mass extinction of the Phanerozoic Eon. Over time, the warmer weather moved into the Paleozoic Era. The Ordovician and Silurian were warm greenhouse periods, with the highest sea levels of the Paleozoic (200 m above today's); the warm climate was interrupted only by a cool period, the Early Palaeozoic Icehouse, culminating in the Hirnantian glaciation, at the end of the Ordovician. The middle Paleozoic was a time of considerable stability. Sea levels had dropped coincident with the ice age, but slowly recovered over the course of the Silurian and Devonian. The slow merger of Baltica and Laurentia, and the northward movement of bits and pieces of Gondwana created numerous new regions of relatively warm, shallow sea floor. As plants took hold on the continental margins, oxygen levels increased and carbon dioxide dropped, although much less dramatically. The north–south temperature gradient also seems to have moderated, or metazoan life simply became hardier, or both. At any event, the far southern continental margins of Antarctica and West Gondwana became increasingly less barren. The Devonian ended with a series of turnover pulses which killed off much of middle Paleozoic vertebrate life, without noticeably reducing species diversity overall. There are many unanswered questions about the late Paleozoic. The Mississippian (early Carboniferous Period) began with a spike in atmospheric oxygen, while carbon dioxide plummeted to new lows. This destabilized the climate and led to one, and perhaps two, ice ages during the Carboniferous. These were far more severe than the brief Late Ordovician ice age; but, this time, the effects on world biota were inconsequential. By the Cisuralian Epoch, both oxygen and carbon dioxide had recovered to more normal levels. On the other hand, the assembly of Pangaea created huge arid inland areas subject to temperature extremes. The Lopingian Epoch is associated with falling sea levels, increased carbon dioxide and general climatic deterioration, culminating in the devastation of the Permian extinction. Flora While macroscopic plant life appeared early in the Paleozoic Era and possibly late in the Neoproterozoic Era of the earlier eon, plants mostly remained aquatic until the Silurian Period, about 420 million years ago, when they began to transition onto dry land. 
Terrestrial flora reached its climax in the Carboniferous, when towering lycopsid rainforests dominated the tropical belt of Euramerica. Climate change caused the Carboniferous Rainforest Collapse which fragmented this habitat, diminishing the diversity of plant life in the late Carboniferous and Permian periods. Fauna A noteworthy feature of Paleozoic life is the sudden appearance of nearly all of the invertebrate animal phyla in great abundance at the beginning of the Cambrian. The first vertebrates appeared in the form of primitive fish, which greatly diversified in the Silurian and Devonian Periods. The first animals to venture onto dry land were the arthropods. Some fish had lungs and powerful bony fins that, in the late Devonian, 367.5 million years ago, allowed them to crawl onto land. The bones in their fins eventually evolved into legs, and they became the first tetrapods and began to develop lungs. Amphibians were the dominant tetrapods until the mid-Carboniferous, when climate change greatly reduced their diversity, allowing amniotes to take over. Amniotes would split into two clades shortly after their origin in the Carboniferous: the synapsids, which were the dominant group, and the sauropsids. The synapsids continued to prosper and increase in number and variety until the end of the Permian period. In the late middle Permian the pareiasaurs originated, successful herbivores and the only sauropsids that could reach sizes comparable to some of the largest synapsids. The Palaeozoic marine fauna was notably lacking in predators relative to the present day. Predators made up about 4% of the fauna in Palaeozoic assemblages while making up 17% of temperate Cenozoic assemblages and 31% of tropical ones. Infaunal animals made up 4% of soft substrate Palaeozoic communities but about 47% of Cenozoic communities. Additionally, the Palaeozoic had very few facultatively motile animals that could easily adjust to disturbance, with such creatures composing 1% of its assemblages in contrast to 50% in Cenozoic faunal assemblages. Non-motile animals untethered to the substrate, extremely rare in the Cenozoic, were abundant in the Palaeozoic. Microbiota Palaeozoic phytoplankton overall were both nutrient-poor themselves and adapted to nutrient-poor environmental conditions. This phytoplankton nutrient poverty has been cited as an explanation for the Palaeozoic's relatively low biodiversity.
Physical sciences
Geological periods
null
23253
https://en.wikipedia.org/wiki/Parallax
Parallax
Parallax is a displacement or difference in the apparent position of an object viewed along two different lines of sight and is measured by the angle or half-angle of inclination between those two lines. Due to foreshortening, nearby objects show a larger parallax than farther objects, so parallax can be used to determine distances. To measure large distances, such as the distance of a planet or a star from Earth, astronomers use the principle of parallax. Here, the term parallax is the semi-angle of inclination between two sight-lines to the star, as observed when Earth is on opposite sides of the Sun in its orbit. These distances form the lowest rung of what is called "the cosmic distance ladder", the first in a succession of methods by which astronomers determine the distances to celestial objects, serving as a basis for other distance measurements in astronomy forming the higher rungs of the ladder. Parallax also affects optical instruments such as rifle scopes, binoculars, microscopes, and twin-lens reflex cameras that view objects from slightly different angles. Many animals, along with humans, have two eyes with overlapping visual fields that use parallax to gain depth perception; this process is known as stereopsis. In computer vision the effect is used for computer stereo vision, and there is a device called a parallax rangefinder that uses it to find the range, and in some variations also altitude to a target. A simple everyday example of parallax can be seen in the dashboards of motor vehicles that use a needle-style mechanical speedometer. When viewed from directly in front, the speed may show exactly 60, but when viewed from the passenger seat, the needle may appear to show a slightly different speed due to the angle of viewing combined with the displacement of the needle from the plane of the numerical dial. Visual perception Because the eyes of humans and other animals are in different positions on the head, they present different views simultaneously. This is the basis of stereopsis, the process by which the brain exploits the parallax due to the different views from the eye to gain depth perception and estimate distances to objects. Animals also use motion parallax, in which the animals (or just the head) move to gain different viewpoints. For example, pigeons (whose eyes do not have overlapping fields of view and thus cannot use stereopsis) bob their heads up and down to see depth. The motion parallax is exploited also in wiggle stereoscopy, computer graphics that provide depth cues through viewpoint-shifting animation rather than through binocular vision. Distance measurement Parallax arises due to a change in viewpoint occurring due to the motion of the observer, of the observed, or both. What is essential is relative motion. By observing parallax, measuring angles, and using geometry, one can determine distance. Distance measurement by parallax is a special case of the principle of triangulation, which states that one can solve for all the sides and angles in a network of triangles if, in addition to all the angles in the network, the length of at least one side has been measured. Thus, the careful measurement of the length of one baseline can fix the scale of an entire triangulation network. 
In parallax, the triangle is extremely long and narrow, and by measuring both its shortest side (the motion of the observer) and the small top angle (always less than 1 arcsecond, leaving the other two close to 90 degrees), the length of the long sides (in practice considered to be equal) can be determined. In astronomy, assuming the angle is small, the distance to a star (measured in parsecs) is the reciprocal of the parallax (measured in arcseconds): d = 1/p. For example, the distance to Proxima Centauri is 1/0.7687 = 1.3009 parsecs (about 4.24 light-years). On Earth, a coincidence rangefinder or parallax rangefinder can be used to find the distance to a target. In surveying, the problem of resection explores angular measurements from a known baseline for determining an unknown point's coordinates. Astronomy Metrology Measurements made by viewing the position of some marker relative to something to be measured are subject to parallax error if the marker is some distance away from the object of measurement and not viewed from the correct position. For example, if measuring the distance between two ticks on a line with a ruler marked on its top surface, the thickness of the ruler will separate its markings from the ticks. If viewed from a position not exactly perpendicular to the ruler, the apparent position will shift and the reading will be less accurate than the ruler is capable of. A similar error occurs when reading the position of a pointer against a scale in an instrument such as an analog multimeter. To help the user avoid this problem, the scale is sometimes printed above a narrow strip of mirror, and the user's eye is positioned so that the pointer obscures its reflection, guaranteeing that the user's line of sight is perpendicular to the mirror and therefore to the scale. The same effect alters the speed read on a car's speedometer by a driver in front of it and a passenger off to the side, values read from a graticule not in actual contact with the display on an oscilloscope, etc. Photogrammetry When viewed through a stereo viewer, an aerial picture pair offers a pronounced stereo effect of landscape and buildings. High buildings appear to "keel over" in the direction away from the center of the photograph. Measurements of this parallax are used to deduce the height of the buildings, provided that flying height and baseline distances are known. This is a key component of the process of photogrammetry. Photography Parallax error can be seen when taking photos with many types of cameras, such as twin-lens reflex cameras and those including viewfinders (such as rangefinder cameras). In such cameras, the eye sees the subject through different optics (the viewfinder, or a second lens) than the one through which the photo is taken. As the viewfinder is often found above the lens of the camera, photos with parallax error are often slightly lower than intended, the classic example being the image of a person with their head cropped off. This problem is addressed in single-lens reflex cameras, in which the viewfinder sees through the same lens through which the photo is taken (with the aid of a movable mirror), thus avoiding parallax error. Parallax is also an issue in image stitching, such as for panoramas. Weapon sights Parallax affects sighting devices of ranged weapons in many ways. On sights fitted on small arms and bows, etc., the perpendicular distance between the sight and the weapon's launch axis (e.g.
the bore axis of a gun)—generally referred to as "sight height"—can induce significant aiming errors when shooting at close range, particularly when shooting at small targets. This parallax error is compensated for (when needed) via calculations that also take into account other variables such as bullet drop, windage, and the distance at which the target is expected to be. Sight height can be used to advantage when "sighting in" rifles for field use. A typical hunting rifle (.222 with telescopic sights) sighted in at 75 m will still be useful across typical field distances without needing further adjustment. Optical sights In some reticled optical instruments such as telescopes, microscopes or in telescopic sights ("scopes") used on small arms and theodolites, parallax can create problems when the reticle is not coincident with the focal plane of the target image. This is because when the reticle and the target are not at the same focus, the optically corresponding distances being projected through the eyepiece are also different, and the user's eye will register the difference in parallaxes between the reticle and the target (whenever eye position changes) as a relative displacement of one over the other. The term parallax shift refers to the resultant apparent "floating" movements of the reticle over the target image when the user moves his/her head/eye laterally (up/down or left/right) behind the sight, i.e. an error where the reticle does not stay aligned with the user's optical axis. Some firearm scopes are equipped with a parallax compensation mechanism, which consists of a movable optical element that enables the optical system to shift the focus of the target image at varying distances into the same optical plane of the reticle (or vice versa). Many low-tier telescopic sights may have no parallax compensation because in practice they can still perform very acceptably without eliminating parallax shift. In this case, the scope is often set fixed at a designated parallax-free distance that best suits its intended usage. Typical standard factory parallax-free distances for hunting scopes are 100 yd (or 90 m) to make them suited for hunting shots that rarely exceed 300 yd/m. Some competition and military-style scopes without parallax compensation may be adjusted to be parallax free at ranges up to 300 yd/m to make them better suited for aiming at longer ranges. Scopes for guns with shorter practical ranges, such as airguns, rimfire rifles, shotguns, and muzzleloaders, will have parallax settings for shorter distances, most commonly on rimfire scopes and on scopes for shotguns and muzzleloaders. Airgun scopes are very often found with adjustable parallax, usually in the form of an adjustable objective (or "AO" for short) design, and may adjust down to very short distances. Non-magnifying reflector or "reflex" sights can be theoretically "parallax free". But since these sights use parallel collimated light, this is only true when the target is at infinity. At finite distances, eye movement perpendicular to the device will cause parallax movement in the reticle image in exact relationship to the eye position in the cylindrical column of light created by the collimating optics. Firearm sights, such as some red dot sights, try to correct for this by not focusing the reticle at infinity, but instead at some finite distance, a designed target range where the reticle will show very little movement due to parallax.
Some manufacturers market reflector sight models they call "parallax free", but this refers to an optical system that compensates for off axis spherical aberration, an optical error induced by the spherical mirror used in the sight that can cause the reticle position to diverge off the sight's optical axis with change in eye position. Artillery-fire Because of the positioning of field or naval artillery, each gun has a slightly different perspective of the target relative to the location of the fire-control system. When aiming guns at the target, the fire control system must compensate for parallax to assure that fire from each gun converges on the target. Art Several of Mark Renn's sculptural works play with parallax, appearing abstract until viewed from a specific angle. One such sculpture is The Darwin Gate (pictured) in Shrewsbury, England, which from a certain angle appears to form a dome, according to Historic England, in "the form of a Saxon helmet with a Norman window... inspired by features of St Mary's Church which was attended by Charles Darwin as a boy". As a metaphor In a philosophic/geometric sense: an apparent change in the direction of an object, caused by a change in observational position that provides a new line of sight. The apparent displacement, or difference of position, of an object, as seen from two different stations, or points of view. In contemporary writing, parallax can also be the same story, or a similar story from approximately the same timeline, from one book, told from a different perspective in another book. The word and concept feature prominently in James Joyce's 1922 novel, Ulysses. Orson Scott Card also used the term when referring to Ender's Shadow as compared to Ender's Game. The metaphor is invoked by Slovenian philosopher Slavoj Žižek in his 2006 book The Parallax View, borrowing the concept of "parallax view" from the Japanese philosopher and literary critic Kojin Karatani. Žižek notes
Physical sciences
Astrometry
null
23254
https://en.wikipedia.org/wiki/Paralysis
Paralysis
Paralysis (: paralyses; also known as plegia) is a loss of motor function in one or more muscles. Paralysis can also be accompanied by a loss of feeling (sensory loss) in the affected area if there is sensory damage. In the United States, roughly 1 in 50 people have been diagnosed with some form of permanent or transient paralysis. The word "paralysis" derives from the Greek παράλυσις, meaning "disabling of the nerves" from παρά (para) meaning "beside, by" and λύσις (lysis) meaning "making loose". A paralysis accompanied by involuntary tremors is usually called "palsy". Causes Paralysis is most often caused by damage in the nervous system, especially the spinal cord. Other major causes are stroke, trauma with nerve injury, poliomyelitis, cerebral palsy, peripheral neuropathy, Parkinson's disease, ALS, botulism, spina bifida, multiple sclerosis, and Guillain–Barré syndrome. Temporary paralysis occurs during REM sleep, and dysregulation of this system can lead to episodes of waking paralysis. Drugs that interfere with nerve function, such as curare, can also cause paralysis. Pseudoparalysis (pseudo- meaning "false, not genuine", from Greek ψεῦδος) is voluntary restriction or inhibition of motion because of pain, incoordination, orgasm, or other cause, and is not due to actual muscular paralysis. In an infant, it may be a symptom of congenital syphilis. Pseudoparalysis can be caused by extreme mental stresses, and is a common feature of mental disorders such as panic anxiety disorder. Variations Paralysis can occur in localised or generalised forms, or it may follow a certain pattern. Most paralyses caused by nervous-system damage (e.g., spinal cord injuries) are constant in nature; however, some forms of periodic paralysis, including sleep paralysis, are caused by other factors. Paralysis can occur in newborns due to a congenital defect known as spina bifida. Spina bifida causes one or more of the vertebrae to fail to form vertebral arches within the infant, which allows the spinal cord to protrude from the rest of the spine. In extreme cases, this can cause spinal cord function inferior to the missing vertebral arches to cease. This cessation of spinal cord function can result in paralysis of lower extremities. Documented cases of paralysis of the anal sphincter in newborns have been observed when spina bifida has gone untreated. While life-threatening, many cases of spina bifida can be corrected surgically if operated on within 72 hours of birth. Ascending paralysis presents in the lower limbs before the upper limbs. It can be associated with: Guillain–Barré syndrome (another name for this condition is Landry's ascending paralysis) Tick paralysis Ascending paralysis contrasts with descending paralysis, which occurs in conditions such as botulism. Other animals Many animal species use paralyzing toxins to capture prey, evade predation, or both. In stimulated muscles, the decrease in frequency of the miniature potentials runs parallel to the decrease in postsynaptic potential, and to the decrease in muscle contraction. In invertebrates, this clearly indicates that, e.g., Microbracon (wasp genus) venom causes paralysis of the neuromuscular system by acting at a presynaptic site. Philanthus venom inhibits both the fast and slow neuromuscular system at identical concentrations. It causes a decrease in the frequency of the miniature potentials without affecting their amplitude significantly. 
Invertebrates In some species of wasp, to complete the reproductive cycle, the female wasp paralyses a prey item such as a grasshopper and places it in her nest. In the species Philanthus gibbosus, the paralysed insect (most often a bee species) is coated in a thick layer of pollen. The adult P. gibbosus then lays eggs in the paralysed insect, which is devoured by the larvae when they hatch. Vertebrates A well-known example of a vertebrate-produced paralyzing toxin is the tetrodotoxin of fish species such as Takifugu rubripes, the famously lethal pufferfish of Japanese fugu. This toxin works by binding to sodium channels in nerve cells, inhibiting the cells' proper function. A non-lethal dose of this toxin results in temporary paralysis. This toxin is also present in many other species ranging from toads to nemerteans. Paralysis can be seen in breeds of dogs that are chondrodysplastic. These dogs have short legs, and may also have short muzzles. Their intervertebral disc material can calcify and become more brittle. In such cases, the disc may rupture, with disc material ending up in the spinal canal, or rupturing more laterally to press on spinal nerves. A minor rupture may only result in paresis, but a major rupture can cause enough damage to cut off circulation. If no signs of pain can be elicited, surgery should be performed within 24 hours of the incident, to remove the disc material and relieve pressure on the spinal cord. After 24 hours, the chance of recovery declines rapidly, since with continued pressure, the spinal cord tissue deteriorates and dies. Another type of paralysis is caused by a fibrocartilaginous embolism. This is a microscopic piece of disc material that breaks off and becomes lodged in a spinal artery. Nerves served by the artery will die when deprived of blood. The German Shepherd Dog is especially prone to developing degenerative myelopathy. This is a deterioration of nerves in the spinal cord, starting in the posterior part of the cord. Affected dogs will become gradually weaker in the hind legs as nerves die off. Eventually, their hind legs become useless. They often also exhibit faecal and urinary incontinence. As the disease progresses, the paresis and paralysis gradually move forward. This disease also affects other large breeds of dogs. It is suspected to be an autoimmune problem. Cats with a heart murmur may develop blood clots that travel through arteries. If a clot is large enough to block one or both femoral arteries, there may be hind leg paralysis because the major source of blood flow to the hind leg is blocked. Many snakes exhibit powerful neurotoxins that can cause non-permanent paralysis or death. Also, many trees contain neurotoxins.
Biology and health sciences
Disabilities
Health
23259
https://en.wikipedia.org/wiki/Particle%20physics
Particle physics
Particle physics or high-energy physics is the study of fundamental particles and forces that constitute matter and radiation. The field also studies combinations of elementary particles up to the scale of protons and neutrons, while the study of combination of protons and neutrons is called nuclear physics. The fundamental particles in the universe are classified in the Standard Model as fermions (matter particles) and bosons (force-carrying particles). There are three generations of fermions, although ordinary matter is made only from the first fermion generation. The first generation consists of up and down quarks which form protons and neutrons, and electrons and electron neutrinos. The three fundamental interactions known to be mediated by bosons are electromagnetism, the weak interaction, and the strong interaction. Quarks cannot exist on their own but form hadrons. Hadrons that contain an odd number of quarks are called baryons and those that contain an even number are called mesons. Two baryons, the proton and the neutron, make up most of the mass of ordinary matter. Mesons are unstable and the longest-lived last for only a few hundredths of a microsecond. They occur after collisions between particles made of quarks, such as fast-moving protons and neutrons in cosmic rays. Mesons are also produced in cyclotrons or other particle accelerators. Particles have corresponding antiparticles with the same mass but with opposite electric charges. For example, the antiparticle of the electron is the positron. The electron has a negative electric charge, the positron has a positive charge. These antiparticles can theoretically form a corresponding form of matter called antimatter. Some particles, such as the photon, are their own antiparticle. These elementary particles are excitations of the quantum fields that also govern their interactions. The dominant theory explaining these fundamental particles and fields, along with their dynamics, is called the Standard Model. The reconciliation of gravity to the current particle physics theory is not solved; many theories have addressed this problem, such as loop quantum gravity, string theory and supersymmetry theory. Practical particle physics is the study of these particles in radioactive processes and in particle accelerators such as the Large Hadron Collider. Theoretical particle physics is the study of these particles in the context of cosmology and quantum theory. The two are closely interrelated: the Higgs boson was postulated by theoretical particle physicists and its presence confirmed by practical experiments. History The idea that all matter is fundamentally composed of elementary particles dates from at least the 6th century BC. In the 19th century, John Dalton, through his work on stoichiometry, concluded that each element of nature was composed of a single, unique type of particle. The word atom, after the Greek word atomos meaning "indivisible", has since then denoted the smallest particle of a chemical element, but physicists later discovered that atoms are not, in fact, the fundamental particles of nature, but are conglomerates of even smaller particles, such as the electron. The early 20th century explorations of nuclear physics and quantum physics led to proofs of nuclear fission in 1939 by Lise Meitner (based on experiments by Otto Hahn), and nuclear fusion by Hans Bethe in that same year; both discoveries also led to the development of nuclear weapons. 
Bethe's 1947 calculation of the Lamb shift is credited with having "opened the way to the modern era of particle physics". Throughout the 1950s and 1960s, a bewildering variety of particles was found in collisions of particles from beams of increasingly high energy. It was referred to informally as the "particle zoo". Important discoveries such as the CP violation by James Cronin and Val Fitch brought new questions about the matter–antimatter imbalance. After the formulation of the Standard Model during the 1970s, physicists clarified the origin of the particle zoo. The large number of particles was explained as combinations of a (relatively) small number of more fundamental particles and framed in the context of quantum field theories. This reclassification marked the beginning of modern particle physics. Standard Model The current state of the classification of all elementary particles is explained by the Standard Model, which gained widespread acceptance in the mid-1970s after experimental confirmation of the existence of quarks. It describes the strong, weak, and electromagnetic fundamental interactions, using mediating gauge bosons. The species of gauge bosons are the eight gluons, the W+, W− and Z bosons, and the photon. The Standard Model also contains 24 fundamental fermions (12 particles and their associated anti-particles), which are the constituents of all matter. Finally, the Standard Model also predicted the existence of a type of boson known as the Higgs boson. On 4 July 2012, physicists with the Large Hadron Collider at CERN announced they had found a new particle that behaves similarly to what is expected from the Higgs boson. The Standard Model, as currently formulated, has 61 elementary particles. Those elementary particles can combine to form composite particles, accounting for the hundreds of other species of particles that have been discovered since the 1960s. The Standard Model has been found to agree with almost all the experimental tests conducted to date. However, most particle physicists believe that it is an incomplete description of nature and that a more fundamental theory awaits discovery (see Theory of Everything). In recent years, measurements of neutrino mass have provided the first experimental deviations from the Standard Model, since neutrinos do not have mass in the Standard Model. Subatomic particles Modern particle physics research is focused on subatomic particles, including atomic constituents such as electrons, protons, and neutrons (protons and neutrons are composite particles called baryons, made of quarks), and particles produced by radioactive and scattering processes, such as photons, neutrinos, and muons, as well as a wide range of exotic particles. All particles and their interactions observed to date can be described almost entirely by the Standard Model. Dynamics of particles are also governed by quantum mechanics; they exhibit wave–particle duality, displaying particle-like behaviour under certain experimental conditions and wave-like behaviour in others. In more technical terms, they are described by quantum state vectors in a Hilbert space, which is also treated in quantum field theory. Following the convention of particle physicists, the term elementary particles is applied to those particles that are, according to current understanding, presumed to be indivisible and not composed of other particles. Quarks and leptons Ordinary matter is made from first-generation quarks (up, down) and leptons (electron, electron neutrino).
Collectively, quarks and leptons are called fermions, because they have a half-integer quantum spin (−1/2, 1/2, 3/2, etc.). This causes the fermions to obey the Pauli exclusion principle, where no two particles may occupy the same quantum state. Quarks have fractional elementary electric charge (−1/3 or 2/3) and leptons have whole-numbered electric charge (0 or −1). Quarks also have color charge, which is labeled arbitrarily as red, green and blue, with no correlation to actual light color. Because the interaction between quarks stores energy that can convert into other particles when the quarks are pulled far enough apart, quarks cannot be observed independently. This is called color confinement. There are three known generations of quarks (up and down, strange and charm, top and bottom) and leptons (electron and its neutrino, muon and its neutrino, tau and its neutrino), with strong indirect evidence that a fourth generation of fermions does not exist. Bosons Bosons are the mediators or carriers of fundamental interactions, such as electromagnetism, the weak interaction, and the strong interaction. Electromagnetism is mediated by the photon, the quantum of light. The weak interaction is mediated by the W and Z bosons. The strong interaction is mediated by the gluon, which can link quarks together to form composite particles. Due to the aforementioned color confinement, gluons are never observed independently. The Higgs boson gives mass to the W and Z bosons via the Higgs mechanism; the gluon and photon are expected to be massless. All bosons have an integer quantum spin (0 or 1) and can occupy the same quantum state. Antiparticles and color charge Most aforementioned particles have corresponding antiparticles, which compose antimatter. Normal particles have positive lepton or baryon number, and antiparticles have negative values of these numbers. Most properties of corresponding antiparticles and particles are the same, with a few being reversed; the electron's antiparticle, the positron, has the opposite charge. To differentiate between antiparticles and particles, a plus or minus sign is added in superscript. For example, the electron is denoted e− and the positron e+. However, in the case that the particle has a charge of 0 (equal to that of the antiparticle), the antiparticle is denoted with a line above the symbol. As such, an electron neutrino is written νe, whereas its antineutrino is written with a bar over the symbol (ν̄e). When a particle and an antiparticle interact with each other, they are annihilated and convert to other particles. Some particles, such as the photon or gluon, have no distinct antiparticles. Quarks and gluons additionally have color charges, which influence the strong interaction. The color charges of quarks are called red, green and blue (though the particles themselves have no physical color), while those of antiquarks are called antired, antigreen and antiblue. The gluon comes in eight color-charge states, a consequence of the SU(3) gauge symmetry of the strong interaction that binds quarks into composite particles. Composite The neutrons and protons in the atomic nuclei are baryons: the neutron is composed of two down quarks and one up quark, and the proton is composed of two up quarks and one down quark. A baryon is composed of three quarks, and a meson is composed of two quarks (one normal, one anti). Baryons and mesons are collectively called hadrons. Quarks inside hadrons are governed by the strong interaction, and are thus described by quantum chromodynamics (color charges).
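As an illustration of the quark content just described, the electric charges of the proton (uud) and the neutron (udd) follow directly from the fractional quark charges given above, +2/3 for the up quark and −1/3 for the down quark; the worked sums are:

\[
Q_{p} = Q_{u} + Q_{u} + Q_{d} = \tfrac{2}{3} + \tfrac{2}{3} - \tfrac{1}{3} = +1,
\qquad
Q_{n} = Q_{u} + Q_{d} + Q_{d} = \tfrac{2}{3} - \tfrac{1}{3} - \tfrac{1}{3} = 0.
\]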
The bound quarks must combine to give a neutral overall color charge, or "white", by analogy with mixing the primary colors. More exotic hadrons can have other types, arrangements or numbers of quarks (tetraquarks, pentaquarks). An atom is made from protons, neutrons and electrons. By modifying the particles inside a normal atom, exotic atoms can be formed. A simple example is hydrogen-4.1, which has one of its electrons replaced with a muon. Hypothetical The graviton is a hypothetical particle that would mediate the gravitational interaction, but it has not been detected or completely reconciled with current theories. Many other hypothetical particles have been proposed to address the limitations of the Standard Model. Notably, supersymmetric particles aim to solve the hierarchy problem, axions address the strong CP problem, and various other particles are proposed to explain the origins of dark matter and dark energy. Experimental laboratories The world's major particle physics laboratories are: Brookhaven National Laboratory (Long Island, New York, United States). Its main facility is the Relativistic Heavy Ion Collider (RHIC), which collides heavy ions such as gold ions and polarized protons. It is the world's first heavy ion collider, and the world's only polarized proton collider. Budker Institute of Nuclear Physics (Novosibirsk, Russia). Its main projects are now the electron-positron colliders VEPP-2000, operated since 2006, and VEPP-4, which started experiments in 1994. Earlier facilities include the first electron–electron beam–beam collider VEP-1, which conducted experiments from 1964 to 1968; the electron-positron collider VEPP-2, operated from 1965 to 1974; and its successor VEPP-2M, which performed experiments from 1974 to 2000. CERN (European Organization for Nuclear Research) (Franco-Swiss border, near Geneva, Switzerland). Its main project is now the Large Hadron Collider (LHC), which had its first beam circulation on 10 September 2008, and is now the world's most energetic collider of protons. It also became the most energetic collider of heavy ions after it began colliding lead ions. Earlier facilities include the Large Electron–Positron Collider (LEP), which was stopped on 2 November 2000 and then dismantled to give way for the LHC; and the Super Proton Synchrotron, which is being reused as a pre-accelerator for the LHC and for fixed-target experiments. DESY (Deutsches Elektronen-Synchrotron) (Hamburg, Germany). Its main facility was the Hadron Elektron Ring Anlage (HERA), which collided electrons and positrons with protons. The accelerator complex is now focused on the production of synchrotron radiation with PETRA III, FLASH and the European XFEL. Fermi National Accelerator Laboratory (Fermilab) (Batavia, Illinois, United States). Its main facility until 2011 was the Tevatron, which collided protons and antiprotons and was the highest-energy particle collider on Earth until the Large Hadron Collider surpassed it on 29 November 2009. Institute of High Energy Physics (IHEP) (Beijing, China).
IHEP manages a number of China's major particle physics facilities, including the Beijing Electron–Positron Collider II(BEPC II), the Beijing Spectrometer (BES), the Beijing Synchrotron Radiation Facility (BSRF), the International Cosmic-Ray Observatory at Yangbajing in Tibet, the Daya Bay Reactor Neutrino Experiment, the China Spallation Neutron Source, the Hard X-ray Modulation Telescope (HXMT), and the Accelerator-driven Sub-critical System (ADS) as well as the Jiangmen Underground Neutrino Observatory (JUNO). KEK (Tsukuba, Japan). It is the home of a number of experiments such as the K2K experiment, a neutrino oscillation experiment and Belle II, an experiment measuring the CP violation of B mesons. SLAC National Accelerator Laboratory (Menlo Park, California, United States). Its 2-mile-long linear particle accelerator began operating in 1962 and was the basis for numerous electron and positron collision experiments until 2008. Since then the linear accelerator is being used for the Linac Coherent Light Source X-ray laser as well as advanced accelerator design research. SLAC staff continue to participate in developing and building many particle detectors around the world. Theory Theoretical particle physics attempts to develop the models, theoretical framework, and mathematical tools to understand current experiments and make predictions for future experiments (see also theoretical physics). There are several major interrelated efforts being made in theoretical particle physics today. One important branch attempts to better understand the Standard Model and its tests. Theorists make quantitative predictions of observables at collider and astronomical experiments, which along with experimental measurements is used to extract the parameters of the Standard Model with less uncertainty. This work probes the limits of the Standard Model and therefore expands scientific understanding of nature's building blocks. Those efforts are made challenging by the difficulty of calculating high precision quantities in quantum chromodynamics. Some theorists working in this area use the tools of perturbative quantum field theory and effective field theory, referring to themselves as phenomenologists. Others make use of lattice field theory and call themselves lattice theorists. Another major effort is in model building where model builders develop ideas for what physics may lie beyond the Standard Model (at higher energies or smaller distances). This work is often motivated by the hierarchy problem and is constrained by existing experimental data. It may involve work on supersymmetry, alternatives to the Higgs mechanism, extra spatial dimensions (such as the Randall–Sundrum models), Preon theory, combinations of these, or other ideas. Vanishing-dimensions theory is a particle physics theory suggesting that systems with higher energy have a smaller number of dimensions. A third major effort in theoretical particle physics is string theory. String theorists attempt to construct a unified description of quantum mechanics and general relativity by building a theory based on small strings, and branes rather than particles. If the theory is successful, it may be considered a "Theory of Everything", or "TOE". There are also other areas of work in theoretical particle physics ranging from particle cosmology to loop quantum gravity. Practical applications In principle, all physics (and practical applications developed therefrom) can be derived from the study of fundamental particles. 
In practice, even if "particle physics" is taken to mean only "high-energy atom smashers", many technologies have been developed during these pioneering investigations that later find wide uses in society. Particle accelerators are used to produce medical isotopes for research and treatment (for example, isotopes used in PET imaging), or used directly in external beam radiotherapy. The development of superconductors has been pushed forward by their use in particle physics. The World Wide Web and touchscreen technology were initially developed at CERN. Additional applications are found in medicine, national security, industry, computing, science, and workforce development, illustrating a long and growing list of beneficial practical applications with contributions from particle physics. Future Major efforts to look for physics beyond the Standard Model include the Future Circular Collider proposed for CERN and the Particle Physics Project Prioritization Panel (P5) in the US that will update the 2014 P5 study that recommended the Deep Underground Neutrino Experiment, among other experiments.
Physical sciences
Physics
null
23264
https://en.wikipedia.org/wiki/Plain
Plain
In geography, a plain, commonly known as flatland, is a flat expanse of land that generally does not change much in elevation, and is primarily treeless. Plains occur as lowlands along valleys or at the base of mountains, as coastal plains, and as plateaus or uplands. Plains are one of the major landforms on earth, being present on all continents and covering more than one-third of the world's land area. Plains in many areas are important for agriculture. There are various types of plains and biomes on them. Description A plain or flatland is a flat expanse of land with a layer of grass that generally does not change much in elevation, and is primarily treeless. Plains occur as lowlands along valleys or at the base of mountains, as coastal plains, and as plateaus or uplands. Plains are one of the major landforms on earth, where they are present on all continents, and cover more than one-third of the world's land area. In a valley, a plain is enclosed on two sides, but in other cases a plain may be delineated by a complete or partial ring of hills, by mountains, or by cliffs. Where a geological region contains more than one plain, they may be connected by a pass (sometimes termed a gap). Coastal plains mostly rise from sea level until they run into elevated features such as mountains or plateaus. Plains can be formed from flowing lava; from deposition of sediment by water, ice, or wind; or formed by erosion by the agents from hills or mountains. Biomes on plains include grassland (temperate or subtropical), steppe (semi-arid), savannah (tropical) or tundra (polar). In a few instances, deserts and rainforests may also be considered plains. Plains in many areas are important for agriculture because where the soils were deposited as sediments they may be deep and fertile, and the flatness facilitates mechanization of crop production; or because they support grasslands which provide good grazing for livestock. Types of plain Depositional plains The types of depositional plains include: Abyssal plains, flat or very gently sloping areas of the deep ocean basin. Planitia , the Latin word for plain, is used in the naming of plains on extraterrestrial objects (planets and moons), such as Hellas Planitia on Mars or Sedna Planitia on Venus. Alluvial plains, which are formed by rivers and which may be one of these overlapping types: Alluvial plains, formed over a long period of time by a river depositing sediment on their flood plains or beds, which become alluvial soil. The difference between a flood plain and an alluvial plain is: a flood plain represents areas experiencing flooding fairly regularly in the present or recently, whereas an alluvial plain includes areas where a flood plain is now and used to be, or areas which only experience flooding a few times a century. Flood plain, adjacent to a lake, river, stream, or wetland that experiences occasional or periodic flooding. Scroll plain, a plain through which a river meanders with a very low gradient. Glacial plains, formed by the movement of glaciers under the force of gravity: Outwash plain (also known as sandur; plural sandar), a glacial out-wash plain formed of sediments deposited by melt-water at the terminus of a glacier. Sandar consist mainly of stratified (layered and sorted) gravel and sand. Till plains, plain of glacial till that form when a sheet of ice becomes detached from the main body of a glacier and melts in place depositing the sediments it carries. Till plains are composed of unsorted material (till) of all sizes. 
Lacustrine plains, plains that originally formed in a lacustrine environment, that is, as the bed of a lake. Lava plains, formed by sheets of flowing lava. Erosional plains Erosional plains have been leveled by various agents of denudation such as running water, rivers, wind and glacier which wear out the rugged surface and smoothens them. Plain resulting from the action of these agents of denudation are called peneplains (almost plain) while plains formed from wind action are called pediplains. Structural plains Structural plains are relatively undisturbed horizontal surfaces of the Earth. They are structurally depressed areas of the world that make up some of the most extensive natural lowlands on the Earth's surface. Notable examples America Caribbean and South America Altiplano (Bolivia, Chile) Altiplano Cundiboyacense (Colombia) Caroni Plain (Trinidad and Tobago) Chilean Central Valley Los Llanos Gran Chaco (Argentina, Bolivia, Paraguay) Llanos (Colombia and Venezuela) Pampas (Argentina, Uruguay, Brazil) Coastal plains of Chile North America Atlantic coastal plain (United States) Carrizo Plain (California, United States) Great Plains (Canada and United States) Guatemala South Coast (Guatemala) Gulf Coastal Plain (Mexico and United States) Interior Plains (Canada and United States) Lake Superior Lowland (Wisconsin, United States) Laramie Plains (Wyoming) Mississippi Alluvial Plain (Mississippi) Oxnard Plain (Ventura County, California) Snake River Plain (Idaho) Asia Eastern Asia Chianan Plain (Taiwan) Depsang Plains (China and India) Honam Plain (South Korea) Kantō Plain (Japan) Kedu Plain (Indonesia) Kewu Plain (Indonesia) Mallig Plains (Philippines) Nōbi Plain (Japan) North China Plain (China) Osaka Plain (Japan) Pingtung Plain (Taiwan) Sarobetsu plain (Japan) Sendai Plain (Japan) Yilan Plain (Taiwan) North Asia West Siberian Plain (Russia) South Asia Bhuikhel (Nepal) Depsang Plains (India and China) Dooars (India and Bhutan) Eastern coastal plains (India) Indo-Gangetic Plains (Bangladesh, India, Nepal and Pakistan) More plains (India) North Bengal plains (Bangladesh and India) Punjab Plains (Pakistan and India) Terai (India and Nepal) Utkal Plains (India) Western coastal plains (India) Western Asia Al-Ghab Plain (Syria) Aleppo plateau (Syria) Ararat Plain (Armenia and Turkey) Israeli coastal plain (Israel) Khuzestan Plain (Iran) Mugan plain (Azerbaijan and Iran) Nineveh Plains (Iraqi Kurdistan) Shiraki Plain (Georgia) Europe Central Europe Limagne (France) North German Plain Ochsenfeld (France) Pannonian Basin (Central Europe) Parndorf Plain (Austria) Westphalian Lowland (Germany) Eastern Europe Bărăgan Plain (Romania) Danubian Plain (Bulgaria) Dnieper Lowland (Ukraine) East European Plain European Plain Great Hungarian Plain Kosovo field (Kosovo) Little Hungarian Plain (Austria, Hungary, and Slovakia) Pannonian Steppe (Hungary) Polesian Lowland (Ukraine and Belarus) Upper Thracian Plain (Bulgaria) Wallachian Plain (Romania) Northern Europe Cheshire Plain (England) Hardangervidda (Norway) Kaffiøyra (Svalbard, Norway) Muddus plains (Sweden) North European Plain North Northumberland Coastal Plain (Northern England) North Somerset Levels (North Somerset, England) Salisbury Plain (England) Solway Plain (Cumbria, England) Somerset Levels (Somerset, England) South Coast Plain (Hampshire and Sussex, England) South Småland peneplain (Sweden) Stora Alvaret (Öland, Sweden) Strandflat (Norway) Sub-Cambrian peneplain (Nordic countries) Central Swedish lowland Ostrobothnian Plain (Finland) The 
Fylde (Lancashire, England) Southern Europe Agro Nocerino Sarnese (Italy) Campidano (Italy) Lelantine Plain (Greece) Mesaoria (Cyprus) Messara Plain (Greece) Nurra (Sardinia, Italy) Po Valley (Italy) Rieti Valley (Central Italy) Tavoliere delle Puglie (Southern Italy) Oceania Australia Bogong High Plains (Victorian Alps, Australia) Cumberland Plain (Sydney, Australia) Esperance Plains (Western Australia) Molonglo Plain (Australian Capital Territory) Mulga Lands (eastern Australia) Nullarbor Plain (Southern Australia) Ord Victoria Plain (Northern Australia) Swan Coastal Plain (Perth, Australia) New Zealand Awarua Plains (Southland) Canterbury Plains (Canterbury) Hauraki Plains (Waikato) Maniototo (Otago) Taieri (Otago)
Physical sciences
Landforms
null
23269
https://en.wikipedia.org/wiki/Physicist
Physicist
A physicist is a scientist who specializes in the field of physics, which encompasses the interactions of matter and energy at all length and time scales in the physical universe. Physicists generally are interested in the root or ultimate causes of phenomena, and usually frame their understanding in mathematical terms. They work across a wide range of research fields, spanning all length scales: from sub-atomic and particle physics, through biological physics, to cosmological length scales encompassing the universe as a whole. The field generally includes two types of physicists: experimental physicists who specialize in the observation of natural phenomena and the development and analysis of experiments, and theoretical physicists who specialize in mathematical modeling of physical systems to rationalize, explain and predict natural phenomena. Physicists can apply their knowledge towards solving practical problems or to developing new technologies (also known as applied physics or engineering physics). History The study and practice of physics is based on an intellectual ladder of discoveries and insights from ancient times to the present. Many mathematical and physical ideas used today found their earliest expression in the work of ancient civilizations, such as the Babylonian astronomers and Egyptian engineers, the Greek philosophers of science and mathematicians such as Thales of Miletus, Euclid in Ptolemaic Egypt, Archimedes of Syracuse and Aristarchus of Samos. Roots also emerged in ancient Asian cultures such as India and China, and particularly the Islamic medieval period, which saw the development of scientific methodology emphasising experimentation, such as the work of Ibn al-Haytham (Alhazen) in the 11th century. The modern scientific worldview and the bulk of physics education can be said to flow from the scientific revolution in Europe, starting with the work of astronomer Nicolaus Copernicus leading to the physics of Galileo Galilei and Johannes Kepler in the early 1600s. The work on mechanics, along with a mathematical treatment of physical systems, was further developed by Christiaan Huygens and culminated in Newton's laws of motion and Newton's law of universal gravitation by the end of the 17th century. The experimental discoveries of Faraday and the theory of Maxwell's equations of electromagnetism were developmental high points during the 19th century. Many physicists contributed to the development of quantum mechanics in the early-to-mid 20th century. New knowledge in the early 21st century includes a large increase in understanding physical cosmology. The broad and general study of nature, natural philosophy, was divided into several fields in the 19th century, when the concept of "science" received its modern shape. Specific categories emerged, such as "biology" and "biologist", "physics" and "physicist", "chemistry" and "chemist", among other technical fields and titles. The term physicist was coined by William Whewell (also the originator of the term "scientist") in his 1840 book The Philosophy of the Inductive Sciences. Education A standard undergraduate physics curriculum consists of classical mechanics, electricity and magnetism, non-relativistic quantum mechanics, optics, statistical mechanics and thermodynamics, and laboratory experience. Physics students also need training in mathematics (calculus, differential equations, linear algebra, complex analysis, etc.), and in computer science. 
Any physics-oriented career position requires at least an undergraduate degree in physics or applied physics, while career options widen with a master's degree like MSc, MPhil, MPhys or MSci. For research-oriented careers, students work toward a doctoral degree specializing in a particular field. Fields of specialization include experimental and theoretical astrophysics, atomic physics, biological physics, chemical physics, condensed matter physics, cosmology, geophysics, gravitational physics, materials science, medical physics, microelectronics, molecular physics, nuclear physics, optics, particle physics, plasma physics, quantum information science, and radiophysics. Careers The three major employers of career physicists are academic institutions, laboratories, and private industries, with the largest employer being the last. Physicists in academia or government labs tend to have titles such as assistant professor, professor, senior or junior scientist, or postdoc. As per the American Institute of Physics, some 20% of new physics Ph.D.s hold jobs in engineering development programs, while 14% turn to computer software and about 11% are in business/education. A majority of employed physicists apply their skills and training to interdisciplinary sectors (e.g. finance). Job titles for graduate physicists include Agricultural Scientist, Air Traffic Controller, Biophysicist, Computer Programmer, Electrical Engineer, Environmental Analyst, Geophysicist, Medical Physicist, Meteorologist, Oceanographer, Physics Teacher/Professor/Researcher, Research Scientist, Reactor Physicist, Engineering Physicist, Satellite Missions Analyst, Science Writer, Stratigrapher, Software Engineer, Systems Engineer, Microelectronics Engineer, Radar Developer, Technical Consultant, etc. The majority of terminal physics bachelor's degree holders are employed in the private sector. Other fields are academia, government and military service, nonprofit entities, labs and teaching. Typical duties of physicists with master's and doctoral degrees working in their domain involve research, observation and analysis, data preparation, instrumentation, design and development of industrial or medical equipment, computing and software development, etc. Honors and awards The highest honor awarded to physicists is the Nobel Prize in Physics, awarded since 1901 by the Royal Swedish Academy of Sciences. National physical societies have many prizes and awards for professional recognition. In the case of the American Physical Society, as of 2023, there are 25 separate prizes and 33 separate awards in the field. Professional certification United Kingdom Chartered Physicist (CPhys) is a chartered status and a professional qualification awarded by the Institute of Physics. It is denoted by the postnominals "CPhys". Achieving chartered status in any profession denotes to the wider community a high level of specialised subject knowledge and professional competence. According to the Institute of Physics, holders of the award of Chartered Physicist (CPhys) demonstrate the "highest standards of professionalism, up-to-date expertise, quality and safety" along with "the capacity to undertake independent practice and exercise leadership" as well as "commitment to keep pace with advancing knowledge and with the increasing expectations and requirements for which any profession must take responsibility".
Chartered Physicist is considered to be equal in status to Chartered Engineer, which the IoP also awards as a member of the Engineering Council UK, and other chartered statuses in the UK. It is also considered a "regulated profession" under the European professional qualification directives. Canada The Canadian Association of Physicists can confer an official designation called Professional Physicist (P. Phys.), similar to the designation of Professional Engineer (P. Eng.). This designation was unveiled at the CAP congress in 1999 and more than 200 people already carry this distinction. To obtain the certification, proof of at least an honours bachelor's or higher degree in physics or a closely related discipline must be provided. Also, the physicist must have completed, or be about to complete, three years of recent physics-related work experience after graduation. In addition, unless exempted, a professional practice examination must also be passed. An exemption can be granted to a candidate who has practiced physics for at least seven years and provides a detailed description of their professional accomplishments which clearly demonstrates that the exam is not necessary. Work experience will be considered physics-related if it uses physics directly or significantly uses the modes of thought (such as the approach to problem-solving) developed in the candidate's education or experience as a physicist, regardless of whether the experience is in academia, industry, government, or elsewhere. Management of physics-related work qualifies, and so does appropriate graduate student work. South Africa The South African Institute of Physics also offers certification as a Professional Physicist (Pr.Phys). At a minimum, the holder must possess a three-year bachelor's or equivalent degree in physics or a related field and an additional minimum of six years' experience in a physics-related activity; or an honours or equivalent degree in physics or a related field and an additional minimum of five years' experience in a physics-related activity; or a master's or equivalent degree in physics or a related field and an additional minimum of three years' experience in a physics-related activity; or a doctorate or equivalent degree in physics or a related field; or training or experience which, in the opinion of the Council, is equivalent to any of the above. Professional societies Physicists may be members of a physical society of a country or region. Physical societies commonly publish scientific journals, organize physics conferences and award prizes for contributions to the field of physics. Some examples of physical societies are the American Physical Society and the Institute of Physics, with the oldest physical society being the German Physical Society.
Physical sciences
Physics basics: General
Physics
23287
https://en.wikipedia.org/wiki/Passport
Passport
A passport is an official travel document issued by a government that certifies a person's identity and nationality for international travel. A passport allows its bearer to enter and temporarily reside in a foreign country, access local aid and protection, and obtain consular assistance from their government. In addition to facilitating travel, passports are a key mechanism for border security and regulating migration; they may also serve as official identification for various domestic purposes. State-issued travel documents have existed in some form since antiquity; the modern passport was universally adopted and standardized in 1920. The passport takes the form of a booklet bearing the official name and emblem of the issuing government and containing the biographical information of the individual, including their full name, photograph, place and date of birth, and signature. A passport does not create any rights in the country being visited nor impose any obligation on the issuing country; rather, it provides certification to foreign government officials of the holder's identity and right to travel, with pages available for inserting entry and exit stamps and travel visas—endorsements that allow the individual to enter and temporarily reside in a country for a period of time and under certain conditions. Since 1998, many countries have transitioned to biometric passports, which contain an embedded microchip to facilitate authentication and safeguard against counterfeiting. As of July 2024, over 150 jurisdictions issue such "e-passports"; previously issued non-biometric passports usually remain valid until expiration. Eligibility for a passport varies by jurisdiction, although citizenship is a common prerequisite. However, a passport may be issued to individuals who do not have the status or full rights of citizenship, such as American or British nationals. Likewise, certain classes of individuals, such as diplomats and government officials, may be issued special passports that provide certain rights and privileges, such as immunity from arrest or prosecution. While passports are typically issued by national governments, certain subnational entities are authorised to issue passports to citizens residing within their borders. Additionally, other types of official documents may serve a similar role to passports but are subject to different eligibility requirements, purposes, or restrictions. History Etymology and origin Etymological sources show that the term "passport" may derive from a document required by some medieval Italian states in order for an individual to pass through the physical harbor (Italian passa porto, "to pass the harbor") or gate (Italian passa porte, "to pass the gates") of a walled city or jurisdiction. Such documents were issued by local authorities to foreign travellers—as opposed to local citizens, as is the modern practice—and generally contained a list of towns and cities the document holder was permitted to enter or pass through. On the whole, documents were not required for travel to seaports, which were considered open trading points, but documents were required to pass harbor controls and travel inland from seaports. The transition from private to state control over movement was an essential aspect of the transition from feudalism to capitalism. 
Communal obligations to provide poor relief were an important source of the desire for controls on movement.:10 Antecedents One of the earliest known references to paperwork that served an analogous role to a passport is found in the Hebrew Bible. Nehemiah 2:7–9, dating from approximately 450 BC, states that Nehemiah, an official serving King Artaxerxes I of Persia, asked permission to travel to Judea; the king granted leave and gave him a letter "to the governors beyond the river" requesting safe passage for him as he traveled through their lands. The ancient Indian political text Arthashastra (third century BCE) mentions passes issued at the rate of one masha per pass to enter and exit the country, and describes the duties of the () who must issue sealed passes before a person could enter or leave the countryside. Passports were an important part of the Chinese bureaucracy as early as the Western Han (202 BC – 9 AD), if not in the Qin dynasty. They required such details as age, height, and bodily features. These passports () determined a person's ability to move throughout imperial counties and through points of control. Even children needed passports, but those of one year or less who were in their mother's care may not have needed them. In the medieval Islamic Caliphate, a form of passport was the bara'a, a receipt for taxes paid. Only people who paid their zakah (for Muslims) or jizya (for dhimmis) taxes were permitted to travel to different regions of the Caliphate; thus, the bara'a receipt was a "basic passport". In the 12th century, the Republic of Genoa issued a document called Bulletta, which was issued to the nationals of the Republic who were traveling to the ports of the emporiums and the ports of the Genoese colonies overseas, as well as to foreigners who entered them. King Henry V of England is credited with having invented what some consider the first British passport in the modern sense, as a means of helping his subjects prove who they were in foreign lands. The earliest reference to these documents is found in a 1414 Act of Parliament. In 1540, granting travel documents in England became a role of the Privy Council of England, and it was around this time that the term "passport" was used. In 1794, issuing British passports became the job of the Office of the Secretary of State. In the Holy Roman Empire, the 1548 Imperial Diet of Augsburg required the public to hold imperial documents for travel, at the risk of permanent exile. In 1791, Louis XVI masqueraded as a valet during his Flight to Varennes as passports for the nobility typically included a number of persons listed by their function but without further description.:31–32 A Pass-Card Treaty of October 18, 1850 among German states standardized information including issuing state, name, status, residence, and description of bearer. Tramping journeymen and jobseekers of all kinds were not to receive pass-cards.:92–93 Modern development A rapid expansion of railway infrastructure and wealth in Europe beginning in the mid-nineteenth century led to large increases in the volume of international travel and a consequent unique dilution of the passport system for approximately thirty years prior to World War I. The speed of trains, as well as the number of passengers that crossed multiple borders, made enforcement of passport laws difficult. The general reaction was the relaxation of passport requirements. 
In the later part of the nineteenth century and up to World War I, passports were not required, on the whole, for travel within Europe, and crossing a border was a relatively straightforward procedure. Consequently, comparatively few people held passports. During World War I, European governments introduced border passport requirements for security reasons, and to control the emigration of people with useful skills. These controls remained in place after the war, becoming a standard, though controversial, procedure. British tourists of the 1920s complained, especially about attached photographs and physical descriptions, which they considered led to a "nasty dehumanisation". The British Nationality and Status of Aliens Act was passed in 1914, clearly defining the notions of citizenship and creating a booklet form of the passport. In 1920, the League of Nations held a conference on passports, the Paris Conference on Passports & Customs Formalities and Through Tickets. Passport guidelines and a general booklet design resulted from the conference, which was followed up by conferences in 1926 and 1927. The League of Nations issued Nansen passports to stateless refugees from 1922 to 1938. While the United Nations held a travel conference in 1963, no passport guidelines resulted from it. Passport standardization came about in 1980, under the auspices of the ICAO. ICAO standards include those for machine-readable passports. Such passports have an area where some of the information otherwise written in textual form is written as strings of alphanumeric characters, printed in a manner suitable for optical character recognition. This enables border controllers and other law enforcement agents to process these passports more quickly, without having to input the information manually into a computer. ICAO publishes Doc 9303 Machine Readable Travel Documents, the technical standard for machine-readable passports. A more recent standard is for biometric passports. These contain biometrics to authenticate the identity of travellers. The passport's critical information is stored on a tiny RFID computer chip, much like information stored on smartcards. Like some smartcards, the passport booklet design calls for an embedded contactless chip that is able to hold digital signature data to ensure the integrity of the passport and the biometric data. Historically, legal authority to issue passports is founded on the exercise of each country's executive discretion. Certain legal tenets follow, namely: first, passports are issued in the name of the state; second, no person has a legal right to be issued a passport; third, each country's government, in exercising its executive discretion, has complete and unfettered discretion to refuse to issue or to revoke a passport; and fourth, that the latter discretion is not subject to judicial review. However, legal scholars including A.J. Arkelian have argued that evolutions in both the constitutional law of democratic countries and the international law applicable to all countries now render those historical tenets both obsolete and unlawful. Types Governments around the world issue a variety of passports for different purposes. The most common variety are ordinary passports issued to individual citizens and other nationals. In the past, certain countries issued collective passports or family passports. Today, passports are typically issued to individual travellers rather than groups. 
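The check digits printed in the machine-readable zone described above can be illustrated with a short calculation following the scheme published in ICAO Doc 9303: digits keep their face value, letters A to Z map to 10 through 35, the filler character "<" counts as 0, and each character value is multiplied by the repeating weights 7, 3, 1 before summing modulo 10. The sketch below is an illustrative Python rendering of that rule; the helper names are made up for this example, and the sample value is a specimen document number of the kind used in the standard's own examples, not a real passport number.

# Illustrative sketch of the ICAO Doc 9303 check-digit rule used in a passport's
# machine-readable zone (MRZ); uses only standard Python.
def mrz_char_value(c):
    # Digits keep their value, letters map to A=10 ... Z=35, and the filler '<' counts as 0.
    if c.isdigit():
        return int(c)
    if c.isalpha():
        return ord(c.upper()) - ord('A') + 10
    return 0

def mrz_check_digit(field):
    # Weight each character value by 7, 3, 1 (repeating) and take the sum modulo 10.
    weights = (7, 3, 1)
    total = sum(mrz_char_value(c) * weights[i % 3] for i, c in enumerate(field))
    return total % 10

print(mrz_check_digit("L898902C3"))  # prints 6, the digit that would follow this field in the MRZ

In a full machine-readable zone the same calculation is applied separately to the document number, date of birth and date of expiry fields, with a final composite check digit computed over several of these fields together with their individual check digits.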
Aside from ordinary passports issued to citizens by national governments, there are a variety of other types of passports by governments in specific circumstances. While individuals are typically only permitted to hold one passport, certain governments permit citizens to hold more than one ordinary passport. Individuals may also simultaneously hold an ordinary passport and an official or diplomatic passport. Emergency passport Emergency passports (also called temporary passports) are issued to persons with urgent need to travel who do not have passports, e.g. someone abroad whose passport has been lost or stolen who needs to travel home within a few days, someone whose passport expires abroad, or someone who urgently needs to travel abroad who does not have a passport with sufficient validity. These passports are intended for very short durations, e.g. to allow immediate one-way travel back to the home country. Laissez-passer are also used for this purpose. Uniquely, the United Kingdom issues emergency passports to citizens of certain Commonwealth states who lose their passports in non-Commonwealth countries where their home state does not maintain a diplomatic or consular mission. Diplomatic and official passports Pursuant to the Vienna Convention on Diplomatic Relations, Vienna Convention on Consular Relations, and the immunity afforded to officials of a foreign state under customary international law, diplomats and other individuals travelling on government business are entitled to reduced scrutiny at border checkpoints when travelling overseas. Consequently, such individuals are typically issued special passports indicating their status. These passports come in three distinct varieties: Diplomatic passports Typically issued to accredited diplomats, senior consular staff, heads of state or government, and to senior foreign ministry employees. Individuals holding diplomatic passports are usually entitled to certain degrees of immunity from border control inspections, depending on their home countries and their countries of entry. Service/official passports Issued to senior government officials travelling on state business who are not eligible for diplomatic passports. Holders of official passports are typically entitled to similar immunity from border control inspections. In the United States of America, official and service passports are two distinct categories of passport, with official passports being issued to senior government officials while service passports are issued to government contractors. Public affairs passports Issued to Chinese citizens holding senior positions in state-owned companies. While public affairs passports do not usually entitle their bearers to exemption from searches at border checkpoints, they are subject to more liberal visa policies in several countries primarily in Africa and Asia (see: Visa requirements for Chinese citizens). Passports without right of abode Unlike most countries, the United Kingdom and the Republic of China issue various categories of passports to individuals without the right of abode in their territory. In the United Kingdom's case, these passports are typically issued to individuals connected with a former British colony while, in the ROC's case, these passports are the result of the legal distinction between ROC nationals with and without residence in the area it administers. 
In both cases, holders of such passports are able to obtain residence on an equal footing with foreigners by applying for indefinite leave to remain (UK) or a resident certificate (ROC). Republic of China (Taiwan) A Republic of China citizen who does not have household registration () in the area administered by the ROC is classified as a National Without Household Registration (NWOHR; ) and is subject to immigration controls when clearing ROC border controls, does not have automatic residence rights, and cannot vote in Taiwanese elections. However, they are exempt from conscription. Most individuals with this status are children born overseas to ROC citizens who do hold household registration. Additionally, because the ROC observes the principle of jus sanguinis, members of the overseas Chinese community are also regarded as citizens. During the Cold War, both the ROC and PRC governments actively sought the support of overseas Chinese communities in their attempts to secure the position as the legitimate sole government of China. The ROC also encouraged overseas Chinese businessmen to settle in Taiwan to facilitate economic development and regulations concerning evidence of ROC nationality by descent were particularly lax during the period, allowing many overseas Chinese the right to settle in Taiwan. About 60,000 NWOHRs currently hold Taiwanese passports with this status. United Kingdom The United Kingdom issues several similar but distinct passports which correspond to the country's several categories of nationality. Full British citizens are issued a standard British passport. British citizens resident in the Crown Dependencies may hold variants of the British passport which confirm their Isle of Man, Jersey, or Guernsey identity. Many of the other categories of nationality do not grant bearers right of abode in the United Kingdom itself. British National (Overseas) passports are issued to individuals connected to Hong Kong prior to its return to China. British Overseas Citizen passports are primarily issued to individuals who did not acquire the citizenship of the colony they were connected to when it obtained independence (or their stateless descendants). British Overseas Citizen passports are also issued to certain categories of Malaysian nationals in Penang and Malacca, and individuals connected to Cyprus as a result of the legislation granting independence to those former British colonies. British Protected Person passports are issued to otherwise stateless people connected to a former British protectorate. British subject passports are issued to otherwise stateless individuals connected to British India or to certain categories of Irish citizens (though, in the latter case, they do convey right of abode). Additionally, individuals connected to a British overseas territory are accorded British Overseas Territories citizenship and may hold passports issued by the governments of their respective territory. All overseas territory citizens are also now eligible for full British citizenship. Each territory maintains its own criteria for determining whom it grants right of abode. Consequently, individuals holding BOTC passports are not necessarily entitled to enter or reside in the territory that issued their passport. Most countries distinguish between BOTC and other classes of British nationality for border control purposes. 
For instance, only Bermudian passport holders with an endorsement stating that they possess right of abode or belonger status in Bermuda are entitled to enter America without an electronic travel authorisation. Border control policies in many jurisdictions distinguish between holders of passports with and without right of abode, including NWOHRs and holders of the various British passports that do not confer right of abode upon the bearer. Certain jurisdictions may additionally distinguish between holders of such British passports with and without indefinite leave to remain in the United Kingdom. NWOHRs do not, for instance, have access to the Visa Waiver Program, or visa-free access to the Schengen Area or Japan. Other countries, such as India, which allows all Chinese nationals to apply for eVisas, do not make such a distinction. Notably, while Singapore does permit visa-free entry to all categories of British passport holders, it reduces the length of stay for British nationals without right of abode in the United Kingdom, but does not distinguish between ROC passport holders with and without household registration. Until 31 January 2021, holders of British National (Overseas) passports were able to use their UK passports for immigration clearance in Hong Kong and to seek consular protection from overseas Chinese diplomatic missions. This was a unique arrangement as it involved a passport issued by one state conferring right of abode (or, more precisely, right to land) in and consular protection from another state. Since that date, the Chinese and Hong Kong governments have prohibited the use of BN(O) passports as travel documents or proof of identity, and the BN(O) passport, much like British Overseas Citizen, British Protected Person, or ROC NWOHR passports, is not associated with right of abode in any territory. BN(O)s who do not possess Chinese (or any other) nationality are required to use a Document of Identity for Visa Purposes for travel. This restriction disproportionately affects ease of travel for permanent residents of Indian, Pakistani, and Nepali ethnicity, who were not granted Chinese nationality in 1997. As an additional consequence, Hongkongers seeking early pre-retirement withdrawals from the Mandatory Provident Fund pension scheme may not use BN(O) passports for identity verification. Latvia and Estonia Similarly, non-citizens in Latvia and in Estonia are individuals, primarily of Russian or Ukrainian ethnicity, who are not citizens of Latvia or Estonia but whose families have resided in the area since the Soviet occupation, and thus have the right to a special non-citizen passport issued by the government as well as some other specific rights. Approximately two thirds of them are ethnic Russians, followed by ethnic Belarusians, ethnic Ukrainians, ethnic Poles and ethnic Lithuanians. According to the UN Special Rapporteur, the citizenship and naturalization laws in Latvia "are seen by the Russian community as discriminatory practices". Per Russian visa policy, holders of the Estonian alien's passport or the Latvian non-citizen passport are entitled to visa-free entry to Russia, in contrast to Estonian and Latvian citizens who must obtain an electronic visa. Regional and subnational passports China The People's Republic of China (PRC) authorises its Special Administrative Regions of Hong Kong and Macau to issue passports to their permanent residents with Chinese nationality under the "one country, two systems" arrangement.
Visa policies imposed by foreign authorities on Hong Kong and Macau permanent residents holding such passports are different from those applied to holders of ordinary passports of the People's Republic of China. A Hong Kong Special Administrative Region passport (HKSAR passport) and Macau Special Administrative Region passport (MSAR passport) grant visa-free access to many more countries than ordinary PRC passports. On 1 July 2011, the Ministry of Foreign Affairs of the People's Republic of China launched a trial issuance of e-passports for individuals conducting public affairs work overseas on behalf of the Chinese government. The face, fingerprints, and other biometric features of the passport holder are digitized and stored in a pre-installed contactless smart chip, along with "the passport owner's name, sex and personal photo as well as the passport's term of validity and [the] digital certificate of the chip". Ordinary biometric passports were introduced by the Ministry of Public Security on 15 May 2012. As of January 2015, all new passports issued by China are biometric e-passports, and non-biometric passports are no longer issued. In 2012, over 38 million Chinese citizens held ordinary passports, comprising only 2.86 percent of the total population at the time. In 2014, China issued 16 million passports, ranking first in the world, surpassing the United States (14 million) and India (10 million). The number of ordinary passports in circulation rose to 120 million by October 2016, which was approximately 8.7 percent of the population. As of April 2017, China had issued over 100 million biometric ordinary passports. Kingdom of Denmark The three constituent countries of the Danish Realm have a common nationality. Denmark proper is a member of the European Union, but Greenland and the Faroe Islands are not. Danish citizens residing in Greenland or the Faroe Islands can choose between holding a Danish EU passport and a Greenlandic or Faroese non-EU Danish passport. As of 21 September 2022, Danish citizens had visa-free or visa on arrival access to 188 countries and territories, thus ranking the Danish passport fifth in the world (tied with the passports of Austria, the Netherlands, and Sweden) according to the Henley Passport Index. According to the World Tourism Organization 2016 report, the Danish passport is first in the world (tied with Finland, Germany, Italy, Luxembourg, Singapore, and the United Kingdom) in terms of travel freedom, with a mobility index of 160 (out of 215, with no visa weighted by 1, visa on arrival by 0.7, eVisa by 0.5 and traditional visa by 0). Serbian Coordination Directorate Passports in Kosovo Under Serbian law, people born or otherwise legally settled in Kosovo are considered Serbian nationals and as such they are entitled to a Serbian passport. However, these passports are not issued directly by the Serbian Ministry of Internal Affairs but by the Serbian Coordination Directorate for Kosovo and Metohija instead. These particular passports do not allow the holder to enter the Schengen Area without a visa. As of August 2023, Serbian citizens had visa-free or visa on arrival access to 138 countries and territories, ranking the Serbian passport 38th overall in terms of travel freedom according to the Henley Passport Index. The Serbian passport is one of the five passports with the most improved rating globally since 2006 in terms of the number of countries its holders may visit without a visa. American Samoa Although all U.S. citizens are also U.S.
nationals, the reverse is not true. As specified in , a person whose only connection to the United States is through birth in an outlying possession (which is defined in as American Samoa and Swains Island, the latter of which is administered as part of American Samoa), or through descent from a person so born, acquires U.S. nationality but not the citizenship. This was formerly the case in a few other current or former U.S. overseas possessions, i.e. the Panama Canal Zone and Trust Territory of the Pacific Islands. The passport issued to non-citizen nationals contains the endorsement code 9 which states: "THE BEARER IS A UNITED STATES NATIONAL AND NOT A UNITED STATES CITIZEN." on the annotations page. Non-citizen nationals may reside and work in the United States without restrictions, and may apply for citizenship under the same rules as resident aliens. Like resident aliens, they are not presently allowed by any U.S. state to vote in federal or state elections. Passports issued by entities without sovereign territory Several entities without a sovereign territory issue documents described as passports, most notably Iroquois League, the Aboriginal Provisional Government in Australia and the Sovereign Military Order of Malta. Such documents are not necessarily accepted for entry into a country. Details and specifications Criteria for issuance Each country sets its own conditions for the issue of passports. Under the law of most countries, passports are government property, and may be limited or revoked at any time, usually on specified grounds, and possibly subject to judicial review. In many countries, surrender of one's passport is a condition of granting bail in lieu of imprisonment for a pending criminal trial due to the risk of the person leaving the country. When passport holders apply for a new passport (commonly, due to expiration of the previous passport, insufficient validity for entry to some countries or lack of blank pages), they may be required to surrender the old passport for invalidation. In some circumstances an expired passport is not required to be surrendered or invalidated (for example, if it contains an unexpired visa). Requirements for passport applicants vary significantly from country to country, with some states imposing stricter measures than others. For example, Pakistan requires applicants to be interviewed before a Pakistani passport will be granted. When applying for a passport or a national ID card, all Pakistanis are required to sign an oath declaring Mirza Ghulam Ahmad to be an impostor prophet and all Ahmadis to be non-Muslims. In contrast, individuals holding British National (Overseas) status are legally entitled to hold a passport in that capacity. Countries with conscription or national service requirements may impose restrictions on passport applicants who have not yet completed their military obligations. For example, in Finland, male citizens aged 18–30 years must prove that they have completed, or are exempt from, their obligatory military service to be granted an unrestricted passport; otherwise a passport is issued valid only until the end of their 28th year, to ensure that they return to carry out military service. Other countries with obligatory military service, such as South Korea and Syria, have similar requirements, e.g. South Korean passport and Syrian passport. Validity Passports have a limited validity, usually between 5 and 10 years. 
Many countries require passports to be valid for a minimum of six months beyond the planned date of departure, as well as having at least two to four blank pages. It is recommended that a passport be valid for at least six months from the departure date as many airlines deny boarding to passengers whose passport has a shorter expiry date, even if the destination country does not have such a requirement for incoming visitors. There is an increasing trend for adult passports to be valid for ten years, such as the United Kingdom passport, United States passport, New Zealand passport (after 30 November 2015) or Australian passport. Some countries issue passports that are valid for longer than 10 years, which the ICAO does not recommend due to security concerns, and some countries, including all member states of the European Union, do not accept passports older than 10 years. Cover designs Passport booklets from almost all countries around the world display the national coat of arms of the issuing country on the front cover. The United Nations keeps a record of national coats of arms, but displaying a coat of arms is not an internationally recognised requirement for a passport. There are several groups of countries that have, by mutual agreement, adopted common designs for their passports: The European Union. The design and layout of passports of the member states of the European Union are a result of consensus and recommendation, rather than of directive. Passports are issued by member states and may consist of either the usual passport booklet or the newer passport card format. The covers of ordinary passport booklets are burgundy-red (except for Croatia which has a blue cover), with "European Union" written in the national language or languages. Below that are the name of the country, the national coat of arms, the word or words for "passport", and, at the bottom, the symbol for a biometric passport. The data page can be at the front or at the back of a passport booklet and there are significant design differences throughout to indicate which member state is the issuer. Member states that participate in the Schengen Agreement have agreed that their e-passports should contain fingerprint information in the chip. In 2006, the members of the CA-4 Treaty (Guatemala, El Salvador, Honduras, and Nicaragua) adopted a common-design passport, called the Central American passport, following a design already in use by Nicaragua and El Salvador since the mid-1990s. It features a navy-blue cover with the words "América Central" and a map of Central America, and with the territory of the issuing country highlighted in gold (in place of the individual nations' coats of arms). At the bottom of the cover are the name of the issuing country and the passport type. The members of the Andean Community of Nations (Bolivia, Colombia, Ecuador, and Peru) began to issue commonly designed passports in 2005. Specifications for the common passport format were outlined in an Andean Council of Foreign Ministers meeting in 2002. Previously issued national passports will be valid until their expiry dates. Andean passports are bordeaux (burgundy-red), with words in gold. Centred above the national seal of the issuing country is the name of the regional body in Spanish (Comunidad Andina). Below the seal is the official name of the member country. At the bottom of the cover is the Spanish word "pasaporte" along with the English "passport".
Venezuela formerly issued Andean passports, but it has since left the Andean Community and no longer issues them. The Union of South American Nations had signaled an intention to establish a common passport design, but it is doubtful that this will happen since the group effectively broke up in 2019. Twelve member states of the Caribbean Community (CARICOM) began issuing passports with a common design in early 2009. The design features the CARICOM symbol along with the national coat of arms and name of the member state, rendered in a CARICOM official language (English, French, or Dutch). The member states which use the common design are Antigua and Barbuda, Barbados, Belize, Dominica, Grenada, Guyana, Jamaica, Saint Kitts and Nevis, Saint Lucia, Saint Vincent and the Grenadines, Suriname, and Trinidad and Tobago. There was a movement by the Organisation of Eastern Caribbean States (OECS) to issue a common-design passport, but the implementation of the CARICOM passport made that redundant, and it was abandoned. Request page Passports sometimes contain a message, usually near the front, requesting that the passport's bearer be allowed to pass freely, and further requesting that, in the event of need, the bearer be granted assistance. The message is sometimes made in the name of the government or the head of state, and may be written in more than one language, depending on the language policies of the issuing authority. Languages In 1920, an international conference on passports and through tickets held by the League of Nations recommended that passports be issued in the French language, historically the language of diplomacy, and one other language. Currently, the ICAO recommends that passports be issued in English, French, and Spanish; or in the national language of the issuing country and in either English, French, or Spanish. Many European countries use their national language, along with English and French. Some additional language combinations are: National passports of the European Union bear all of the official languages of the European Union. Two or three languages are printed at the relevant points, followed by reference numbers which point to the passport page where translations into the remaining languages appear. Algerian, Chadian, Lebanese, Mauritanian, Moroccan and Tunisian passports are in three languages: Arabic, English, and French. The Barbadian passport and the United States passport are trilingual: English, French and Spanish. United States passports had been printed in English and French since 1976, but began being printed with a Spanish message and labels during the late 1990s, in recognition of Puerto Rico's Spanish-speaking status. Since 2007, the Data Page, which contains photo, identifying information, and the passport's issuance and expiration dates, and the Personal Data and Emergency Contact page are written in English, French, and Spanish; the cover and instructions pages are printed solely in English. On Belgian passports, all three official languages (Dutch, French, German) appear on the cover, in addition to English on the main page. The order of the official languages depends on the official residence of the holder. Passports of Bosnia and Herzegovina are in the three official languages of Bosnian, Serbian and Croatian in addition to English. Brazilian passports contain four languages: Portuguese, the official country language; Spanish, because of bordering nations; English and French. 
British passports bear English and French on the information page and Spanish, Welsh, Irish and Scottish Gaelic translations on an extra page. Cypriot passports are in Greek, Turkish and English. Haitian passports are in French and Haitian Creole. Passports issued by the Holy See are in Latin (the language of the Catholic Church), French, and English. The first page of the old Libyan passport (green cover) was in Arabic only. The current passport has dark-blue cover, is electronically readable, and has Arabic with English translation in the first page (first page from a right-to-left script viewpoint). Similar arrangements are found in the passports of some other Arab countries. Iraqi passports are in Arabic, Kurdish and English. Macau SAR passports are in three languages: Chinese (in traditional Chinese characters), Portuguese and English. New Zealand passports are in English and te reo Māori. Norwegian passports are in the two forms of the Norwegian language, Bokmål and Nynorsk, Northern Sami and English. Sri Lankan passports are in Sinhala, Tamil and English. Swiss passports are in five languages: German, French, Italian, Romansh and English. Limitations on use A passport is merely an identity document that is widely recognised for international travel purposes, and the possession of a passport does not in itself entitle a traveller to enter any country other than the country that issued it, and sometimes not even then, as with holders of the British Overseas citizen passport. Many countries normally require visitors to obtain a visa. Each country has different requirements or conditions for the grant of visas, such as for the visitor not being likely to become a public charge for financial, health, family, or other reasons, and the holder not having been convicted of a crime or considered likely to commit one. Where a country does not recognise another, or is in dispute with it, entry may be prohibited to holders of passports of the other party to the dispute, and sometimes to others who have, for example, visited the other country; examples are listed below. A country that issues a passport may also restrict its validity or use in specified circumstances, such as use for travel to certain countries for political, security, or health reasons. Many nations implement border controls restricting the entry of people of certain nationalities or who have visited certain countries. For instance, Georgia refuses entry to holders of passports issued by the Republic of China. Similarly, since April 2017, nationals of Bangladesh, Pakistan, Sudan, Syria, Yemen, and Iran have been banned from entering the parts of eastern Libya under the control of the Tobruk government. The Pakistani passports explicitly mention that these passports are valid in all countries except Israel. The majority of Arab countries, as well as Iran and Malaysia, ban Israeli citizens; however, exceptional entry to Malaysia is possible with approval from the Ministry of Home Affairs. Certain countries may also restrict entry to those with Israeli stamps or visas in their passports. As a result of tension over the former Republic of Artsakh dispute, Azerbaijan currently forbids entry to Armenian citizens as well as to individuals with proof of travel to Artsakh. 
Between September 2017 and January 2021, the United States of America did not issue new visas to nationals of Iran, North Korea, Libya, Somalia, Syria, or Yemen pursuant to restrictions imposed by the Trump administration, which were subsequently repealed by the Biden administration on 20 January 2021. While in force, the restrictions were conditional and could be lifted if the countries affected met the required security standards specified by the Trump administration, and dual citizens of these countries could still enter if they presented a passport from a non-designated country. Value One method of ranking the value of a passport is to calculate its mobility score (MS). The mobility score of a passport is the number of countries that allow the holder of that passport to enter for general tourism visa-free, or with a visa on arrival, an eTA, or an eVisa issued within three days. As of 2023, the strongest passport in the world by this measure is the Singaporean passport. Another way to measure the value of a passport is to count the number of countries in which it allows its holders to live and work. By this measure, the Irish passport would be the most powerful, because it allows the holder to live in all European Union/European Economic Area countries, as well as Switzerland and the United Kingdom; the Irish passport is now the only European Union passport that still gives its holders the right to live and work in the United Kingdom. Passport issuance volumes
Technology
Basics_11
null
23291
https://en.wikipedia.org/wiki/Pliocene
Pliocene
The Pliocene ( ; also Pleiocene) is the epoch in the geologic time scale that extends from 5.33 to 2.58 million years ago (Ma). It is the second and most recent epoch of the Neogene Period in the Cenozoic Era. The Pliocene follows the Miocene Epoch and is followed by the Pleistocene Epoch. Prior to the 2009 revision of the geologic time scale, which placed the four most recent major glaciations entirely within the Pleistocene, the Pliocene also included the Gelasian Stage, which lasted from 2.59 to 1.81 Ma, and is now included in the Pleistocene. As with other older geologic periods, the geological strata that define the start and end are well-identified but the exact dates of the start and end of the epoch are slightly uncertain. The boundaries defining the Pliocene are not set at an easily identified worldwide event but rather at regional boundaries between the warmer Miocene and the relatively cooler Pleistocene. The upper boundary was set at the start of the Pleistocene glaciations. Etymology Charles Lyell (later Sir Charles) gave the Pliocene its name in Principles of Geology (volume 3, 1833). The word pliocene comes from the Greek words (, "more") and (, "new" or "recent") and means roughly "continuation of the recent", referring to the essentially modern marine mollusc fauna. Subdivisions In the official timescale of the ICS, the Pliocene is subdivided into two stages. From youngest to oldest they are: Piacenzian (3.60–2.58 Ma) Zanclean (5.33–3.60 Ma) The Piacenzian is sometimes referred to as the Late Pliocene, whereas the Zanclean is referred to as the Early Pliocene. In the system of North American Land Mammal Ages (NALMA), the Pliocene stages include the Hemphillian (9–4.75 Ma) and the Blancan (4.75–1.6 Ma). The Blancan extends forward into the Pleistocene. South American Land Mammal Ages (SALMA) include Montehermosan (6.8–4.0 Ma), Chapadmalalan (4.0–3.0 Ma) and Uquian (3.0–1.2 Ma). In the Paratethys area (central Europe and parts of western Asia) the Pliocene contains the Dacian (roughly equal to the Zanclean) and Romanian (roughly equal to the Piacenzian and Gelasian together) stages. As usual in stratigraphy, there are many other regional and local subdivisions in use. In Britain, the Pliocene is divided into the following stages (old to young): Gedgravian, Waltonian, Pre-Ludhamian, Ludhamian, Thurnian, Bramertonian or Antian, Pre-Pastonian or Baventian, Pastonian and Beestonian. In the Netherlands the Pliocene is divided into these stages (old to young): Brunssumian C, Reuverian A, Reuverian B, Reuverian C, Praetiglian, Tiglian A, Tiglian B, Tiglian C1-4b, Tiglian C4c, Tiglian C5, Tiglian C6 and Eburonian. The exact correlations between these local stages and the International Commission on Stratigraphy (ICS) stages have not been established. Climate During the Pliocene epoch (5.3 to 2.6 million years ago (Ma)), the Earth's climate became cooler and drier, as well as more seasonal, marking a transition from the relatively warm Miocene to the cooler Pleistocene. However, the beginning of the Pliocene was marked by an increase in global temperatures relative to the cooler Messinian. This increase was related to the 1.2 million year obliquity amplitude modulation cycle. By 3.3-3.0 Ma, during the Mid-Piacenzian Warm Period (mPWP), global average temperature was 2–3 °C higher than today, while carbon dioxide levels were the same as today (400 ppm). Global sea level was about 25 m higher, though its exact value is uncertain. 
The northern hemisphere ice sheet was ephemeral before the onset of extensive glaciation over Greenland that occurred in the late Pliocene around 3 Ma. The formation of an Arctic ice cap is signaled by an abrupt shift in oxygen isotope ratios and ice-rafted cobbles in the North Atlantic and North Pacific Ocean beds. Mid-latitude glaciation was probably underway before the end of the epoch. The global cooling that occurred during the Pliocene may have accelerated the disappearance of forests and the spread of grasslands and savannas. During the Pliocene, the Earth's climate system shifted from a period of high-frequency, low-amplitude oscillation dominated by the 41,000-year period of Earth's obliquity to one of low-frequency, high-amplitude oscillation dominated by the 100,000-year period of orbital eccentricity characteristic of the Pleistocene glacial-interglacial cycles. During the late Pliocene and early Pleistocene, 3.6 to 2.6 Ma, the Arctic was much warmer than it is at the present day (with summer temperatures some 8 °C warmer than today). That is a key finding of research into a lake-sediment core obtained in Eastern Siberia, which is of exceptional importance because it has provided the longest continuous late Cenozoic land-based sedimentary record thus far. During the late Zanclean, Italy remained relatively warm and humid. Central Asia became more seasonal during the Pliocene, with colder, drier winters and wetter summers, which contributed to an increase in the abundance of plants across the region. In the Loess Plateau, δ13C values of occluded organic matter increased by 2.5% while those of pedogenic carbonate increased by 5% over the course of the Late Miocene and Pliocene, indicating increased aridification. Further aridification of Central Asia was caused by the development of Northern Hemisphere glaciation during the Late Pliocene. A sediment core from the northern South China Sea shows an increase in dust storm activity during the middle Pliocene. The South Asian Summer Monsoon (SASM) increased in intensity after 2.95 Ma, likely because of enhanced cross-equatorial pressure caused by the reorganisation of the Indonesian Throughflow. In the south-central Andes, an arid period occurred from 6.1 to 5.2 Ma, with another occurring from 3.6 to 3.3 Ma. These arid periods are coincident with global cold periods, during which the position of the Southern Hemisphere westerlies shifted northward and disrupted the South American Low Level Jet, which brings moisture to southeastern South America. From around 3.8 Ma to about 3.3 Ma, North Africa experienced an extended humid period. In northwestern Africa, tropical forests extended up to Cape Blanc during the Zanclean until around 3.5 Ma. During the Piacenzian, from about 3.5 to 2.6 Ma, the region was forested at irregular intervals and contained a significant Saharan palaeoriver until 3.35 Ma, when trade winds began to dominate over fluvial transport of pollen. Around 3.26 Ma, a strong aridification event occurred; it was followed by a return to more humid conditions, which was itself followed by another aridification around 2.7 Ma. From 2.6 to 2.4 Ma, vegetation zones began repeatedly shifting latitudinally in response to glacial-interglacial cycles. The climate of eastern Africa was very similar to what it is today. Unexpectedly, the expansion of grasslands in eastern Africa during this epoch appears to have been decoupled from aridification and not caused by it, as evidenced by their asynchrony. 
During the Middle and Late Pliocene, southwestern Australia hosted heathlands, shrublands, and woodlands with a greater species diversity than today. Three different aridification events occurred around 2.90, 2.59, and 2.56 Ma, and may have been linked to the onset of continental glaciation in the Arctic, suggesting that vegetation changes in Australia during the Pliocene behaved similarly to those during the Late Pleistocene and were likely characterised by comparable cycles of aridity and humidity. The equatorial Pacific Ocean sea surface temperature gradient was considerably lower than it is today. Mean sea surface temperatures in the east were substantially warmer than today but similar in the west. This condition has been described as a permanent El Niño state, or "El Padre." Several mechanisms have been proposed for this pattern, including increased tropical cyclone activity. The extent of the West Antarctic Ice Sheet oscillated at the 40 kyr period of Earth's obliquity. Ice sheet collapse occurred when the global average temperature was 3 °C warmer than today and carbon dioxide concentration was at 400 ppmv. This resulted in open waters in the Ross Sea. Global sea-level fluctuation associated with ice-sheet collapse was probably up to 7 meters for the west Antarctic and 3 meters for the east Antarctic. Model simulations are consistent with reconstructed ice-sheet oscillations and suggest a progression from a smaller to a larger West Antarctic ice sheet in the last 5 million years. Intervals of ice sheet collapse were much more common in the early–mid Pliocene (5–3 Ma); after 3 Ma, intervals with modern or glacial ice volume became longer, and collapse occurred only at times when warmer global temperatures coincided with strong austral summer insolation anomalies. Paleogeography Continents continued to drift, moving from positions possibly as far as 250 km from their present locations to positions only 70 km from their current locations. South America became linked to North America through the Isthmus of Panama during the Pliocene, making possible the Great American Interchange and bringing a nearly complete end to South America's distinctive native ungulate fauna, though other South American lineages like its predatory mammals were already extinct by this point and others like xenarthrans continued to do well afterwards. The formation of the Isthmus had major consequences for global temperatures, since warm equatorial ocean currents were cut off and an Atlantic cooling cycle began, with cold Arctic and Antarctic waters decreasing temperatures in the now-separated Atlantic Ocean. Africa's collision with Europe formed the Mediterranean Sea, cutting off the remnants of the Tethys Ocean. The border between the Miocene and the Pliocene is also the time of the Messinian salinity crisis. During the Late Pliocene, the Himalayas became less active in their uplift, as evidenced by sedimentation changes in the Bengal Fan. The land bridge between Alaska and Siberia (Beringia) was first flooded near the start of the Pliocene, allowing marine organisms to spread between the Arctic and Pacific Oceans. The bridge would continue to be periodically flooded and restored thereafter. Pliocene marine formations are exposed in northeast Spain, southern California, New Zealand, and Italy. During the Pliocene parts of southern Norway and southern Sweden that had been near sea level rose. In Norway this rise elevated the Hardangervidda plateau to 1200 m in the Early Pliocene. 
In southern Sweden, similar movements elevated the South Swedish highlands, leading to a deflection of the ancient Eridanos river from its original path across south-central Sweden into a course south of Sweden. Environment and evolution of human ancestors The Pliocene is bookended by two significant events in the evolution of human ancestors. The first is the appearance of the hominin Australopithecus anamensis in the early Pliocene, around 4.2 million years ago. The second is the appearance of Homo, the genus that includes modern humans and their closest extinct relatives, near the end of the Pliocene at 2.6 million years ago. Key traits that evolved among hominins during the Pliocene include terrestrial bipedality and, by the end of the Pliocene, encephalized brains (brains with a large neocortex relative to body mass) and stone tool manufacture. Improvements in dating methods and in the use of climate proxies have provided scientists with the means to test hypotheses of the evolution of human ancestors. Early hypotheses of the evolution of human traits emphasized the selective pressures produced by particular habitats. For example, many scientists have long favored the savannah hypothesis. This proposes that the evolution of terrestrial bipedality and other traits was an adaptive response to Pliocene climate change that transformed forests into more open savannah. This was championed by Grafton Elliot Smith in his 1924 book, The Evolution of Man, as "the unknown world beyond the trees", and was further elaborated by Raymond Dart as the killer ape theory. Other scientists, such as Sherwood L. Washburn, emphasized an intrinsic model of hominin evolution. According to this model, early evolutionary developments triggered later developments. The model placed little emphasis on the surrounding environment. Anthropologists tended to focus on intrinsic models while geologists and vertebrate paleontologists tended to put greater emphasis on habitats. Alternatives to the savanna hypothesis include the woodland/forest hypothesis, which emphasizes the evolution of hominins in closed habitats, and hypotheses emphasizing the influence of colder habitats at higher latitudes or the influence of seasonal variation. More recent research has emphasized the variability selection hypothesis, which proposes that variability in climate fostered development of hominin traits. Improved climate proxies show that the Pliocene climate of east Africa was highly variable, suggesting that adaptability to varying conditions was more important in driving hominin evolution than the steady pressure of a particular habitat. Flora The change to a cooler, drier, more seasonal climate had considerable impacts on Pliocene vegetation, reducing tropical species worldwide. Deciduous forests proliferated, coniferous forests and tundra covered much of the north, and grasslands spread on all continents (except Antarctica). Eastern Africa in particular saw a huge expansion of C4 grasslands. Tropical forests were limited to a tight band around the equator, and in addition to dry savannahs, deserts appeared in Asia and Africa. Fauna Both marine and continental faunas were essentially modern, although continental faunas were a bit more primitive than today. The land mass collisions meant great migration and mixing of previously isolated species, such as in the Great American Interchange. Herbivores got bigger, as did specialized predators. 
Image gallery Mammals In North America, rodents, large mastodons and gomphotheres, and opossums continued successfully, while hoofed animals (ungulates) declined, with camel, deer, and horse all seeing populations recede. Three-toed horses (Nannippus), oreodonts, protoceratids, and chalicotheres became extinct. Borophagine dogs and Agriotherium became extinct, but other carnivores including the weasel family diversified, and dogs and short-faced bears did well. Ground sloths, huge glyptodonts, and armadillos came north with the formation of the Isthmus of Panama. The latitudinal diversity gradient among terrestrial North American mammals became established during this epoch some time after 4 Ma. In Eurasia rodents did well, while primate distribution declined. Elephants, gomphotheres and stegodonts were successful in Asia (the largest land mammals of the Pliocene were such proboscideans as Deinotherium, Anancus, and Mammut borsoni,) though proboscidean diversity declined significantly during the Late Pliocene. Hyraxes migrated north from Africa. Horse diversity declined, while tapirs and rhinos did fairly well. Bovines and antelopes were successful; some camel species crossed into Asia from North America. Hyenas and early saber-toothed cats appeared, joining other predators including dogs, bears, and weasels. Africa was dominated by hoofed animals, and primates continued their evolution, with australopithecines (some of the first hominins) and baboon-like monkeys such as the Dinopithecus appearing in the late Pliocene. Rodents were successful, and elephant populations increased. Cows and antelopes continued diversification and overtook pigs in numbers of species. Early giraffes appeared. Horses and modern rhinos came onto the scene. Bears, dogs and weasels (originally from North America) joined cats, hyenas and civets as the African predators, forcing hyenas to adapt as specialized scavengers. Most mustelids in Africa declined as a result of increased competition from the new predators, although Enhydriodon omoensis remained an unusually successful terrestrial predator. South America was invaded by North American species for the first time since the Cretaceous, with North American rodents and primates mixing with southern forms. Litopterns and the notoungulates, South American natives, were mostly wiped out, except for the macrauchenids and toxodonts, which managed to survive. Small weasel-like carnivorous mustelids, coatis and short-faced bears migrated from the north. Grazing glyptodonts, browsing giant ground sloths and smaller caviomorph rodents, pampatheres, and armadillos did the opposite, migrating to the north and thriving there. The marsupials remained the dominant Australian mammals, with herbivore forms including wombats and kangaroos, and the huge Diprotodon. Carnivorous marsupials continued hunting in the Pliocene, including dasyurids, the dog-like thylacine and cat-like Thylacoleo. The first rodents arrived in Australia. The modern platypus, a monotreme, appeared. Birds The predatory South American phorusrhacids were rare in this time; among the last was Titanis, a large phorusrhacid that migrated to North America and rivaled mammals as top predator. Other birds probably evolved at this time, some modern (such as the genera Cygnus, Bubo, Struthio and Corvus), some now extinct. Reptiles and amphibians Alligators and crocodiles died out in Europe as the climate cooled. Venomous snake genera continued to increase as more rodents and birds evolved. 
Rattlesnakes first appeared in the Pliocene. The modern species Alligator mississippiensis, having evolved in the Miocene, continued into the Pliocene, except with a more northern range; specimens have been found in very late Miocene deposits of Tennessee. Giant tortoises still thrived in North America, with genera like Hesperotestudo. Madtsoid snakes were still present in Australia. The amphibian order Allocaudata became extinct. Bivalves In the Western Atlantic, assemblages of bivalves exhibited remarkable stasis with regards to their basal metabolic rates throughout the various climatic changes of the Pliocene. Corals The Pliocene was a high water mark for species diversity among Caribbean corals. From 5 to 2 Ma, coral species origination rates were relatively high in the Caribbean, although a noticeable extinction event and drop in diversity occurred at the end of this interval. Oceans Oceans continued to be relatively warm during the Pliocene, though they continued cooling. The Arctic ice cap formed, drying the climate and increasing cool shallow currents in the North Atlantic. Deep cold currents flowed from the Antarctic. The formation of the Isthmus of Panama about 3.5 million years ago cut off the final remnant of what was once essentially a circum-equatorial current that had existed since the Cretaceous and the early Cenozoic. This may have contributed to further cooling of the oceans worldwide. The Pliocene seas were alive with sea cows, seals, sea lions, sharks and whales.
Physical sciences
Geological timescale
Earth science
23295
https://en.wikipedia.org/wiki/Printing%20press
Printing press
A printing press is a mechanical device for applying pressure to an inked surface resting upon a print medium (such as paper or cloth), thereby transferring the ink. It marked a dramatic improvement on earlier printing methods in which the cloth, paper, or other medium was brushed or rubbed repeatedly to achieve the transfer of ink and accelerated the process. Typically used for texts, the invention and global spread of the printing press was one of the most influential events in the second millennium. In Germany, around 1440, the goldsmith Johannes Gutenberg invented the movable-type printing press, which started the Printing Revolution. Modelled on the design of existing screw presses, a single Renaissance movable-type printing press could produce up to 3,600 pages per workday, compared to forty by hand-printing and a few by hand-copying. Gutenberg's newly devised hand mould made possible the precise and rapid creation of metal movable type in large quantities. His two inventions, the hand mould and the movable-type printing press, together drastically reduced the cost of printing books and other documents in Europe, particularly for shorter print runs. From Mainz, the movable-type printing press spread within several decades to over 200 cities in a dozen European countries. By 1500, printing presses in operation throughout Western Europe had already produced more than 20 million volumes. In the 16th century, with presses spreading further afield, their output rose tenfold to an estimated 150 to 200 million copies. The earliest press in the Western Hemisphere was established by Spaniards in New Spain in 1539, and by the mid-17th century, the first printing presses arrived in British colonial America in response to the increasing demand for Bibles and other religious literature. The operation of a press became synonymous with the enterprise of printing and lent its name to a new medium of expression and communication, "the press". The spread of mechanical movable type printing in Europe in the Renaissance introduced the era of mass communication, which permanently altered the structure of society. The relatively unrestricted circulation of information and (revolutionary) ideas transcended borders, captured the masses in the Reformation, and threatened the power of political and religious authorities. The sharp increase in literacy broke the monopoly of the literate elite on education and learning and bolstered the emerging middle class. Across Europe, the increasing cultural self-awareness of its peoples led to the rise of proto-nationalism and accelerated the development of European vernaculars, to the detriment of Latin's status as lingua franca. In the 19th century, the replacement of the hand-operated Gutenberg-style press by steam-powered rotary presses allowed printing on an industrial scale. History Economic conditions and intellectual climate The rapid economic and socio-cultural development of late medieval society in Europe created favorable intellectual and technological conditions for Gutenberg's improved version of the printing press: the entrepreneurial spirit of emerging capitalism increasingly made its impact on medieval modes of production, fostering economic thinking and improving the efficiency of traditional work processes. The sharp rise of medieval learning and literacy amongst the middle class led to an increased demand for books which the time-consuming hand-copying method fell far short of accommodating. 
Technological factors Technologies preceding the press that led to the press's invention included: manufacturing of paper, development of ink, woodblock printing, and invention of eyeglasses. At the same time, a number of medieval products and technological processes had reached a level of maturity which allowed their potential use for printing purposes. Gutenberg took up these far-flung strands, combined them into one complete and functioning system, and perfected the printing process through all its stages by adding a number of inventions and innovations of his own: The screw press which allowed direct pressure to be applied on a flat plane was already of great antiquity in Gutenberg's time and was used for a wide range of tasks. Introduced in the 1st century AD by the Romans, it was commonly employed in agricultural production for pressing grapes for wine and olives for oil, both of which formed an integral part of the Mediterranean and medieval diet. The device was also used from very early on in urban contexts as a cloth press for printing patterns. Gutenberg may have also been inspired by the paper presses which had spread through the German lands since the late 14th century and which worked on the same mechanical principles. During the Islamic Golden Age, Arab Muslims were printing texts, including passages from the Qur’an, embracing the Chinese craft of paper making, developed it and adopted it widely in the Muslim world, which led to a major increase in the production of manuscript texts. In Egypt during the Fatimid era, the printing technique was adopted reproducing texts on paper strips by hand and supplying them in various copies to meet the demand. Gutenberg adopted the basic design, thereby mechanizing the printing process. Printing, however, put a demand on the machine quite different from pressing. Gutenberg adapted the construction so that the pressing power exerted by the platen on the paper was now applied both evenly and with the required sudden elasticity. To speed up the printing process, he introduced a movable undertable with a plane surface on which the sheets could be swiftly changed. The concept of movable type existed prior to 15th century Europe; sporadic evidence that the typographical principle, the idea of creating a text by reusing individual characters, was known and had been cropping up since the 12th century and possibly before (the oldest known application dating back as far as the Phaistos disc). The first movable type was invented by Chinese engineer Bi Sheng in the 11th century during the Song dynasty, and a book dating to 1193 recorded the first copper movable type. This received limited use compared to woodblock printing. The technology spread outside China, as the oldest printed book using metal movable type was the Jikji, printed in Korea in 1377 during the Goryeo era. Other notable examples include the Prüfening inscription from Germany, letter tiles from England and Altarpiece of Pellegrino II in Italy. However, the various techniques employed (imprinting, punching and assembling individual letters) did not have the refinement and efficiency needed to become widely accepted. Tsuen-Hsuin and Needham, and Briggs and Burke suggest that the movable-type printing in China and Korea was rarely employed. Gutenberg greatly improved the process by treating typesetting and printing as two separate work steps. A goldsmith by profession, he created his type pieces from a lead-based alloy which suited printing purposes so well that it is still used today. 
The mass production of metal letters was achieved by his key invention of a special hand mould, the matrix. The Latin alphabet proved to be an enormous advantage in the process because, in contrast to logographic writing systems, it allowed the type-setter to represent any text with a theoretical minimum of only around two dozen different letters. Another factor conducive to printing arose from the book existing in the format of the codex, which had originated in the Roman period. Considered the most important advance in the history of the book prior to printing itself, the codex had completely replaced the ancient scroll at the onset of the Middle Ages (AD500). The codex holds considerable practical advantages over the scroll format: it is more convenient to read (by turning pages), more compact, and less costly, and both recto and verso sides could be used for writing or printing, unlike the scroll. A fourth development was the early success of medieval papermakers at mechanizing paper manufacture. The introduction of water-powered paper mills, the first certain evidence of which dates to 1282, allowed for a massive expansion of production and replaced the laborious handcraft characteristic of both Chinese and Muslim papermaking. Papermaking centres began to multiply in the late 13th century in Italy, reducing the price of paper to one-sixth of parchment and then falling further. Papermaking centers reached Germany a century later. Despite this it appears that the final breakthrough of paper depended just as much on the rapid spread of movable-type printing. Codices of parchment, which in terms of quality is superior to any other writing material, still had a substantial share in Gutenberg's edition of the 42-line Bible. After much experimentation, Gutenberg managed to overcome the difficulties which traditional water-based inks caused by soaking the paper, and found the formula for an oil-based ink suitable for high-quality printing with metal type. Function and approach A printing press, in its classical form, is a standing mechanism, ranging from long, wide, and tall. The small individual metal letters known as type would be set up by a compositor into the desired lines of text. Several lines of text would be arranged at once and were placed in a wooden frame known as a galley. Once the correct number of pages were composed, the galleys would be laid face up in a frame, also known as a forme, which itself is placed onto a flat stone, 'bed,' or 'coffin.' The text is inked using two balls, pads mounted on handles. The balls were made of dog skin leather, because it has no pores, and stuffed with sheep's wool and were inked. This ink was then applied to the text evenly. One damp piece of paper was then taken from a heap of paper and placed on the tympan. The paper was damp as this lets the type 'bite' into the paper better. Small pins hold the paper in place. The paper is now held between a frisket and tympan (two frames covered with paper or parchment). These are folded down, so that the paper lies on the surface of the inked type. The bed is rolled under the platen, using a windlass mechanism. A small rotating handle called the 'rounce' is used to do this, and the impression is made with a screw that transmits pressure through the platen. To turn the screw the long handle attached to it is turned. This is known as the bar or 'Devil's Tail.' 
In a well-set-up press, the springiness of the paper, frisket, and tympan caused the bar to spring back and raise the platen, the windlass turned again to move the bed back to its original position, the tympan and frisket raised and opened, and the printed sheet removed. Such presses were always worked by hand. After around 1800, iron presses were developed, some of which could be operated by steam power. The function of the printing press was described by William Skeen in 1872: Gutenberg's press Johannes Gutenberg's work on the printing press began in approximately 1436 when he partnered with Andreas Dritzehn—a man who had previously instructed in gem-cutting—and Andreas Heilmann, owner of a paper mill. However, it was not until a 1439 lawsuit against Gutenberg that an official record existed; witnesses' testimony discussed Gutenberg's types, an inventory of metals (including lead), and his type molds. Having previously worked as a professional goldsmith, Gutenberg made skillful use of the knowledge of metals he had learned as a craftsman. He was the first to make type from an alloy of lead, tin, and antimony, which was critical for producing durable type that produced high-quality printed books and proved to be much better suited for printing than all other known materials. To create these lead types, Gutenberg used what is considered one of his most ingenious inventions, a special matrix enabling the quick and precise molding of new type blocks from a uniform template. His type case is estimated to have contained around 290 separate letter boxes, most of which were required for special characters, ligatures, punctuation marks, and so forth. Gutenberg is also credited with the introduction of an oil-based ink which was more durable than the previously used water-based inks. As printing material he used both paper and vellum (high-quality parchment). In the Gutenberg Bible, Gutenberg made a trial of colour printing for a few of the page headings, present only in some copies. A later work, the Mainz Psalter of 1453, presumably designed by Gutenberg but published under the imprint of his successors Johann Fust and Peter Schöffer, had elaborate red and blue printed initials. The Printing Revolution The Printing Revolution occurred when the spread of the printing press facilitated the wide circulation of information and ideas, acting as an "agent of change" through the societies that it reached. Demand for bibles and other religious literature was one of the main drivers of the very rapid initial expansion of printing. Much later, printed literature played a major role in rallying support, and opposition, during the lead-up to the English Civil War, and later still the American and French Revolutions through newspapers, pamphlets and bulletins. The advent of the printing press brought with it issues involving censorship and freedom of the press. Mass production and spread of printed books The invention of mechanical movable type printing led to a huge increase of printing activities across Europe within only a few decades. From a single print shop in Mainz, Germany, printing had spread to no less than around 270 cities in Central, Western and Eastern Europe by the end of the 15th century. As early as 1480, there were printers active in 110 different places in Germany, Italy, France, Spain, the Netherlands, Belgium, Switzerland, England, Bohemia and Poland. From that time on, it is assumed that "the printed book was in universal use in Europe". 
In Italy, a center of early printing, print shops had been established in 77 cities and towns by 1500. By the end of the following century, 151 locations in Italy had at one time or another seen printing activity, with a total of nearly three thousand printers known to have been active. Despite this proliferation, printing centres soon emerged; thus, one third of the Italian printers published in Venice. By 1500, the printing presses in operation throughout Western Europe had already produced more than twenty million copies. In the following century, their output rose tenfold to an estimated 150 to 200 million copies. European printing presses of around 1600 were capable of producing between 1,500 and 3,600 impressions per workday. By comparison, Far Eastern printing, where the back of the paper was manually rubbed to the page, did not exceed an output of forty pages per day. Of Erasmus's work, at least 750,000 copies were sold during his lifetime alone (1469–1536). In the early days of the Reformation, the revolutionary potential of bulk printing took princes and papacy alike by surprise. In the period from 1518 to 1524, the publication of books in Germany alone skyrocketed sevenfold; between 1518 and 1520, Luther's tracts were distributed in 300,000 printed copies. The rapidity of typographical text production, as well as the sharp fall in unit costs, led to the issuing of the first newspapers (see Relation), which opened up an entirely new field for conveying up-to-date information to the public. Incunabula are surviving pre-16th-century print works, which are collected by many of the libraries in Europe and North America. Circulation of information and ideas The printing press was also a factor in the establishment of a community of scientists who could easily communicate their discoveries through the establishment of widely disseminated scholarly journals, helping to bring on the Scientific Revolution. Because of the printing press, authorship became more meaningful and profitable. It was suddenly important who had said or written what, and what the precise formulation and time of composition was. This allowed the exact citing of references, producing the rule, "One Author, one work (title), one piece of information" (Giesecke, 1989; 325). Before, the author was less important, since a copy of Aristotle made in Paris would not be exactly identical to one made in Bologna. For many works prior to the printing press, the name of the author has been entirely lost. Because the printing process ensured that the same information fell on the same pages, page numbering, tables of contents, and indices became common, though they previously had not been unknown. The process of reading also changed, gradually moving over several centuries from oral readings to silent, private reading. Over the next 200 years, the wider availability of printed materials led to a dramatic rise in the adult literacy rate throughout Europe. The printing press was an important step towards the democratization of knowledge. Within 50 or 60 years of the invention of the printing press, the entire classical canon had been reprinted and widely promulgated throughout Europe (Eisenstein, 1969; 52). More people had access to knowledge both new and old, and more people could discuss these works. Book production became more commercialised, and the first copyright laws were passed. On the other hand, the printing press was criticized for allowing the dissemination of information that may have been incorrect. 
A second outgrowth of this popularization of knowledge was the decline of Latin as the language of most published works, to be replaced by the vernacular language of each area, increasing the variety of published works. The printed word also helped to unify and standardize the spelling and syntax of these vernaculars, in effect 'decreasing' their variability. This rise in importance of national languages as opposed to pan-European Latin is cited as one of the causes of the rise of nationalism in Europe. A third consequence of popularization of printing was on the economy. The printing press was associated with higher levels of city growth. The publication of trade-related manuals and books teaching techniques like double-entry bookkeeping increased the reliability of trade and led to the decline of merchant guilds and the rise of individual traders. Industrial printing presses At the dawn of the Industrial Revolution, the mechanics of the hand-operated Gutenberg-style press were still essentially unchanged, although new materials in its construction, amongst other innovations, had gradually improved its printing efficiency. By 1800, Lord Stanhope had built a press completely from cast iron which reduced the force required by 90%, while doubling the size of the printed area. With a capacity of 480 pages per hour, the Stanhope press doubled the output of the old style press. Nonetheless, the limitations inherent to the traditional method of printing became obvious. Two ideas altered the design of the printing press radically: First, the use of steam power for running the machinery, and second the replacement of the printing flatbed with the rotary motion of cylinders. Both elements were for the first time successfully implemented by the German printer Friedrich Koenig in a series of press designs devised between 1802 and 1818. Having moved to London in 1804, Koenig soon met Thomas Bensley and secured financial support for his project in 1807. Patented in 1810, Koenig had designed a steam press "much like a hand press connected to a steam engine." In April 1811, the first production trial of this model occurred. He produced his machine with assistance from German engineer Andreas Friedrich Bauer. In 1814, Koenig and Bauer sold two of their first models to The Times in London, capable of 1,100 impressions per hour. The first edition so printed was on 28 November 1814. They improved the early model so that it could print on both sides of a sheet at once. This began the long process of making newspapers available to a mass audience, which helped spread literacy. From the 1820s it changed the nature of book production, forcing a greater standardization in titles and other metadata. Their company Koenig & Bauer AG is still one of the world's largest manufacturers of printing presses today. Rotary press The steam-powered rotary printing press, invented in 1843 in the United States by Richard M. Hoe, ultimately allowed millions of copies of a page in a single day. Mass production of printed works flourished after the transition to rolled paper, as continuous feed allowed the presses to run at a much faster pace. Hoe's original design operated at up to 2,000 revolutions per hour where each revolution deposited 4 page images, giving the press a throughput of 8,000 pages per hour. By 1891, The New York World and Philadelphia Item were operating presses producing either 90,000 4-page sheets per hour or 48,000 8-page sheets. 
In the middle of the 19th century, there was a separate development of jobbing presses, small presses capable of printing small-format pieces such as billheads, letterheads, business cards, and envelopes. Jobbing presses were capable of quick setup (the average setup time for a small job was under 15 minutes) and quick production. Even on treadle-powered jobbing presses it was considered normal to get 1,000 impressions per hour [iph] with one pressman, with speeds of 1,500 iph often attained on simple envelope work. Job printing emerged as a reasonably cost-effective duplicating solution for commerce at this time. Printing capacity The table lists the maximum number of pages which the various press designs could print per hour. Gallery
Technology
Media and communication
null
23305
https://en.wikipedia.org/wiki/POSIX
POSIX
The Portable Operating System Interface (POSIX; ) is a family of standards specified by the IEEE Computer Society for maintaining compatibility between operating systems. POSIX defines application programming interfaces (APIs), along with command line shells and utility interfaces, for software compatibility (portability) with variants of Unix and other operating systems. POSIX is also a trademark of the IEEE. POSIX is intended to be used by both application and system developers. Name Originally, the name "POSIX" referred to IEEE Std 1003.1-1988, released in 1988. The family of POSIX standards is formally designated as IEEE 1003 and the ISO/IEC standard number is ISO/IEC 9945. The standards emerged from a project that began in 1984, building on work from related activity in the /usr/group association. Richard Stallman suggested the name POSIX to the IEEE instead of the former IEEE-IX. The committee found it more easily pronounceable and memorable, and thus adopted it. Overview Unix was selected as the basis for a standard system interface partly because it was "manufacturer-neutral". However, several major versions of Unix existed—so there was a need to develop a common-denominator system. The POSIX specifications for Unix-like operating systems originally consisted of a single document for the core programming interface, but eventually grew to 19 separate documents (POSIX.1, POSIX.2, etc.). The standardized user command line and scripting interface were based on the UNIX System V shell. Many user-level programs, services, and utilities (including awk, echo, ed) were also standardized, along with required program-level services (including basic I/O: file, terminal, and network). POSIX also defines a standard threading library API which is supported by most modern operating systems. In 2008, most parts of POSIX were combined into a single standard (IEEE Std 1003.1-2008, also known as POSIX.1-2008). The POSIX documentation is divided into two parts: POSIX.1, 2013 Edition: POSIX Base Definitions, System Interfaces, and Commands and Utilities (which include POSIX.1, extensions for POSIX.1, Real-time Services, Threads Interface, Real-time Extensions, Security Interface, Network File Access and Network Process-to-Process Communications, User Portability Extensions, Corrections and Extensions, Protection and Control Utilities and Batch System Utilities. This is POSIX 1003.1-2008 with Technical Corrigendum 1.) POSIX Conformance Testing: A test suite for POSIX accompanies the standard: VSX-PCTS or the VSX POSIX Conformance Test Suite. The development of the POSIX standard takes place in the Austin Group (a joint working group among the IEEE, The Open Group, and the ISO/IEC JTC 1/SC 22/WG 15). 
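The standard threading interface mentioned above can be illustrated with a short C program. The following is only a minimal sketch (the worker function, thread count, and suggested compile command are illustrative, not taken from the standard's text):

    /* Minimal sketch of the POSIX threads (pthreads) API.
       Compile on a POSIX system with, for example: cc demo.c -o demo -lpthread */
    #include <pthread.h>
    #include <stdio.h>

    /* Each worker thread prints the integer identifier it was handed. */
    static void *worker(void *arg)
    {
        int id = *(int *)arg;
        printf("thread %d running\n", id);
        return NULL;
    }

    int main(void)
    {
        pthread_t threads[4];
        int ids[4];

        /* pthread_create and pthread_join come from the threads extension
           (IEEE Std 1003.1c-1995, later folded into POSIX.1). */
        for (int i = 0; i < 4; i++) {
            ids[i] = i;
            pthread_create(&threads[i], NULL, worker, &ids[i]);
        }
        for (int i = 0; i < 4; i++)
            pthread_join(threads[i], NULL);
        return 0;
    }

Because the interface is specified by the standard rather than by a single vendor, the same source should compile unchanged on the certified and mostly compliant systems listed later in this article.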
Versions Parts before 1997 Before 1997, POSIX comprised several standards: POSIX.1: Core Services (incorporates Standard ANSI C) (IEEE Std 1003.1-1988) Process Creation and Control Signals Floating Point Exceptions Segmentation / Memory Violations Illegal Instructions Bus Errors Timers File and Directory Operations Pipes C Library (Standard C) The POSIX terminal interface POSIX.1b: Real-time extensions (IEEE Std 1003.1b-1993, later appearing as librt—the Realtime Extensions library) Priority Scheduling Real-Time Signals Clocks and Timers Semaphores Message Passing Shared Memory Asynchronous and Synchronous I/O Memory Locking Interface POSIX.1c: Threads extensions (IEEE Std 1003.1c-1995) Thread Creation, Control, and Cleanup Thread Scheduling Thread Synchronization Signal Handling POSIX.2: Shell and Utilities (IEEE Std 1003.2-1992) Command Interpreter Utility Programs Versions after 1997 After 1997, the Austin Group developed the POSIX revisions. The specifications are published under the name Single UNIX Specification before they become POSIX standards upon formal approval by the ISO. POSIX.1-2001 (with two TCs) POSIX.1-2001 (or IEEE Std 1003.1-2001) equates to the Single UNIX Specification, version 3 minus X/Open Curses. This standard consisted of: the Base Definitions, Issue 6, the System Interfaces and Headers, Issue 6, the Commands and Utilities, Issue 6. IEEE Std 1003.1-2004 involved a minor update of POSIX.1-2001. It incorporated two minor updates or errata referred to as Technical Corrigenda (TCs). Its contents are available on the web. POSIX.1-2008 (with two TCs) Base Specifications, Issue 7 (or IEEE Std 1003.1-2008, 2016 Edition). This standard consists of: the Base Definitions, Issue 7, the System Interfaces and Headers, Issue 7, the Commands and Utilities, Issue 7, the Rationale volume. POSIX.1-2017 IEEE Std 1003.1-2017 (Revision of IEEE Std 1003.1-2008) - IEEE Standard for Information Technology—Portable Operating System Interface (POSIX(R)) Base Specifications, Issue 7 is available from either The Open Group or IEEE. It is technically identical to POSIX.1-2008 with Technical Corrigenda 1 and 2 applied. Its contents are available on the web. POSIX.1-2024 IEEE Std 1003.1-2024 - IEEE Standard for Information Technology—Portable Operating System Interface (POSIX(R)) Base Specifications, Issue 8 was published on 14 June 2024. Its contents are available on the web. Controversies 512- vs 1024-byte blocks POSIX mandates 512-byte default block sizes for the df and du utilities, reflecting the typical size of blocks on disks. When Richard Stallman and the GNU team were implementing POSIX for the GNU operating system, they objected to this on the grounds that most people think in terms of 1024-byte (or 1 KiB) blocks. An environment variable was introduced to allow the user to force the standards-compliant behaviour; the variable was later renamed. This variable is now also used for a number of other behaviour quirks. POSIX-oriented operating systems Depending upon the degree of compliance with the standards, one can classify operating systems as fully or partly POSIX compatible. POSIX-certified Current versions of the following operating systems have been certified to conform to one or more of the various POSIX standards. This means that they passed the automated conformance tests, their certification has not expired, and the operating system has not been discontinued. 
AIX HP-UX INTEGRITY macOS (since Mac OS X Leopard) OpenServer UnixWare VxWorks z/OS Formerly POSIX-certified Some versions of the following operating systems had been certified to conform to one or more of the various POSIX standards. This means that they passed the automated conformance tests. The certification has expired and some of the operating systems have been discontinued. EulerOS (exp. 2022) Inspur K-UX (exp. 2019) IRIX (defunct 2006) OS/390 (defunct 2004) QNX Neutrino Solaris (exp. 2019) Tru64 (defunct 2010) LiteOS (defunct 2020) Mostly POSIX-compliant The following are not certified as POSIX compliant yet comply in large part: Android (Available through Android NDK) Darwin (core of macOS and iOS) DragonFly BSD FreeBSD Haiku illumos Linux (most distributions) LynxOS Minix (now Minix 3) MPE/iX NetBSD Nucleus RTOS NuttX OpenBSD OpenSolaris PikeOS RTOS for embedded systems with optional PSE51 and PSE52 partitions; see partition (mainframe) PX5 RTOS Redox RTEMS – POSIX API support designed to IEEE Std. 1003.13-2003 PSE52 SerenityOS Stratus OpenVOS SkyOS Syllable ULTRIX VSTa VMware ESXi Xenix Zephyr POSIX for Microsoft Windows Cygwin provides a largely POSIX-compliant development and run-time environment for Microsoft Windows. MinGW, a fork of Cygwin, provides a less POSIX-compliant development environment and supports compatible C-programmed applications via Msvcrt, Microsoft's old Visual C runtime library. libunistd, a largely POSIX-compliant development library originally created to build the Linux-based C/C++ source code of CinePaint as is in Microsoft Visual Studio. A lightweight implementation that has POSIX-compatible header files that map POSIX APIs to call their Windows API counterparts. Microsoft POSIX subsystem, an optional Windows subsystem included in Windows NT-based operating systems up to Windows 2000. It supported POSIX.1 as it stood in the 1990 revision, without threads or sockets. Interix, originally OpenNT by Softway Systems, Inc., is an upgrade and replacement for Microsoft POSIX subsystem that was purchased by Microsoft in 1999. It was initially marketed as a stand-alone add-on product and then later included as a component in Windows Services for UNIX (SFU) and finally incorporated as a component in Windows Server 2003 R2 and later Windows OS releases under the name "Subsystem for UNIX-based Applications" (SUA); later made deprecated in 2012 (Windows 8) and dropped in 2013 (2012 R2, 8.1). It enables full POSIX compliance for certain Microsoft Windows products. Windows Subsystem for Linux, also known as WSL, is a compatibility layer for running Linux binary executables natively on Windows 10 and 11 using a Linux image such as Ubuntu, Debian, or OpenSUSE among others, acting as an upgrade and replacement for Windows Services for UNIX. It was released in beta in April 2016. The first distribution available was Ubuntu. UWIN from AT&T Research implements a POSIX layer on top of the Win32 APIs. MKS Toolkit, originally created for MS-DOS, is a software package produced and maintained by MKS Inc. that provides a Unix-like environment for scripting, connectivity and porting Unix and Linux software to both 32- and 64-bit Microsoft Windows systems. A subset of it was included in the first release of Windows Services for UNIX (SFU) in 1998. 
Windows C Runtime Library and Windows Sockets API implement commonly used POSIX API functions for file, time, environment, and socket access, although the support remains largely incomplete and not fully interoperable with POSIX-compliant implementations. POSIX for OS/2 Mostly POSIX compliant environments for OS/2: emx+gcc – largely POSIX compliant POSIX for DOS Partially POSIX compliant environments for DOS include: emx+gcc – largely POSIX compliant DJGPP – partially POSIX compliant DR-DOS multitasking core via – a POSIX threads frontend API extension is available Compliant via compatibility layer The following are not officially certified as POSIX compatible, but they conform in large part to the standards by implementing POSIX support via some sort of compatibility feature (usually translation libraries, or a layer atop the kernel). Without these features, they are usually non-compliant. AmigaOS (through the ixemul library or vbcc_PosixLib) eCos – POSIX is part of the standard distribution, and used by many applications. 'external links' section below has more information. IBM i (through the PASE compatibility layer) MorphOS (through the built-in ixemul library) OpenVMS (through optional POSIX package) Plan 9 from Bell Labs APE - ANSI/POSIX Environment RIOT (through optional POSIX module) Symbian OS with PIPS (PIPS Is POSIX on Symbian) VAXELN (partial support of 1003.1 and 1003.4 through the VAXELN POSIX runtime library) Windows NT kernel when using Microsoft SFU 3.5 or SUA Windows 2000 Server or Professional with Service Pack 3 or later. To be POSIX compliant, one must activate optional features of Windows NT and Windows 2000 Server. Windows XP Professional with Service Pack 1 or later Windows Server 2003 Windows Server 2008 and Ultimate and Enterprise versions of Windows Vista Windows Server 2008 R2 and Ultimate and Enterprise versions of Windows 7 albeit deprecated, still available for Windows Server 2012 and Enterprise version of Windows 8
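The interface families listed in the version history above are all C-level APIs. As a rough, self-contained sketch (not an excerpt from the standard), the following program touches several of them: write() and sysconf() from POSIX.1, the thread creation and joining calls from the POSIX.1c threads extension, and the POSIXLY_CORRECT environment variable from the block-size controversy. The file name and build command are hypothetical.

```c
#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

static void *worker(void *arg)
{
    const char *msg = arg;                 /* message handed over by main() */
    /* write() on STDOUT_FILENO is a POSIX.1 file-descriptor interface. */
    (void)write(STDOUT_FILENO, msg, strlen(msg));
    return NULL;
}

int main(void)
{
    pthread_t tid;

    /* pthread_create()/pthread_join() belong to the POSIX.1c threads extension. */
    if (pthread_create(&tid, NULL, worker, "hello from a POSIX thread\n") != 0) {
        perror("pthread_create");
        return 1;
    }
    pthread_join(tid, NULL);

    /* sysconf() reports which revision of POSIX.1 the C library claims to support. */
    printf("_SC_VERSION: %ld\n", sysconf(_SC_VERSION));

    /* GNU utilities honour POSIXLY_CORRECT (see the block-size controversy above). */
    const char *pc = getenv("POSIXLY_CORRECT");
    printf("POSIXLY_CORRECT is %s\n", pc ? pc : "unset");
    return 0;
}
```

On a conforming system this would typically be built with something like cc demo.c -o demo -lpthread; sysconf(_SC_VERSION) then reports the revision year of the standard the implementation targets (200809, for example, on a POSIX.1-2008 system).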
Technology
Computer software
null
23310
https://en.wikipedia.org/wiki/Pleistocene
Pleistocene
The Pleistocene (referred to colloquially as the Ice Age) is the geological epoch that lasted from about 2.58 million to 11,700 years ago, spanning the Earth's most recent period of repeated glaciations. Before a change was finally confirmed in 2009 by the International Union of Geological Sciences, the cutoff of the Pleistocene and the preceding Pliocene was regarded as being 1.806 million years Before Present (BP). Publications from earlier years may use either definition of the period. The end of the Pleistocene corresponds with the end of the last glacial period and also with the end of the Paleolithic age used in archaeology. The name is a combination of Ancient Greek πλεῖστος (pleīstos) 'most' and καινός (kainós; Latinized as cænus) 'new'. At the end of the preceding Pliocene, the previously isolated North and South American continents were joined by the Isthmus of Panama, causing a faunal interchange between the two regions and changing ocean circulation patterns, with the onset of glaciation in the Northern Hemisphere occurring around 2.7 million years ago. During the Early Pleistocene (2.58–0.8 Ma), archaic humans of the genus Homo originated in Africa and spread throughout Afro-Eurasia. The end of the Early Pleistocene is marked by the Mid-Pleistocene Transition, with the cyclicity of glacial cycles changing from 41,000-year cycles to asymmetric 100,000-year cycles, making the climate variation more extreme. The Late Pleistocene witnessed the spread of modern humans outside of Africa as well as the extinction of all other human species. Humans also spread to the Australian continent and the Americas for the first time, coincident with the extinction of most large-bodied animals in these regions. The aridification and cooling trends of the preceding Neogene were continued in the Pleistocene. The climate was strongly variable depending on the glacial cycle, with sea levels up to about 120 metres lower than present at peak glaciation, allowing the connection of Asia and North America via Beringia and the covering of most of northern North America by the Laurentide Ice Sheet. Etymology Charles Lyell introduced the term "Pleistocene" in 1839 to describe strata in Sicily that had at least 70% of their molluscan fauna still living today. This distinguished it from the older Pliocene Epoch, which Lyell had originally thought to be the youngest fossil rock layer. He constructed the name "Pleistocene" ('most new' or 'newest') from the Greek πλεῖστος (pleīstos) 'most' and καινός (kainós; Latinized as cænus) 'new'. This contrasts with the immediately preceding Pliocene ("newer", from πλείων (pleíōn, "more") and kainós) and the immediately subsequent Holocene ("wholly new" or "entirely new", from ὅλος (hólos, "whole") and kainós) epoch, which extends to the present time. Dating The Pleistocene has been dated from 2.580 million (±0.005) to 11,700 years BP with the end date expressed in radiocarbon years as 10,000 carbon-14 years BP. It covers most of the latest period of repeated glaciation, up to and including the Younger Dryas cold spell. The end of the Younger Dryas has been dated to about 9700 BCE (11,700 calendar years BP). The end of the Younger Dryas is the official start of the current Holocene Epoch. Although it is considered an epoch, the Holocene is not significantly different from previous interglacial intervals within the Pleistocene. 
In the ICS timescale, the Pleistocene is divided into four stages or ages, the Gelasian, Calabrian, Chibanian (previously the unofficial "Middle Pleistocene"), and Upper Pleistocene (unofficially the "Tarantian"). In addition to these international subdivisions, various regional subdivisions are often used. In 2009 the International Union of Geological Sciences (IUGS) confirmed a change in time period for the Pleistocene, changing the start date from 1.806 to 2.588 million years BP, and accepted the base of the Gelasian as the base of the Pleistocene, namely the base of the Monte San Nicola GSSP. The start date has now been rounded down to 2.580 million years BP. The IUGS has yet to approve a type section, Global Boundary Stratotype Section and Point (GSSP), for the upper Pleistocene/Holocene boundary (i.e. the upper boundary). The proposed section is the North Greenland Ice Core Project ice core 75° 06' N 42° 18' W. The lower boundary of the Pleistocene Series is formally defined magnetostratigraphically as the base of the Matuyama (C2r) chronozone, isotopic stage 103. Above this point there are notable extinctions of the calcareous nannofossils: Discoaster pentaradiatus and Discoaster surculus. The Pleistocene covers the recent period of repeated glaciations. The name Plio-Pleistocene has, in the past, been used to mean the last ice age. Formerly, the boundary between the two epochs was drawn at the time when the foraminiferal species Hyalinea baltica first appeared in the marine section at La Castella, Calabria, Italy. However, the revised definition of the Quaternary, by pushing back the start date of the Pleistocene to 2.58 Ma, results in the inclusion of all the recent repeated glaciations within the Pleistocene. Radiocarbon dating is considered to be inaccurate beyond around 50,000 years ago. Marine isotope stages (MIS) derived from Oxygen isotopes are often used for giving approximate dates. Deposits Pleistocene non-marine sediments are found primarily in fluvial deposits, lakebeds, slope and loess deposits as well as in the large amounts of material moved about by glaciers. Less common are cave deposits, travertines and volcanic deposits (lavas, ashes). Pleistocene marine deposits are found primarily in shallow marine basins mostly (but with important exceptions) in areas within a few tens of kilometres of the modern shoreline. In a few geologically active areas such as the Southern California coast, Pleistocene marine deposits may be found at elevations of several hundred metres. Paleogeography and climate The modern continents were essentially at their present positions during the Pleistocene, the plates upon which they sit probably having moved no more than relative to each other since the beginning of the period. In glacial periods, the sea level would drop by up to lower than today during peak glaciation, exposing large areas of the present continental shelf as dry land. According to Mark Lynas (through collected data), the Pleistocene's overall climate could be characterised as a continuous El Niño with trade winds in the south Pacific weakening or heading east, warm air rising near Peru, warm water spreading from the west Pacific and the Indian Ocean to the east Pacific, and other El Niño markers. Glacial features Pleistocene climate was marked by repeated glacial cycles in which continental glaciers pushed to the 40th parallel in some places. It is estimated that, at maximum glacial extent, 30% of the Earth's surface was covered by ice. 
In addition, a zone of permafrost stretched southward from the edge of the glacial sheet, a few hundred kilometres in North America, and several hundred in Eurasia. The mean annual temperature at the edge of the ice was ; at the edge of the permafrost, . Each glacial advance tied up huge volumes of water in continental ice sheets thick, resulting in temporary sea-level drops of or more over the entire surface of the Earth. During interglacial times, such as at present, drowned coastlines were common, mitigated by isostatic or other emergent motion of some regions. The effects of glaciation were global. Antarctica was ice-bound throughout the Pleistocene as well as the preceding Pliocene. The Andes were covered in the south by the Patagonian ice cap. There were glaciers in New Zealand and Tasmania. The current decaying glaciers of Mount Kenya, Mount Kilimanjaro, and the Ruwenzori Range in east and central Africa were larger. Glaciers existed in the mountains of Ethiopia and to the west in the Atlas Mountains. In the northern hemisphere, many glaciers fused into one. The Cordilleran Ice Sheet covered the North American northwest; the east was covered by the Laurentide. The Fenno-Scandian ice sheet rested on northern Europe, including much of Great Britain; the Alpine ice sheet on the Alps. Scattered domes stretched across Siberia and the Arctic shelf. The northern seas were ice-covered. South of the ice sheets large lakes accumulated because outlets were blocked and the cooler air slowed evaporation. When the Laurentide Ice Sheet retreated, north-central North America was completely covered by Lake Agassiz. Over a hundred basins, now dry or nearly so, were overflowing in the North American west. Lake Bonneville, for example, stood where Great Salt Lake now does. In Eurasia, large lakes developed as a result of the runoff from the glaciers. Rivers were larger, had a more copious flow, and were braided. African lakes were fuller, apparently from decreased evaporation. Deserts, on the other hand, were drier and more extensive. Rainfall was lower because of the decreases in oceanic and other evaporation. It has been estimated that during the Pleistocene, the East Antarctic Ice Sheet thinned by at least 500 meters, and that thinning since the Last Glacial Maximum is less than 50 meters and probably started after ca 14 ka. Major events During the 2.5 million years of the Pleistocene, numerous cold phases called glacials (Quaternary ice age), or significant advances of continental ice sheets, in Europe and North America, occurred at intervals of approximately 40,000 to 100,000 years. The long glacial periods were separated by more temperate and shorter interglacials which lasted about 10,000–15,000 years. The last cold episode of the last glacial period ended about 10,000 years ago. Over 11 major glacial events have been identified, as well as many minor glacial events. A major glacial event is a general glacial excursion, termed a "glacial." Glacials are separated by "interglacials". During a glacial, the glacier experiences minor advances and retreats. The minor excursion is a "stadial"; times between stadials are "interstadials". These events are defined differently in different regions of the glacial range, which have their own glacial history depending on latitude, terrain and climate. There is a general correspondence between glacials in different regions. Investigators often interchange the names if the glacial geology of a region is in the process of being defined. 
However, it is generally incorrect to apply the name of a glacial in one region to another. For most of the 20th century, only a few regions had been studied and the names were relatively few. Today the geologists of different nations are taking more of an interest in Pleistocene glaciology. As a consequence, the number of names is expanding rapidly and will continue to expand. Many of the advances and stadials remain unnamed. Also, the terrestrial evidence for some of them has been erased or obscured by larger ones, but evidence remains from the study of cyclical climate changes. The glacials in the following tables show historical usages, are a simplification of a much more complex cycle of variation in climate and terrain, and are generally no longer used. The headings "Glacial 1" to "Glacial 4" are designations indicating the four most recent glacials, with "Glacial 4" being the most recent. These names have been abandoned in favour of numeric data because many of the correlations were found to be either inexact or incorrect and more than four major glacials have been recognised since the historical terminology was established. Corresponding to the terms glacial and interglacial, the terms pluvial and interpluvial are in use (Latin: pluvia, rain). A pluvial is a warmer period of increased rainfall; an interpluvial is of decreased rainfall. Formerly a pluvial was thought to correspond to a glacial in regions not iced, and in some cases it does. Rainfall is cyclical also. Pluvials and interpluvials are widespread. There is no systematic correspondence between pluvials to glacials, however. Moreover, regional pluvials do not correspond to each other globally. For example, some have used the term "Riss pluvial" in Egyptian contexts. Any coincidence is an accident of regional factors. Only a few of the names for pluvials in restricted regions have been stratigraphically defined. Palaeocycles The sum of transient factors acting at the Earth's surface is cyclical: climate, ocean currents and other movements, wind currents, temperature, etc. The waveform response comes from the underlying cyclical motions of the planet, which eventually drag all the transients into harmony with them. The repeated glaciations of the Pleistocene were caused by the same factors. The Mid-Pleistocene Transition, approximately one million years ago, saw a change from low-amplitude glacial cycles with a dominant periodicity of 41,000 years to asymmetric high-amplitude cycles dominated by a periodicity of 100,000 years. However, a 2020 study concluded that ice age terminations might have been influenced by obliquity since the Mid-Pleistocene Transition, which caused stronger summers in the Northern Hemisphere. Milankovitch cycles Glaciation in the Pleistocene was a series of glacials and interglacials, stadials and interstadials, mirroring periodic climate changes. The main factor at work in climate cycling is now believed to be Milankovitch cycles. These are periodic variations in regional and planetary solar radiation reaching the Earth caused by several repeating changes in the Earth's motion. The effects of Milankovitch cycles were enhanced by various positive feedbacks related to increases in atmospheric carbon dioxide concentrations and Earth's albedo. 
Milankovitch cycles cannot be the sole factor responsible for the variations in climate since they explain neither the long-term cooling trend over the Plio-Pleistocene nor the millennial variations in the Greenland Ice Cores known as Dansgaard-Oeschger events and Heinrich events. Milankovitch pacing seems to best explain glaciation events with periodicity of 100,000, 40,000, and 20,000 years. Such a pattern seems to fit the information on climate change found in oxygen isotope cores. Oxygen isotope ratio cycles In oxygen isotope ratio analysis, variations in the ratio of ¹⁸O to ¹⁶O (two isotopes of oxygen) by mass (measured by a mass spectrometer) present in the calcite of oceanic core samples are used as a diagnostic of ancient ocean temperature change and therefore of climate change. Cold oceans are richer in ¹⁸O, which is included in the tests of the microorganisms (foraminifera) contributing the calcite. A more recent version of the sampling process makes use of modern glacial ice cores. Although less rich in ¹⁸O than seawater, the snow that fell on the glacier year by year nevertheless contained ¹⁸O and ¹⁶O in a ratio that depended on the mean annual temperature. Temperature and climate change are cyclical when plotted on a graph of temperature versus time. Temperature coordinates are given in the form of a deviation from today's annual mean temperature, taken as zero. This sort of graph is based on another isotope ratio versus time. Ratios are converted to a percentage difference from the ratio found in standard mean ocean water (SMOW). The graph in either form appears as a waveform with overtones. One half of a period is a marine isotope stage (MIS). It indicates a glacial (below zero) or an interglacial (above zero). Overtones are stadials or interstadials. According to this evidence, Earth experienced 102 MIS stages beginning at about 2.588 Ma BP in the Early Pleistocene Gelasian. Early Pleistocene stages were shallow and frequent. The latest were the most intense and most widely spaced. By convention, stages are numbered from the Holocene, which is MIS1. Glacials receive an even number and interglacials receive an odd number. The first major glacial was MIS2-4 at about 85–11 ka BP. The largest glacials were 2, 6, 12, and 16. The warmest interglacials were 1, 5, 9 and 11. For matching of MIS numbers to named stages, see under the articles for those names. Fauna Both marine and continental faunas were essentially modern but with many more large land mammals such as Mammoths, Mastodons, Diprotodons, Smilodons, tigers, lions, Aurochs, short-faced bears, giant sloths, species within Gigantopithecus and others. Isolated landmasses such as Australia, Madagascar, New Zealand and islands in the Pacific saw the evolution of large birds and even reptiles such as the Elephant bird, moa, Haast's eagle, Quinkana, Megalania and Meiolania. The severe climatic changes during the Ice Age had major impacts on the fauna and flora. With each advance of the ice, large areas of the continents became depopulated, and plants and animals retreating southwards in front of the advancing glacier faced tremendous stress. The most severe stress resulted from drastic climatic changes, reduced living space, and curtailed food supply. 
A major extinction event of large mammals (megafauna), which included mammoths, mastodons, saber-toothed cats, glyptodons, the woolly rhinoceros, various giraffids, such as the Sivatherium; ground sloths, Irish elk, cave lions, cave bears, Gomphotheres, American lions, dire wolves, and short-faced bears, began late in the Pleistocene and continued into the Holocene. Neanderthals also became extinct during this period. At the end of the last ice age, cold-blooded animals, smaller mammals like wood mice, migratory birds, and swifter animals like whitetail deer had replaced the megafauna and migrated north. Late Pleistocene bighorn sheep were more slender and had longer legs than their descendants today. Scientists believe that the change in predator fauna after the late Pleistocene extinctions resulted in a change of body shape as the species adapted for increased power rather than speed. The extinctions hardly affected Africa but were especially severe in North America where native horses and camels were wiped out. Asian land mammal ages (ALMA) include Zhoukoudianian, Nihewanian, and Yushean. European land mammal ages (ELMA) include the Villafranchian, Galerian, and Aurelian North American land mammal ages (NALMA) include Blancan (4.75–1.8), Irvingtonian (1.8–0.24) and Rancholabrean (0.24–0.01) in millions of years. The Blancan extends significantly back into the Pliocene. South American land mammal ages (SALMA) include Uquian (2.5–1.5), Ensenadan (1.5–0.3) and Lujanian (0.3–0.01) in millions of years. The Uquian previously extended significantly back into the Pliocene, although the new definition places it entirely within the Pleistocene. In July 2018, a team of Russian scientists in collaboration with Princeton University announced that they had brought two female nematodes frozen in permafrost, from around 42,000 years ago, back to life. The two nematodes, at the time, were the oldest confirmed living animals on the planet. Humans The evolution of anatomically modern humans took place during the Pleistocene. At the beginning of the Pleistocene Paranthropus species were still present, as well as early human ancestors, but during the lower Palaeolithic they disappeared, and the only hominin species found in fossilic records is Homo erectus for much of the Pleistocene. Acheulean lithics appear along with Homo erectus, some 1.8 million years ago, replacing the more primitive Oldowan industry used by A. garhi and by the earliest species of Homo. The Middle Paleolithic saw more varied speciation within Homo, including the appearance of Homo sapiens about 300,000 years ago. Artifacts associated with modern human behavior are unambiguously attested starting 40,000–50,000 years ago. According to mitochondrial timing techniques, modern humans migrated from Africa after the Riss glaciation in the Middle Palaeolithic during the Eemian Stage, spreading all over the ice-free world during the late Pleistocene. A 2005 study posits that humans in this migration interbred with archaic human forms already outside of Africa by the late Pleistocene, incorporating archaic human genetic material into the modern human gene pool.
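The conversion described in the oxygen-isotope section above is conventionally written as a δ¹⁸O value: the per-mil (parts-per-thousand) deviation of a sample's ¹⁸O/¹⁶O ratio from the ratio in standard mean ocean water (SMOW). A sketch of the standard notation (the article itself quotes no numerical values) is:

```latex
% Per-mil deviation of a sample's 18O/16O ratio from standard mean ocean water (SMOW)
\delta^{18}\mathrm{O} \;=\;
\left( \frac{\bigl({}^{18}\mathrm{O}/{}^{16}\mathrm{O}\bigr)_{\text{sample}}}
            {\bigl({}^{18}\mathrm{O}/{}^{16}\mathrm{O}\bigr)_{\text{SMOW}}} - 1 \right) \times 1000
```

Because water containing the lighter ¹⁶O evaporates preferentially and is locked up in ice sheets during a glacial, calcite deposited in cold oceans records a higher δ¹⁸O; it is this alternation in the record that defines the numbered marine isotope stages discussed above.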
Physical sciences
Geological timescale
Earth science
23311
https://en.wikipedia.org/wiki/Pasteurization
Pasteurization
In food processing, pasteurization (also pasteurisation) is a process of food preservation in which packaged foods (e.g., milk and fruit juices) are treated with mild heat, usually to less than , to eliminate pathogens and extend shelf life. Pasteurization either destroys or deactivates microorganisms and enzymes that contribute to food spoilage or the risk of disease, including vegetative bacteria, but most bacterial spores survive the process. Pasteurization is named after the French microbiologist Louis Pasteur, whose research in the 1860s demonstrated that thermal processing would deactivate unwanted microorganisms in wine. Spoilage enzymes are also inactivated during pasteurization. Today, pasteurization is used widely in the dairy industry and other food processing industries for food preservation and food safety. By the year 1999, most liquid products were heat treated in a continuous system where heat is applied using a heat exchanger or the direct or indirect use of hot water and steam. Due to the mild heat, there are minor changes to the nutritional quality and sensory characteristics of the treated foods. Pascalization or high pressure processing (HPP) and pulsed electric field (PEF) are non-thermal processes that are also used to pasteurize foods. History Heating wine for preservation has been known in China since AD 1117, and was documented in Japan in the diary Tamonin-nikki written by a series of monks between 1478 and 1618. In 1768, research performed by the Italian priest and scientist Lazzaro Spallanzani proved that a product could be made "sterile" after thermal processing. Spallanzani boiled meat broth for one hour, sealed the container immediately after boiling, and noticed that the broth did not spoil and was free from microorganisms. In 1795, a Parisian chef and confectioner named Nicolas Appert began experimenting with ways to preserve foodstuffs, succeeding with soups, vegetables, juices, dairy products, jellies, jams, and syrups. He placed the food in glass jars, sealed them with cork and sealing wax and placed them in boiling water. In that same year, the French military offered a cash prize of 12,000 francs for a new method to preserve food. After some 14 or 15 years of experimenting, Appert submitted his invention and won the prize in January 1810. Later that year, Appert published L'Art de conserver les substances animales et végétales ("The Art of Preserving Animal and Vegetable Substances"). This was the first cookbook on modern food preservation methods. La Maison Appert , in the town of Massy, near Paris, became the first food-bottling factory in the world, preserving a variety of foods in sealed bottles. Appert's filled thick, large-mouthed glass bottles with produce of every description, ranging from beef and fowl to eggs, milk and prepared dishes. He left air space at the top of the bottle, and the cork would then be sealed firmly in the jar by using a vise. The bottle was then wrapped in canvas to protect it while it was dunked into boiling water and then boiled for as much time as Appert deemed appropriate for cooking the contents thoroughly. Appert patented his method, sometimes called appertisation in his honor. Appert's method was so simple and workable that it quickly became widespread. In 1810, the British inventor and merchant Peter Durand, also of French origin, patented his own method, but this time in a tin can, so creating the modern-day process of canning foods. 
In 1812, the Englishmen Bryan Donkin and John Hall purchased both patents and began producing preserves. Just a decade later, Appert's method of canning had made its way to America. Tin can production was not common until the beginning of the 20th century, partly because a hammer and chisel were needed to open cans until the invention of a can opener by Robert Yeates in 1855. A less aggressive method was developed by French chemist Louis Pasteur during an 1864 summer holiday in Arbois. To remedy the frequent acidity of the local aged wines, he found out experimentally that it is sufficient to heat a young wine to only about for a short time to kill the microbes, and that the wine could subsequently be aged without sacrificing the final quality. In honor of Pasteur, this process is known as pasteurization. Pasteurization was originally used as a way of preventing wine and beer from souring, and it would be many years before milk was pasteurized. In the United States in the 1870s, before milk was regulated, it was common for milk to contain substances intended to mask spoilage. Milk Milk is an excellent medium for microbial growth, and when it is stored at ambient temperature, bacteria and other pathogens soon proliferate. The US Centers for Disease Control (CDC) says improperly handled raw milk is responsible for nearly three times more hospitalizations than any other food-borne disease source, making it one of the world's most dangerous food products. Diseases prevented by pasteurization can include tuberculosis, brucellosis, diphtheria, scarlet fever, and Q-fever; it also kills the harmful bacteria Salmonella, Listeria, Yersinia, Campylobacter, Staphylococcus aureus, and Escherichia coli O157:H7, among others. Prior to industrialization, dairy cows were kept in urban areas to limit the time between milk production and consumption, hence the risk of disease transmission via raw milk was reduced. As urban densities increased and supply chains lengthened to the distance from country to city, raw milk (often days old) became recognized as a source of disease. For example, between 1912 and 1937, some 65,000 people died of tuberculosis contracted from consuming milk in England and Wales alone. Because tuberculosis has a long incubation period in humans, it was difficult to link unpasteurized milk consumption with the disease. In 1892, chemist Ernst Lederle experimentally inoculated milk from tuberculosis-diseased cows into guinea pigs, which caused them to develop the disease. In 1910, Lederle, then in the role of Commissioner of Health, introduced mandatory pasteurization of milk in New York City. Developed countries adopted milk pasteurization to prevent such disease and loss of life, and as a result milk is now considered a safer food. A traditional form of pasteurization by scalding and straining of cream to increase the keeping qualities of butter was practiced in Great Britain in the 18th century and was introduced to Boston in the British Colonies by 1773, although it was not widely practiced in the United States for the next 20 years. Pasteurization of milk was suggested by Franz von Soxhlet in 1886. In the early 20th century, Milton Joseph Rosenau established the standards – i.e. low-temperature, slow heating at for 20 minutes – for the pasteurization of milk while at the United States Marine Hospital Service, notably in his publication of The Milk Question (1912). States in the U.S. soon began enacting mandatory dairy pasteurization laws, with the first in 1947, and in 1973 the U.S. 
federal government required pasteurization of milk used in any interstate commerce. The shelf life of refrigerated pasteurized milk is greater than that of raw milk. For example, high-temperature, short-time (HTST) pasteurized milk typically has a refrigerated shelf life of two to three weeks, whereas ultra-pasteurized milk can last much longer, sometimes two to three months. When ultra-heat treatment (UHT) is combined with sterile handling and container technology (such as aseptic packaging), it can even be stored non-refrigerated for up to 9 months. According to the Centers for Disease Control, between 1998 and 2011, 79% of dairy-related disease outbreaks in the United States were due to raw milk or cheese products. They report 148 outbreaks and 2,384 illnesses (with 284 requiring hospitalization), as well as two deaths due to raw milk or cheese products during the same time period. Medical equipment Medical equipment, notably respiratory and anesthesia equipment, is often disinfected using hot water, as an alternative to chemical disinfection. The temperature is raised to 70 °C (158 °F) for 30 minutes. Pasteurization process Pasteurization is a mild heat treatment of liquid foods (both packaged and unpackaged) where products are typically heated to below . The heat treatment and cooling process are designed to inhibit a phase change of the product. The acidity of the food determines the parameters (time and temperature) of the heat treatment as well as the duration of shelf life. Parameters also take into account nutritional and sensory qualities that are sensitive to heat. In acidic foods (with pH of 4.6 or less), such as fruit juice and beer, the heat treatments are designed to inactivate enzymes (pectin methylesterase and polygalacturonase in fruit juices) and destroy spoilage microbes (yeast and lactobacillus). Due to the low pH of acidic foods, pathogens are unable to grow. The shelf-life is thereby extended several weeks. In less acidic foods (with pH greater than 4.6), such as milk and liquid eggs, the heat treatments are designed to destroy pathogens and spoilage organisms (yeast and molds). Not all spoilage organisms are destroyed under pasteurization parameters, so subsequent refrigeration is necessary. High-temperature short-time (HTST) pasteurization, such as that used for milk ( for 15 seconds) ensures safety of milk and provides a refrigerated shelf life of approximately two weeks. In ultra-high-temperature (UHT) pasteurization, milk is pasteurized at for 1–2 seconds, which provides the same level of safety, but along with the packaging, extends shelf life to three months under refrigeration. Equipment Food can be pasteurized either before or after being packaged into containers. Pasteurization of food in containers generally uses either steam or hot water. When food is packaged in glass, hot water is used to avoid cracking the glass from thermal shock. When plastic or metal packaging is used, the risk of thermal shock is low, so steam or hot water is used. Most liquid foods are pasteurized by using a continuous process that passes the food through a heating zone, a hold tube to keep it at the pasteurization temperature for the desired time, and a cooling zone, after which the product is filled into the package. Plate heat exchangers are often used for low-viscosity products such as animal milks, nut milks and juices. A plate heat exchanger is composed of many thin vertical stainless steel plates that separate the liquid from the heating or cooling medium. 
Shell and tube heat exchangers are often used for the pasteurization of foods that are non-Newtonian fluids, such as dairy products, tomato ketchup and baby foods. A tube heat exchanger is made up of concentric stainless steel tubes. Food passes through the inner tube or tubes, while the heating/cooling medium is circulated through the outer tube. Scraped-surface heat exchangers are a type of shell and tube which contain an inner rotating shaft having spring-loaded blades that serve to scrape away any highly viscous material that accumulates on the wall of the tube. The benefits of using a heat exchanger to pasteurize foods before packaging, versus pasteurizing foods in containers are: Higher uniformity of treatment Greater flexibility with regard to the products that can be pasteurized Higher heat transfer-efficiency Greater throughput After being heated in a heat exchanger, the product flows through a hold tube for a set period of time to achieve the required treatment. If pasteurization temperature or time is not achieved, a flow diversion valve is used to divert under-processed product back to the raw product tank. If the product is adequately processed, it is cooled in a heat exchanger, then filled. Verification Direct microbiological techniques are the ultimate measurement of pathogen contamination, but these are costly and time-consuming, which means that products have a reduced shelf-life by the time pasteurization is verified. As a result of the unsuitability of microbiological techniques, milk pasteurization efficacy is typically monitored by checking for the presence of alkaline phosphatase, which is denatured by pasteurization. Destruction of alkaline phosphatase ensures the destruction of common milk pathogens. Therefore, the presence of alkaline phosphatase is an ideal indicator of pasteurization efficacy. For liquid eggs, the effectiveness of the heat treatment is measured by the residual activity of α-amylase. Efficacy against pathogenic bacteria During the early 20th century, there was no robust knowledge of what time and temperature combinations would inactivate pathogenic bacteria in milk, and so a number of different pasteurization standards were in use. By 1943, both HTST pasteurization conditions of for 15 seconds, as well as batch pasteurization conditions of for 30 minutes, were confirmed by studies of the complete thermal death (as best as could be measured at that time) for a range of pathogenic bacteria in milk. Complete inactivation of Coxiella burnetii (which was thought at the time to cause Q fever by oral ingestion of infected milk) as well as of Mycobacterium tuberculosis (which causes tuberculosis) were later demonstrated. For all practical purposes, these conditions were adequate for destroying almost all yeasts, molds, and common spoilage bacteria and also for ensuring adequate destruction of common pathogenic, heat-resistant organisms. However, the microbiological techniques used until the 1960s did not allow for the actual reduction of bacteria to be enumerated. Demonstration of the extent of inactivation of pathogenic bacteria by milk pasteurization came from a study of surviving bacteria in milk that was heat-treated after being deliberately spiked with high levels of the most heat-resistant strains of the most significant milk-borne pathogens. 
The mean log10 reductions and temperatures of inactivation of the major milk-borne pathogens during a 15-second treatment are: Staphylococcus aureus > 6.7 at Yersinia enterocolitica > 6.8 at Pathogenic Escherichia coli > 6.8 at Cronobacter sakazakii > 6.7 at Listeria monocytogenes > 6.9 at Salmonella ser. Typhimurium > 6.9 at (A log10 reduction between 6 and 7 means that 1 bacterium out of 1 million (106) to 10 million (107) bacteria survive the treatment.) The Codex Alimentarius Code of Hygienic Practice for Milk notes that milk pasteurization is designed to achieve at least a 5 log10 reduction of Coxiella burnetii. The Code also notes that: "The minimum pasteurization conditions are those having bactericidal effects equivalent to heating every particle of the milk to for 15 seconds (continuous flow pasteurization) or for 30 minutes (batch pasteurization)” and that "To ensure that each particle is sufficiently heated, the milk flow in heat exchangers should be turbulent, i.e. the Reynolds number should be sufficiently high". The point about turbulent flow is important because simplistic laboratory studies of heat inactivation that use test tubes, without flow, will have less bacterial inactivation than larger-scale experiments that seek to replicate conditions of commercial pasteurization. As a precaution, modern HTST pasteurization processes must be designed with flow-rate restriction as well as divert valves which ensure that the milk is heated evenly and that no part of the milk is subject to a shorter time or a lower temperature. It is common for the temperatures to exceed by . Double pasteurization Pasteurization is not sterilization and does not kill spores. "Double" pasteurization, which involves a secondary heating process, can extend shelf life by killing spores that have germinated. The acceptance of double pasteurization varies by jurisdiction. In places where it is allowed, milk is initially pasteurized when it is collected from the farm so it does not spoil before processing. Many countries prohibit the labelling of such milk as "pasteurized" but allow it to be marked "thermized", which refers to a lower-temperature process. Effects on nutritional and sensory characteristics of foods Because of its mild heat treatment, pasteurization increases the shelf-life by a few days or weeks. However, this mild heat also means there are only minor changes to heat-labile vitamins in the foods. Milk According to a systematic review and meta-analysis, it was found that pasteurization appeared to reduce concentrations of vitamins B12 and E, but it also increased concentrations of vitamin A. However, in the review, there was only limited research regarding how much pasteurization affects A, B12, and E levels. Milk is not considered an important source of vitamins B12 or E in the North American diet, so the effects of pasteurization on the adult daily intake of these vitamins is negligible. However, milk is considered an important source of vitamin A, and because pasteurization appears to increase vitamin A concentrations in milk, the effect of milk heat treatment on this vitamin is a not a major public health concern. Results of meta-analyses reveal that pasteurization of milk leads to a significant decrease in vitamin C and folate, but milk is also not an important source of these vitamins. A significant decrease in vitamin B2 concentrations was found after pasteurization. Vitamin B2 is typically found in bovine milk at concentrations of 1.83 mg/liter. 
Because the recommended daily intake for adults is 1.1 mg/day, milk consumption greatly contributes to the recommended daily intake of this vitamin. With the exception of B2, pasteurization does not appear to be a concern in diminishing the nutritive value of milk because milk is often not a primary source of these studied vitamins in the North American diet. Sensory effects Pasteurization also has a small but measurable effect on the sensory attributes of the foods that are processed. In fruit juices, pasteurization may result in loss of volatile aroma compounds. Fruit juice products undergo a deaeration process prior to pasteurization that may be responsible for this loss. Deaeration also minimizes the loss of nutrients like vitamin C and carotene. To prevent the decrease in quality resulting from the loss in volatile compounds, volatile recovery, though costly, can be utilized to produce higher-quality juice products. In regard to color, the pasteurization process does not have much effect on pigments such as chlorophylls, anthocyanins, and carotenoids in plants and animal tissues. In fruit juices, polyphenol oxidase (PPO) is the main enzyme responsible for causing browning and color changes. However, this enzyme is deactivated in the deaeration step prior to pasteurization with the removal of oxygen. In milk, the color difference between pasteurized and raw milk is related to the homogenization step that takes place prior to pasteurization. Before pasteurization milk is homogenized to emulsify its fat and water-soluble components, which results in the pasteurized milk having a whiter appearance compared to raw milk. For vegetable products, color degradation is dependent on the temperature conditions and the duration of heating. Pasteurization may result in some textural loss as a result of enzymatic and non-enzymatic transformations in the structure of pectin if the processing temperatures are too high as a result. However, with mild heat treatment pasteurization, tissue softening in the vegetables that causes textural loss is not of concern as long as the temperature does not get above . Novel pasteurization methods "Pasteurizing" in the broad sense refers to any method that reduces microbes by an amount (log reduction) equivalent to Pasteur's process. Novel processes, thermal and non-thermal, have been developed to pasteurize foods as a way of reducing the effects on nutritional and sensory characteristics of foods and preventing degradation of heat-labile nutrients. Pascalization or high pressure processing (HPP), pulsed electric field (PEF), ionising radiation, high pressure homogenisation, UV decontamination, pulsed high intensity light, high intensity laser, pulsed white light, high power ultrasound, oscillating magnetic fields, high voltage arc discharge, and streamer plasma are examples of these non-thermal pasteurization methods that are currently commercially utilized. Microwave volumetric heating (MVH) is the newest available pasteurization technology. It uses microwaves to heat liquids, suspensions, or semi-solids in a continuous flow. Because MVH delivers energy evenly and deeply into the whole body of a flowing product, it allows for gentler and shorter heating, so that almost all heat-sensitive substances in the milk are preserved. Products that are commonly pasteurized Beer Canned food Dairy products Eggs Milk Juices Low alcoholic beverages Syrups Vinegar Water Wines
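Two quantities invoked in the efficacy section above — the log10 reduction used to report pathogen kill, and the Reynolds number used to check for turbulent flow in the heat exchanger — can be written out explicitly. The worked survival figure below uses an initial load chosen purely for illustration; it is not taken from the cited studies.

```latex
% Fraction of organisms surviving a treatment that achieves a log10 reduction R
\frac{N}{N_0} = 10^{-R}
\qquad\text{e.g. } R = 5 \;\Rightarrow\; \frac{N}{N_0} = 10^{-5}

% Pipe-flow Reynolds number: density x mean velocity x pipe diameter / dynamic viscosity
\mathrm{Re} = \frac{\rho \, v \, D}{\mu}
```

Thus the minimum 5-log10 reduction of Coxiella burnetii cited from the Codex Alimentarius leaves one organism in 100,000: an assumed initial load of 10⁷ organisms per millilitre would be brought down to roughly 10² per millilitre. For the Reynolds number, pipe flow is commonly treated as turbulent above a few thousand, although the Codex text quoted above says only that the value should be "sufficiently high".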
Technology
Food, water and health
null
23312
https://en.wikipedia.org/wiki/Penicillin
Penicillin
Penicillins (P, PCN or PEN) are a group of β-lactam antibiotics originally obtained from Penicillium moulds, principally P. chrysogenum and P. rubens. Most penicillins in clinical use are synthesised by P. chrysogenum using deep tank fermentation and then purified. A number of natural penicillins have been discovered, but only two purified compounds are in clinical use: penicillin G (intramuscular or intravenous use) and penicillin V (given by mouth). Penicillins were among the first medications to be effective against many bacterial infections caused by staphylococci and streptococci. They are still widely used today for various bacterial infections, though many types of bacteria have developed resistance following extensive use. Ten percent of the population claims penicillin allergies, but because the frequency of positive skin test results decreases by 10% with each year of avoidance, 90% of these patients can eventually tolerate penicillin. Additionally, those with penicillin allergies can usually tolerate cephalosporins (another group of β-lactam) because the immunoglobulin E (IgE) cross-reactivity is only 3%. Penicillin was discovered in 1928 by Scottish scientist Alexander Fleming as a crude extract of P. rubens. Fleming's student Cecil George Paine was the first to successfully use penicillin to treat eye infection (neonatal conjunctivitis) in 1930. The purified compound (penicillin F) was isolated in 1940 by a research team led by Howard Florey and Ernst Boris Chain at the University of Oxford. Fleming first used the purified penicillin to treat streptococcal meningitis in 1942. The 1945 Nobel Prize in Physiology or Medicine was shared by Chain, Fleming, and Florey. Several semisynthetic penicillins are effective against a broader spectrum of bacteria: these include the antistaphylococcal penicillins, aminopenicillins, and antipseudomonal penicillins. Nomenclature The term "penicillin" is defined as the natural product of Penicillium mould with antimicrobial activity. It was coined by Alexander Fleming on 7 March 1929 when he discovered the antibacterial property of Penicillium rubens. Fleming explained in his 1929 paper in the British Journal of Experimental Pathology that "to avoid the repetition of the rather cumbersome phrase 'Mould broth filtrate', the name 'penicillin' will be used." The name thus refers to the scientific name of the mould, as described by Fleming in his Nobel lecture in 1945: I have been frequently asked why I invented the name "Penicillin". I simply followed perfectly orthodox lines and coined a word which explained that the substance penicillin was derived from a plant of the genus Penicillium just as many years ago the word "Digitalin" was invented for a substance derived from the plant Digitalis. In modern usage, the term penicillin is used more broadly to refer to any β-lactam antimicrobial that contains a thiazolidine ring fused to the β-lactam core and may or may not be a natural product. Like most natural products, penicillin is present in Penicillium moulds as a mixture of active constituents (gentamicin is another example of a natural product that is an ill-defined mixture of active components). The principal active components of Penicillium are listed in the following table:Other minor active components of Penicillium include penicillin O, penicillin U1, and penicillin U6. 
Other named constituents of natural Penicillium, such as penicillin A, were subsequently found not to have antibiotic activity and are not chemically related to antibiotic penicillins. The precise constitution of the penicillin extracted depends on the species of Penicillium mould used and on the nutrient media used to culture the mould. Fleming's original strain of Penicillium rubens produces principally penicillin F, named after Fleming. But penicillin F is unstable, difficult to isolate, and produced by the mould in small quantities. The principal commercial strain of Penicillium chrysogenum (the Peoria strain) produces penicillin G as the principal component when corn steep liquor is used as the culture medium. When phenoxyethanol or phenoxyacetic acid are added to the culture medium, the mould produces penicillin V as the main penicillin instead. 6-Aminopenicillanic acid (6-APA) is a compound derived from penicillin G. 6-APA contains the beta-lactam core of penicillin G, but with the side chains stripped off; 6-APA is a useful precursor for manufacturing other penicillins. There are many semi-synthetic penicillins derived from 6-APA and these are in three groups: antistaphylococcal penicillins, broad-spectrum penicillins, and antipseudomonal penicillins. The semi-synthetic penicillins are all referred to as penicillins because they are all derived ultimately from penicillin G. Penicillin units One unit of penicillin G sodium is defined as 0.600 micrograms. Therefore, 2 million units (2 megaunits) of penicillin G is 1.2 g. One unit of penicillin V potassium is defined as 0.625 micrograms. Therefore 400,000 units of penicillin V is 250 mg. The use of units to prescribe penicillin is a historical accident and is largely obsolete outside of the US. Since the original penicillin was an ill-defined mixture of active compounds (an amorphous yellow powder), the potency of each batch of penicillin varied from batch to batch. It was therefore impossible to prescribe 1 g of penicillin because the activity of 1 g of penicillin from one batch would be different from the activity from another batch. After manufacture, each batch of penicillin had to be standardised against a known unit of penicillin: each glass vial was then filled with the number of units required. In the 1940s, a vial of 5,000 Oxford units was standard, but the depending on the batch, could contain anything from 15 mg to 20 mg of penicillin. Later, a vial of 1,000,000 international units became standard, and this could contain 2.5 g to 3 g of natural penicillin (a mixture of penicillin I, II, III, and IV and natural impurities). With the advent of pure penicillin G preparations (a white crystalline powder), there is little reason to prescribe penicillin in units. The "unit" of penicillin has had three previous definitions, and each definition was chosen as being roughly equivalent to the previous one. Oxford or Florey unit (1941). This was originally defined as the minimum amount of penicillin dissolved in 50 ml of meat extract that would inhibit the growth of a standard strain of Staphylococcus aureus (the Oxford Staphylococcus). The reference standard was a large batch of impure penicillin kept in Oxford. The assay was later modified by Florey's group to a more reproducible "cup assay": in this assay, a penicillin solution was defined to contain one unit/ml of penicillin when 339 microlitres of the solution placed in a "cup" on a plate of solid agar produced a 24 millimetre zone of inhibition of growth of Oxford Staphylococcus. 
First International Standard (1944). A single 8 gram batch of pure crystalline penicillin G sodium was stored at The National Institute for Medical Research at Mill Hill in London (the International Standard). One penicillin unit was defined at 0.6 micrograms of the International Standard. An impure "working standard" was also defined and was available in much larger quantities distributed around the world: one unit of the working standard was 2.7 micrograms (the amount per unit was much larger because of the impurities). At the same time, the cup assay was refined, where instead of specifying a zone diameter of 24 mm, the zone size were instead plotted against a reference curve to provide a readout on potency. Second International Standard (1953). A single 30 gram batch of pure crystalline penicillin G sodium was obtained: this was also stored at Mill Hill. One penicillin unit was defined as 0.5988 micrograms of the Second International Standard. There is an older unit for penicillin V that is not equivalent to the current penicillin V unit. The reason is that the US FDA incorrectly assumed that the potency of penicillin V is the same mole-for-mole as penicillin G. In fact, penicillin V is less potent than penicillin G, and the current penicillin V unit reflects that fact. First international unit of penicillin V (1959). One unit of penicillin V was defined as 0.590 micrograms of a reference standard held at Mill Hill in London. This unit is now obsolete. A similar standard was also established for penicillin K. Types Penicillins consist of a distinct 4-membered beta-lactam ring, in addition to a thiazolide ring and an R side chain. The main distinguishing feature between variants within this family is the R substituent. This side chain is connected to the 6-aminopenicillanic acid residue and results in variations in the antimicrobial spectrum, stability, and susceptibility to beta-lactamases of each type. Natural penicillins Penicillin G (benzylpenicillin) was first produced from a penicillium fungus that occurs in nature. The strain of fungus used today for the manufacture of penicillin G was created by genetic engineering to improve the yield in the manufacturing process. None of the other natural penicillins (F, K, N, X, O, U1 or U6) are currently in clinical use. Semi-synthetic penicillin Penicillin V (phenoxymethylpenicillin) is produced by adding the precursor phenoxyacetic acid to the medium in which a genetically modified strain of the penicillium fungus is being cultured. Antibiotics created from 6-APA There are three major groups of other semi-synthetic antibiotics related to the penicillins. They are synthesised by adding various side-chains to the precursor 6-APA, which is isolated from penicillin G. These are the antistaphylococcal antibiotics, broad-spectrum antibiotics, and antipseudomonal antibiotics. Antistaphylococcal antibiotics Cloxacillin (by mouth or by injection) Dicloxacillin (by mouth or by injection) Flucloxacillin (by mouth or by injection) Methicillin (injection only) Nafcillin (injection only) Oxacillin (by mouth or by injection) Antistaphylococcal antibiotics are so-called because they are resistant to being broken down by staphylococcal penicillinase. They are also, therefore, referred to as being penicillinase-resistant. Broad-spectrum antibiotics This group of antibiotics is called "broad-spectrum" because they are active against a wide range of Gram-negative bacteria such as Escherichia coli and Salmonella typhi, for which penicillin is not suitable. 
However, resistance in these organisms is now common. Ampicillin Amoxicillin There are many ampicillin precursors in existence. These are inactive compounds that are broken down in the gut to release ampicillin. None of these pro-drugs of ampicillin are in current use: Pivampicillin (pivaloyloxymethyl ester of ampicillin) Bacampicillin Metampicillin (formaldehyde ester of ampicillin) Talampicillin Hetacillin (ampicillin conjugated to acetone) Epicillin is an aminopenicillin that has never seen widespread clinical use. Antipseudomonal antibiotics The Gram-negative species, Pseudomonas aeruginosa, is naturally resistant to many antibiotic classes. There were many efforts in the 1960s and 1970s to develop antibiotics that are active against Pseudomonas species. There are two chemical classes within the group: carboxypenicillins and ureidopenicillins. All are given by injection: none can be given by mouth. Carboxypenicillins Carbenicillin Ticarcillin Temocillin Ureidopenicillins Mezlocillin Piperacillin Azlocillin β-lactamase inhibitors Clavulanic acid Sulbactam Tazobactam Medical usage The term "penicillin", when used by itself, may refer to either of two chemical compounds, penicillin G or penicillin V. Penicillin G Penicillin G is destroyed by stomach acid, so it cannot be taken by mouth, but doses as high as 2.4 g can be given (much higher than penicillin V). It is given by intravenous or intramuscular injection. It can be formulated as an insoluble salt, and there are two such formulations in current use: procaine penicillin and benzathine benzylpenicillin. When a high concentration in the blood must be maintained, penicillin G must be administered at relatively frequent intervals, because it is eliminated quite rapidly from the bloodstream by the kidney. Penicillin G is licensed for use to treat septicaemia, empyema, pneumonia, pericarditis, endocarditis and meningitis caused by susceptible strains of staphylococci and streptococci. It is also licensed for the treatment of anthrax, actinomycosis, cervicofacial disease, thoracic and abdominal disease, clostridial infections, botulism, gas gangrene (with accompanying debridement and/or surgery as indicated), tetanus (as an adjunctive therapy to human tetanus immune globulin), diphtheria (as an adjunctive therapy to antitoxin and for the prevention of the carrier state), erysipelothrix endocarditis, fusospirochetosis (severe infections of the oropharynx, lower respiratory tract and genital area), Listeria infections, meningitis, endocarditis, Pasteurella infections including bacteraemia and meningitis, Haverhill fever; rat-bite fever and disseminated gonococcal infections, meningococcal meningitis and/or septicaemia caused by penicillin-susceptible organisms and syphilis. Penicillin V Penicillin V can be taken by mouth because it is relatively resistant to stomach acid. Doses higher than 500 mg are not fully effective because of poor absorption. It is used for the same bacterial infections as those of penicillin G and is the most widely used form of penicillin. However, it is not used for diseases, such as endocarditis, where high blood levels of penicillin are required. Bacterial susceptibility Because penicillin resistance is now so common, other antibiotics are now the preferred choice for treatments. For example, penicillin used to be the first-line treatment for infections with Neisseria gonorrhoeae and Neisseria meningitidis, but it is no longer recommended for treatment of these infections. 
Penicillin resistance is now very common in Staphylococcus aureus, which means penicillin should not be used to treat S. aureus infections unless the infecting strain is known to be susceptible.

Side effects
Common (≥ 1% of people) adverse drug reactions associated with use of the penicillins include diarrhoea, hypersensitivity, nausea, rash, neurotoxicity, urticaria, and superinfection (including candidiasis). Infrequent adverse effects (0.1–1% of people) include fever, vomiting, erythema, dermatitis, angioedema, seizures (especially in people with epilepsy), and pseudomembranous colitis. Penicillin can also induce serum sickness or a serum sickness-like reaction in some individuals. Serum sickness is a type III hypersensitivity reaction that occurs one to three weeks after exposure to drugs including penicillin. It is not a true drug allergy, because allergies are type I hypersensitivity reactions, but repeated exposure to the offending agent can result in an anaphylactic reaction. Allergy will occur in 1–10% of people, presenting as a skin rash after exposure. IgE-mediated anaphylaxis will occur in approximately 0.01% of patients. Pain and inflammation at the injection site are also common for parenterally administered benzathine benzylpenicillin, benzylpenicillin, and, to a lesser extent, procaine benzylpenicillin. The condition is known as livedoid dermatitis or Nicolau syndrome.

Structure
The term "penam" is used to describe the common core skeleton of a member of the penicillins. This core has the molecular formula R-C9H11N2O4S, where R is the variable side chain that differentiates the penicillins from one another. The penam core has a molar mass of 243 g/mol, with larger penicillins having molar masses near 450—for example, cloxacillin has a molar mass of 436 g/mol. 6-APA (C8H12N2O3S) forms the basic structure of penicillins. It is made up of an enclosed dipeptide formed by the condensation of L-cysteine and L-valine. This results in the formation of the β-lactam and thiazolidine rings. The key structural feature of the penicillins is the four-membered β-lactam ring; this structural moiety is essential for penicillin's antibacterial activity. The β-lactam ring is itself fused to a five-membered thiazolidine ring. The fusion of these two rings causes the β-lactam ring to be more reactive than monocyclic β-lactams, because the two fused rings distort the β-lactam amide bond and therefore remove the resonance stabilisation normally found in these chemical bonds. An acyl side chain is attached to the β-lactam ring. A variety of β-lactam antibiotics have been produced by chemical modification of the 6-APA structure during synthesis, specifically by making chemical substitutions in the acyl side chain. For example, the first chemically altered penicillin, methicillin, has methoxy groups substituted at positions 2' and 6' of the benzene ring of the acyl side chain, in contrast to penicillin G. This difference makes methicillin resistant to the activity of β-lactamase, an enzyme that makes many bacteria naturally unsusceptible to penicillins.

Pharmacology

Entry into bacteria
Penicillin can easily enter bacterial cells in the case of Gram-positive species. This is because Gram-positive bacteria do not have an outer cell membrane and are simply enclosed in a thick cell wall. Penicillin molecules are small enough to pass through the spaces of glycoproteins in the cell wall.
For this reason Gram-positive bacteria are very susceptible to penicillin (as first evidenced by the discovery of penicillin in 1928). Penicillin, or any other molecule, enters Gram-negative bacteria in a different manner. The bacteria have thinner cell walls, but the external surface is coated with an additional cell membrane, called the outer membrane. The outer membrane is a lipid layer (lipopolysaccharide chain) that blocks passage of water-soluble (hydrophilic) molecules like penicillin. It thus acts as the first line of defence against any toxic substance, which is the reason for the relative resistance of these bacteria to antibiotics compared to Gram-positive species. But penicillin can still enter Gram-negative species by diffusing through aqueous channels called porins (outer membrane proteins), which are dispersed among the fatty molecules and can transport nutrients and antibiotics into the bacteria. Porins are large enough to allow diffusion of most penicillins, but the rate of diffusion through them is determined by the specific size of the drug molecules. For instance, penicillin G is large and enters through porins slowly, while the smaller ampicillin and amoxicillin diffuse much faster. In contrast, the large vancomycin molecule cannot pass through porins and is thus ineffective against Gram-negative bacteria. The size and number of porins differ between bacteria. As a result of these two factors (the size of the penicillin and of the porins), Gram-negative bacteria can be unsusceptible or show varying degrees of susceptibility to a specific penicillin.

Mechanism of action
Penicillin kills bacteria by inhibiting the completion of the synthesis of peptidoglycans, the structural component of the bacterial cell wall. It specifically inhibits the activity of enzymes that are needed for the cross-linking of peptidoglycans during the final step in cell wall biosynthesis. It does this by binding to penicillin-binding proteins with the β-lactam ring, a structure found on penicillin molecules. This causes the cell wall to weaken due to fewer cross-links, and means water uncontrollably flows into the cell because it cannot maintain the correct osmotic gradient. This results in cell lysis and death.

Bacteria constantly remodel their peptidoglycan cell walls, simultaneously building and breaking down portions of the cell wall as they grow and divide. During the last stages of peptidoglycan biosynthesis, uridine diphosphate-N-acetylmuramic acid pentapeptide (UDP-MurNAc) is formed, in which the fourth and fifth amino acids are both D-alanine (a terminal D-alanyl-D-alanine). The transfer of D-alanine is catalysed by the enzyme DD-transpeptidase (penicillin-binding proteins are enzymes of this type). The structural integrity of the bacterial cell wall depends on the cross-linking of UDP-MurNAc and N-acetyl glucosamine. Penicillin and other β-lactam antibiotics act as analogues of D-alanyl-D-alanine (the dipeptide) in UDP-MurNAc owing to conformational similarities. The DD-transpeptidase then binds the four-membered β-lactam ring of penicillin instead of UDP-MurNAc. As a consequence, DD-transpeptidase is inactivated and the formation of cross-links between UDP-MurNAc and N-acetyl glucosamine is blocked, so that an imbalance between cell wall production and degradation develops, causing the cell to die rapidly. The enzymes that hydrolyse the peptidoglycan cross-links continue to function, even while those that form such cross-links do not.
This weakens the cell wall of the bacterium, and osmotic pressure becomes increasingly uncompensated—eventually causing cell death (cytolysis). In addition, the build-up of peptidoglycan precursors triggers the activation of bacterial cell wall hydrolases and autolysins, which further digest the cell wall's peptidoglycans. The small size of the penicillins increases their potency, by allowing them to penetrate the entire depth of the cell wall. This is in contrast to the glycopeptide antibiotics vancomycin and teicoplanin, which are both much larger than the penicillins. Gram-positive bacteria are called protoplasts when they lose their cell walls. Gram-negative bacteria do not lose their cell walls completely and are called spheroplasts after treatment with penicillin.

Penicillin shows a synergistic effect with aminoglycosides, since the inhibition of peptidoglycan synthesis allows aminoglycosides to penetrate the bacterial cell wall more easily, allowing their disruption of bacterial protein synthesis within the cell. This results in a lowered minimum bactericidal concentration (MBC) for susceptible organisms.

Penicillins, like other β-lactam antibiotics, block not only the division of bacteria, including cyanobacteria, but also the division of cyanelles, the photosynthetic organelles of the glaucophytes, and the division of chloroplasts of bryophytes. In contrast, they have no effect on the plastids of the highly developed vascular plants. This supports the endosymbiotic theory of the evolution of plastid division in land plants.

Some bacteria produce enzymes that break down the β-lactam ring, called β-lactamases, which make the bacteria resistant to penicillin. Therefore, some penicillins are modified or given with other drugs for use against antibiotic-resistant bacteria or in immunocompromised patients. The use of clavulanic acid or tazobactam, both β-lactamase inhibitors, alongside penicillin gives penicillin activity against β-lactamase-producing bacteria. β-Lactamase inhibitors irreversibly bind to β-lactamase, preventing it from breaking down the β-lactam rings on the antibiotic molecule. Alternatively, flucloxacillin is a modified penicillin that has activity against β-lactamase-producing bacteria because of an acyl side chain that protects the β-lactam ring from β-lactamase.

Pharmacokinetics
Penicillin has low protein binding in plasma. The bioavailability of penicillin depends on the type: penicillin G has low bioavailability, below 30%, whereas penicillin V has higher bioavailability, between 60 and 70%. Penicillin has a short half-life and is excreted via the kidneys. This means it must be dosed at least four times a day to maintain adequate levels of penicillin in the blood. Early manuals on the use of penicillin therefore recommended injections of penicillin as frequently as every three hours, and dosing penicillin has been described as being similar to trying to fill a bath with the plug out. This is no longer required, since much larger doses of penicillin are cheaply and easily available; however, some authorities recommend the use of continuous penicillin infusions for this reason.

Resistance
When Alexander Fleming discovered crude penicillin in 1928, one important observation he made was that many bacteria were not affected by penicillin. This phenomenon was investigated by Ernst Chain and Edward Abraham while they were trying to identify the exact nature of penicillin.
In 1940, they discovered that unsusceptible bacteria like Escherichia coli produced specific enzymes that can break down penicillin molecules, thus making them resistant to the antibiotic. They named the enzyme penicillinase. Penicillinase is now classified as a member of the family of enzymes called β-lactamases. These β-lactamases are naturally present in many other bacteria, and many bacteria produce them upon constant exposure to antibiotics. In most bacteria, resistance can arise through three different mechanisms: reduced permeability of the bacterium, reduced binding affinity of the penicillin-binding proteins (PBPs), or destruction of the antibiotic through the expression of β-lactamase. Using any of these, bacteria commonly develop resistance to different antibiotics, a phenomenon called multi-drug resistance.

The actual resistance mechanisms can be very complex. In the case of reduced permeability, the mechanisms are different between Gram-positive and Gram-negative bacteria. In Gram-positive bacteria, blockage of penicillin is due to changes in the cell wall. For example, resistance to vancomycin in S. aureus is due to additional peptidoglycan synthesis that makes the cell wall much thicker, preventing effective penicillin entry. Resistance in Gram-negative bacteria is due to mutational variations in the structure and number of porins. In bacteria like Pseudomonas aeruginosa, there is a reduced number of porins, whereas in bacteria like Enterobacter species, Escherichia coli and Klebsiella pneumoniae, there are modified porins, such as non-specific porins (such as the OmpC and OmpF groups), that cannot transport penicillin.

Resistance due to PBP alterations is highly varied. A common case is found in Streptococcus pneumoniae, where there is mutation in the gene for PBP, and the mutant PBPs have decreased binding affinity for penicillins. There are six mutant PBPs in S. pneumoniae, of which PBP1a, PBP2b, PBP2x and sometimes PBP2a are responsible for reduced binding affinity. S. aureus can activate a hidden gene that produces a different PBP, PBP2a, which has low binding affinity for penicillins. There is a different strain of S. aureus named methicillin-resistant S. aureus (MRSA), which is resistant not only to penicillin and other β-lactams, but also to most antibiotics. The bacterial strain developed after the introduction of methicillin in 1959. In MRSA, mutations in the genes (mec system) for PBP produce a variant protein called PBP2a (also termed PBP2'), while the four normal PBPs are still made. PBP2a has poor binding affinity for penicillin and also lacks the glycosyltransferase activity required for complete peptidoglycan synthesis (which is carried out by the four normal PBPs). In Helicobacter cinaedi, there are multiple mutations in different genes that make PBP variants.

Enzymatic destruction by β-lactamases is the most important mechanism of penicillin resistance, and is described as "the greatest threat to the usage [of penicillins]". It was the first mechanism of penicillin resistance to be discovered. During the experiments in 1940 in which purification and biological activity tests of penicillin were performed, it was found that E. coli was unsusceptible. The reason was found to be the production of an enzyme, penicillinase (hence the first β-lactamase known), in E. coli that easily degraded penicillin. There are over 2,000 types of β-lactamase, each of which has a unique amino acid sequence and, thus, enzymatic activity. All of them are able to hydrolyse β-lactam rings, but their exact target sites are different.
They are secreted on the bacterial surface in large quantities in Gram-positive bacteria but less so in Gram-negative species. Therefore, in a mixed bacterial infection, the Gram-positive bacteria can protect the otherwise penicillin-susceptible Gram-negative cells. There are unusual mechanisms in P. aeruginosa, in which there can be biofilm-mediated resistance and formation of multidrug-tolerant persister cells. History Discovery Starting in the late 19th century there had been reports of the antibacterial properties of Penicillium mould, but scientists were unable to discern what process was causing the effect. Scottish physician Alexander Fleming at St. Mary's Hospital in London (now part of Imperial College) was the first to show that Penicillium rubens had antibacterial properties. On 3 September 1928 he observed by chance that fungal contamination of a bacterial culture (Staphylococcus aureus) appeared to kill the bacteria. He confirmed this observation with a new experiment on 28 September 1928. He published his experiment in 1929, and called the antibacterial substance (the fungal extract) penicillin. C. J. La Touche identified the fungus as Penicillium rubrum (later reclassified by Charles Thom as P. notatum and P. chrysogenum, but later corrected as P. rubens). Fleming expressed initial optimism that penicillin would be a useful antiseptic, because of its high potency and minimal toxicity in comparison to other antiseptics of the day, and noted its laboratory value in the isolation of Bacillus influenzae (now called Haemophilus influenzae). Fleming did not convince anyone that his discovery was important. This was largely because penicillin was so difficult to isolate that its development as a drug seemed impossible. It is speculated that had Fleming been more successful at making other scientists interested in his work, penicillin would possibly have been developed years earlier. The importance of his work has been recognized by the placement of an International Historic Chemical Landmark at the Alexander Fleming Laboratory Museum in London on 19 November 1999. Development and medical application In 1930, Cecil George Paine, a pathologist at the Royal Infirmary in Sheffield, successfully treated ophthalmia neonatorum, a gonococcal infection in infants, with penicillin (fungal extract) on November 25, 1930. In 1940, Australian scientist Howard Florey (later Baron Florey) and a team of researchers (Ernst Chain, Edward Abraham, Arthur Duncan Gardner, Norman Heatley, Margaret Jennings, Jean Orr-Ewing and Arthur Gordon Sanders) at the Sir William Dunn School of Pathology, University of Oxford made progress in making concentrated penicillin from fungal culture broth that showed both in vitro and in vivo bactericidal action. In 1941, they treated a policeman, Albert Alexander, with a severe face infection; his condition improved, but then supplies of penicillin ran out and he died. Subsequently, several other patients were treated successfully. In December 1942, survivors of the Cocoanut Grove fire in Boston were the first burn patients to be successfully treated with penicillin. The first successful use of pure penicillin was in 1942 when Fleming cured Harry Lambert of an infection of the nervous system (streptococcal meningitis) which would otherwise have been fatal. By that time the Oxford team could produce only a small amount. Florey willingly gave the only available sample to Fleming. 
Lambert showed improvement from the very next day of the treatment, and was completely cured within a week. Fleming published his clinical trial in The Lancet in 1943. Following the medical breakthrough, the British War Cabinet set up the Penicillin Committee on 5 April 1943 that led to projects for mass production. Mass production As the medical application was established, the Oxford team found that it was impossible to produce usable amounts in their laboratory. Failing to persuade the British government, Florey and Heatley travelled to the US in June 1941 with their mould samples in order to interest the US government for large-scale production. They approached the USDA Northern Regional Research Laboratory (NRRL, now the National Center for Agricultural Utilization Research) at Peoria, Illinois, where facilities for large-scale fermentations were established. Mass culture of the mould and search for better moulds immediately followed. On March 14, 1942, the first patient was treated for streptococcal sepsis with US-made penicillin produced by Merck & Co. Half of the total supply produced at the time was used on that one patient, Anne Miller. By June 1942, just enough US penicillin was available to treat ten patients. In July 1943, the War Production Board drew up a plan for the mass distribution of penicillin stocks to Allied troops fighting in Europe. The results of fermentation research on corn steep liquor at the NRRL allowed the United States to produce 2.3 million doses in time for the invasion of Normandy in the spring of 1944. After a worldwide search in 1943, a mouldy cantaloupe in a Peoria, Illinois market was found to contain the best strain of mould for production using the corn steep liquor process. Six times as much penicillin could be produced compared to using Fleming's mold. Pfizer scientist Jasper H. Kane suggested using a deep-tank fermentation method for producing large quantities of pharmaceutical-grade penicillin. Large-scale production resulted from the development of a deep-tank fermentation plant by chemical engineer Margaret Hutchinson Rousseau. As a direct result of the war and the War Production Board, by June 1945, over 646 billion units per year were being produced. G. Raymond Rettew made a significant contribution to the American war effort by his techniques to produce commercial quantities of penicillin, wherein he combined his knowledge of mushroom spawn with the function of the Sharples Cream Separator. By 1943, Rettew's lab was producing most of the world's penicillin. During World War II, penicillin made a major difference in the number of deaths and amputations caused by infected wounds among Allied forces, saving an estimated 12–15% of lives. Availability was severely limited, however, by the difficulty of manufacturing large quantities of penicillin and by the rapid renal clearance of the drug, necessitating frequent dosing. Methods for mass production of penicillin were patented by Andrew Jackson Moyer in 1945. Florey had not patented penicillin, having been advised by Sir Henry Dale that doing so would be unethical. Penicillin is actively excreted, and about 80% of a penicillin dose is cleared from the body within three to four hours of administration. Indeed, during the early penicillin era, the drug was so scarce and so highly valued that it became common to collect the urine from patients being treated, so that the penicillin in the urine could be isolated and reused. 
This was not a satisfactory solution, so researchers looked for a way to slow penicillin excretion. They hoped to find a molecule that could compete with penicillin for the organic acid transporter responsible for excretion, such that the transporter would preferentially excrete the competing molecule and the penicillin would be retained. The uricosuric agent probenecid proved to be suitable. When probenecid and penicillin are administered together, probenecid competitively inhibits the excretion of penicillin, increasing penicillin's concentration and prolonging its activity. Eventually, the advent of mass-production techniques and semi-synthetic penicillins resolved the supply issues, so this use of probenecid declined. Probenecid is still useful, however, for certain infections requiring particularly high concentrations of penicillins. After World War II, Australia was the first country to make the drug available for civilian use. In the U.S., penicillin was made available to the general public on March 15, 1945. Fleming, Florey, and Chain shared the 1945 Nobel Prize in Physiology or Medicine for the development of penicillin. Structure determination and total synthesis The chemical structure of penicillin was first proposed by Edward Abraham in 1942 and was later confirmed in 1945 using X-ray crystallography by Dorothy Crowfoot Hodgkin, who was also working at Oxford. She later in 1964 received the Nobel Prize for Chemistry for this and other structure determinations. Chemist John C. Sheehan at the Massachusetts Institute of Technology (MIT) completed the first chemical synthesis of penicillin in 1957. Sheehan had started his studies into penicillin synthesis in 1948, and during these investigations developed new methods for the synthesis of peptides, as well as new protecting groups—groups that mask the reactivity of certain functional groups. Although the initial synthesis developed by Sheehan was not appropriate for mass production of penicillins, one of the intermediate compounds in Sheehan's synthesis was 6-aminopenicillanic acid (6-APA), the nucleus of penicillin. 6-APA was discovered by researchers at the Beecham Research Laboratories (later the Beecham Group) in Surrey in 1957 (published in 1959). Attaching different groups to the 6-APA 'nucleus' of penicillin allowed the creation of new forms of penicillins which are more versatile and better in activity. Developments from penicillin The narrow range of treatable diseases or "spectrum of activity" of the penicillins, along with the poor activity of the orally active phenoxymethylpenicillin, led to the search for derivatives of penicillin that could treat a wider range of infections. The isolation of 6-APA, the nucleus of penicillin, allowed for the preparation of semisynthetic penicillins, with various improvements over benzylpenicillin (bioavailability, spectrum, stability, tolerance). The first major development was ampicillin in 1961. It offered a broader spectrum of activity than either of the original penicillins. Further development yielded β-lactamase-resistant penicillins, including flucloxacillin, dicloxacillin, and methicillin. These were significant for their activity against β-lactamase-producing bacterial species, but were ineffective against the methicillin-resistant Staphylococcus aureus (MRSA) strains that subsequently emerged. 
Another development of the line of true penicillins was the antipseudomonal penicillins, such as carbenicillin, ticarcillin, and piperacillin, useful for their activity against Gram-negative bacteria. However, the usefulness of the β-lactam ring was such that related antibiotics, including the mecillinams, the carbapenems, and, most important, the cephalosporins, still retain it at the center of their structures.

Production
Penicillin is produced by the fermentation of various types of sugar by the fungus Penicillium rubens. The fermentation process produces penicillin as a secondary metabolite when the growth of the fungus is inhibited by stress. The biosynthetic pathway outlined below is subject to feedback inhibition, with the by-product L-lysine inhibiting the enzyme homocitrate synthase.

α-ketoglutarate + AcCoA → homocitrate → L-α-aminoadipic acid → L-lysine + β-lactam

The Penicillium cells are grown using a technique called fed-batch culture, in which the cells are constantly subjected to stress, which is required for induction of penicillin production. While the usage of glucose as a carbon source represses penicillin biosynthesis enzymes, lactose does not exert any effect and alkaline pH levels override this regulation. Excess phosphate, available oxygen, and usage of ammonium as a nitrogen source repress penicillin production, while methionine can act as a sole nitrogen/sulfur source with stimulating effects.

The biotechnological method of directed evolution has been applied to produce by mutation a large number of Penicillium strains. These techniques include error-prone PCR, DNA shuffling, ITCHY, and strand-overlap PCR.

Biosynthesis
The biosynthetic gene cluster for penicillin was first cloned and sequenced in 1990. Overall, there are three main steps in the biosynthesis of penicillin G (benzylpenicillin).

The first step is the condensation of three amino acids (L-α-aminoadipic acid, L-cysteine and L-valine) into a tripeptide. Before condensing into the tripeptide, the amino acid L-valine must undergo epimerization to become D-valine. The condensed tripeptide is named δ-(L-α-aminoadipyl)-L-cysteine-D-valine (ACV). The condensation reaction and epimerization are both catalysed by the enzyme δ-(L-α-aminoadipyl)-L-cysteine-D-valine synthetase (ACVS), a nonribosomal peptide synthetase or NRPS.

The second step in the biosynthesis of penicillin G is the oxidative conversion of linear ACV into the bicyclic intermediate isopenicillin N by isopenicillin N synthase (IPNS), which is encoded by the gene pcbC. Isopenicillin N is a very weak intermediate, because it does not show strong antibiotic activity.

The final step is a transamidation by isopenicillin N N-acyltransferase, in which the α-aminoadipyl side-chain of isopenicillin N is removed and exchanged for a phenylacetyl side-chain. This reaction is encoded by the gene penDE, which is unique in the process of obtaining penicillins.
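The three enzymatic steps above can be summarised compactly. The sketch below is only an illustrative restatement of the pathway described in this section: the enzyme and gene names follow the text, and the gene encoding ACVS is deliberately left unspecified because it is not named here. It is a Python sketch for orientation, not an authoritative annotation of the penicillin gene cluster.

```python
# Illustrative summary of penicillin G biosynthesis as described above.
# The ACVS gene is not named in the text, so it is left as None rather
# than guessed.

PATHWAY = [
    {
        "step": 1,
        "enzyme": "ACV synthetase (ACVS, a nonribosomal peptide synthetase)",
        "gene": None,  # not stated in the text above
        "substrates": ["L-alpha-aminoadipic acid", "L-cysteine", "L-valine"],
        "product": "delta-(L-alpha-aminoadipyl)-L-cysteine-D-valine (ACV)",
        "note": "L-valine is epimerized to D-valine during condensation",
    },
    {
        "step": 2,
        "enzyme": "isopenicillin N synthase (IPNS)",
        "gene": "pcbC",
        "substrates": ["ACV"],
        "product": "isopenicillin N (bicyclic, weakly active)",
        "note": "oxidative cyclisation forms the beta-lactam and thiazolidine rings",
    },
    {
        "step": 3,
        "enzyme": "isopenicillin N N-acyltransferase",
        "gene": "penDE",
        "substrates": ["isopenicillin N", "phenylacetyl group"],
        "product": "penicillin G (benzylpenicillin)",
        "note": "alpha-aminoadipyl side chain exchanged for phenylacetyl",
    },
]

if __name__ == "__main__":
    for s in PATHWAY:
        print(f"Step {s['step']}: {' + '.join(s['substrates'])} -> {s['product']}")
        print(f"  enzyme: {s['enzyme']} (gene: {s['gene'] or 'not stated'})")
        print(f"  note:   {s['note']}")
```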
Physician
A physician, medical practitioner (British English), medical doctor, or simply doctor is a health professional who practices medicine, which is concerned with promoting, maintaining or restoring health through the study, diagnosis, prognosis and treatment of disease, injury, and other physical and mental impairments. Physicians may focus their practice on certain disease categories, types of patients, and methods of treatment—known as specialities—or they may assume responsibility for the provision of continuing and comprehensive medical care to individuals, families, and communities—known as general practice. Medical practice properly requires both a detailed knowledge of the academic disciplines, such as anatomy and physiology, underlying diseases, and their treatment, which is the science of medicine, and a decent competence in its applied practice, which is the art or craft of the profession. Both the role of the physician and the meaning of the word itself vary around the world. Degrees and other qualifications vary widely, but there are some common elements, such as medical ethics requiring that physicians show consideration, compassion, and benevolence for their patients. Modern meanings Specialist in internal medicine Around the world, the term physician refers to a specialist in internal medicine or one of its many sub-specialties (especially as opposed to a specialist in surgery). This meaning of physician conveys a sense of expertise in treatment by drugs or medications, rather than by the procedures of surgeons. This term is at least nine hundred years old in English: physicians and surgeons were once members of separate professions, and traditionally were rivals. The Shorter Oxford English Dictionary, third edition, gives a Middle English quotation making this contrast, from as early as 1400: "O Lord, whi is it so greet difference betwixe a cirugian and a physician." Henry VIII granted a charter to the London Royal College of Physicians in 1518. It was not until 1540 that he granted the Company of Barber-Surgeons (ancestor of the Royal College of Surgeons) its separate charter. In the same year, the English monarch established the Regius Professorship of Physic at the University of Cambridge. Newer universities would probably describe such an academic as a professor of internal medicine. Hence, in the 16th century, physic meant roughly what internal medicine does now. Currently, a specialist physician in the United States may be described as an internist. Another term, hospitalist, was introduced in 1996, to describe US specialists in internal medicine who work largely or exclusively in hospitals. Such 'hospitalists' now make up about 19% of all US general internists, who are often called general physicians in Commonwealth countries. This original use, as distinct from surgeon, is common in most of the world including the United Kingdom and other Commonwealth countries (such as Australia, Bangladesh, India, New Zealand, Pakistan, South Africa, Sri Lanka, and Zimbabwe), as well as in places as diverse as Brazil, Hong Kong, Indonesia, Japan, Ireland, and Taiwan. In such places, the more general English terms doctor or medical practitioner are prevalent, describing any practitioner of medicine (whom an American would likely call a physician, in the broad sense). In Commonwealth countries, specialist pediatricians and geriatricians are also described as specialist physicians who have sub-specialized by age of patient rather than by organ system. 
Physician and surgeon Around the world, the combined term "physician and surgeon" is used to describe either a general practitioner or any medical practitioner irrespective of specialty. This usage still shows the original meaning of physician and preserves the old difference between a physician, as a practitioner of physic, and a surgeon. The term may be used by state medical boards in the United States, and by equivalent bodies in Canadian provinces, to describe any medical practitioner. North America In modern English, the term physician is used in two main ways, with relatively broad and narrow meanings respectively. This is the result of history and is often confusing. These meanings and variations are explained below. In the United States and Canada, the term physician describes all medical practitioners holding a professional medical degree. The American Medical Association, established in 1847, as well as the American Osteopathic Association, founded in 1897, both currently use the term physician to describe members. However, the American College of Physicians, established in 1915, does not: its title uses physician in its original sense. American physicians The vast majority of physicians trained in the United States have a Doctor of Medicine degree, and use the initials M.D. A smaller number attend osteopathic medical schools and have a Doctor of Osteopathic Medicine degree and use the initials D.O. The World Directory of Medical Schools lists both MD and DO granting schools as medical schools located in the United States. After completion of medical school, physicians complete a residency in the specialty in which they will practice. Subspecialties require the completion of a fellowship after residency. Both MD and DO physicians participate in the National Resident Matching Program (NRMP) and attend ACGME-accredited residencies and fellowships across all medical specialties to obtain licensure. All boards of certification now require that physicians demonstrate, by examination, continuing mastery of the core knowledge and skills for a chosen specialty. Recertification varies by particular specialty between every seven and every ten years. Primary care Primary care physicians guide patients in preventing disease and detecting health problems early while they are still treatable. They are divided into two types: family medicine doctors and internal medicine doctors. Family doctors, or family physicians, are trained to care for patients of any age, while internists are trained to care for adults. Family doctors receive training in a variety of care and are therefore also referred to as general practitioners. Family medicine grew out of the general practitioner movement of the 1960s in response to the growing specialization in medicine that was seen as threatening to the doctor-patient relationship and continuity of care. Podiatry In the United States, the American Podiatric Medical Association (APMA) defines podiatrists as physicians and surgeons who treat the foot, ankle, and associated structures of the leg. Podiatrists undergo training with the Doctor of Podiatric Medicine (DPM) degree. The American Medical Association (AMA), however, advocates for the definition of a physician as "an individual possessing degree of either a Doctor of Medicine or Doctor of Osteopathic Medicine." In the US, podiatrists are required to complete three to four years of podiatry residency upon graduating with a DPM degree. 
After residency, one to two years of fellowship programs are available in plastic surgery, foot and ankle reconstructive surgery, sports medicine, and wound care. Podiatry residencies and/or fellowships are not accredited by the ACGME. The overall scope of podiatric practice varies from state to state and is not similar to that of physicians holding an MD or DO degree. DPM is also available at one Canadian university, namely the ; students are typically required to complete an internship in New York prior to obtaining their professional degree. The World Directory of Medical Schools does not list US or Canadian schools of podiatric medicine as medical schools and only lists US-granted MD, DO, and Canadian MD programs as medical schools for the respective regions.

Shortage
Many countries in the developing world have the problem of too few physicians. In 2015, the Association of American Medical Colleges warned that the US would face a doctor shortage of as many as 90,000 by 2025.

Social role and world view

Biomedicine
Within Western culture and over recent centuries, medicine has become increasingly based on scientific reductionism and materialism. This style of medicine is now dominant throughout the industrialized world, and is often termed biomedicine by medical anthropologists. Biomedicine "formulates the human body and disease in a culturally distinctive pattern", and is a world view learnt by medical students. Within this tradition, the medical model is a term for the complete "set of procedures in which all doctors are trained", including mental attitudes. A particularly clear expression of this world view, currently dominant among conventional physicians, is evidence-based medicine. Within conventional medicine, most physicians still pay heed to their ancient traditions. In this Western tradition, physicians are considered to be members of a learned profession, and enjoy high social status, often combined with expectations of a high and stable income and job security. However, medical practitioners often work long and inflexible hours, with shifts at unsociable times. Their high status stems partly from their extensive training requirements, and also from their occupation's special ethical and legal duties. The term traditionally used by physicians to describe a person seeking their help is the word patient (although one who visits a physician for a routine check-up may also be so described). This word patient is an ancient reminder of medical duty, as it originally meant 'one who suffers'. The English noun comes from the Latin word patiens, the present participle of the deponent verb, patior, meaning 'I am suffering', and akin to the Greek verb (romanized: paschein, lit. to suffer) and its cognate noun πάθος (pathos, suffering). Physicians in the original, narrow sense (specialist physicians or internists, see above) are commonly members or fellows of professional organizations, such as the American College of Physicians or the Royal College of Physicians in the United Kingdom, and such hard-won membership is itself a mark of status.

Alternative medicine
While contemporary biomedicine has distanced itself from its ancient roots in religion and magic, many forms of traditional medicine and alternative medicine continue to espouse vitalism in various guises: "As long as life had its own secret properties, it was possible to have sciences and medicines based on those properties".
The US National Center for Complementary and Alternative Medicine (NCCAM) classifies complementary and alternative medicine therapies into five categories or domains, including: alternative medical systems, or complete systems of therapy and practice; mind-body interventions, or techniques designed to facilitate the mind's effect on bodily functions and symptoms; biologically based systems including herbalism; and manipulative and body-based methods such as chiropractic and massage therapy. In considering these alternate traditions that differ from biomedicine (see above), medical anthropologists emphasize that all ways of thinking about health and disease have a significant cultural content, including conventional western medicine. Ayurveda, Unani medicine, and homeopathy are popular types of alternative medicine. Physicians' own health Some commentators have argued that physicians have duties to serve as role models for the general public in matters of health, for example by not smoking cigarettes. Indeed, in most western nations relatively few physicians smoke, and their professional knowledge does appear to have a beneficial effect on their health and lifestyle. According to a study of male physicians in the United States, life expectancy is slightly higher for physicians (73 years for white and 69 years for black) than lawyers or many other highly educated professionals. Causes of death which are less likely to occur in physicians than the general population include respiratory disease (including pneumonia, pneumoconioses, COPD, but excluding emphysema and other chronic airway obstruction), alcohol-related deaths, rectosigmoid and anal cancers, and bacterial diseases. Physicians do experience exposure to occupational hazards, and there is a well-known aphorism that "doctors make the worst patients". Causes of death that are shown to be higher in the physician population include suicide among doctors and self-inflicted injury, drug-related causes, traffic accidents, and cerebrovascular and ischaemic heart disease. Physicians are also prone to occupational burnout. This manifests as a long-term stress reaction characterized by poorer quality of care towards patients, emotional exhaustion, a feeling of decreased personal achievement, and others. A study by the Agency for Healthcare Research and Quality reported that time pressure was the greatest cause of burnout; a survey from the American Medical Association reported that more than half of all respondents chose "too many bureaucratic tasks" as the leading cause of burnout. Education and training Medical education and career pathways for doctors vary considerably across the world. All medical practitioners In all developed countries, entry-level medical education programs are tertiary-level courses, undertaken at a medical school attached to a university. Depending on jurisdiction and university, entry may follow directly from secondary school or require pre-requisite undergraduate education. The former commonly takes five or six years to complete. Programs that require previous undergraduate education (typically a three- or four-year degree, often in science) are usually four or five years in length. Hence, gaining a basic medical degree may typically take from five to eight years, depending on jurisdiction and university. Following the completion of entry-level training, newly graduated medical practitioners are often required to undertake a period of supervised practice before full registration is granted, typically one or two years. 
This may be referred to as an "internship", as the "foundation" years in the UK, or as "conditional registration". Some jurisdictions, including the United States, require residencies for practice. Medical practitioners hold a medical degree specific to the university from which they graduated. This degree qualifies the medical practitioner to become licensed or registered under the laws of that particular country, and sometimes of several countries, subject to requirements for an internship or conditional registration.

Specialists in internal medicine
In some jurisdictions, specialty training is begun immediately following completion of entry-level training, or even before. In other jurisdictions, junior medical doctors must undertake generalist (un-streamed) training for one or more years before commencing specialization. Hence, depending on the jurisdiction, a specialist physician (internist) often does not achieve recognition as a specialist until twelve or more years after commencing basic medical training—five to eight years at university to obtain a basic medical qualification, and up to another nine years to become a specialist.

Regulation
In most jurisdictions, physicians (in either sense of the word) need government permission to practice. Such permission is intended to promote public safety, and often to protect government spending, as medical care is commonly subsidized by national governments. In some jurisdictions, such as Singapore, it is common for physicians to inflate their qualifications with the title "Dr" in correspondence or namecards, even if their qualifications are limited to a basic (e.g., bachelor level) degree. In other countries, such as Germany, only physicians holding an academic doctorate may call themselves doctor – on the other hand, the European Research Council has decided that the German medical doctorate does not meet the international standards of a PhD research degree.

All medical practitioners
Among the English-speaking countries, this process is known either as licensure, as in the United States, or as registration in the United Kingdom, other Commonwealth countries, and Ireland. Synonyms in use elsewhere include colegiación in Spain, ishi menkyo in Japan, autorisasjon in Norway, Approbation in Germany, and in Greece. In France, Italy and Portugal, civilian physicians must be members of the Order of Physicians to practice medicine. In some countries, including the United Kingdom and Ireland, the profession largely regulates itself, with the government affirming the regulating body's authority. The best-known example of this is probably the General Medical Council of Britain. In all countries, the regulating authorities will revoke permission to practice in cases of malpractice or serious misconduct. In the large English-speaking federations (United States, Canada, Australia), the licensing or registration of medical practitioners is done at a state or provincial level, or nationally as in New Zealand. Australian states usually have a "Medical Board", which has now been replaced by the Australian Health Practitioner Regulation Agency (AHPRA) in most states, while Canadian provinces usually have a "College of Physicians and Surgeons". All American states have an agency that is usually called the "Medical Board", although there are alternate names such as "Board of Medicine", "Board of Medical Examiners", "Board of Medical Licensure", "Board of Healing Arts" or some other variation.
After graduating from a first-professional school, physicians who wish to practice in the US usually take standardized exams, such as the USMLE for a Doctor in Medicine. Specialists in internal medicine Most countries have some method of officially recognizing specialist qualifications in all branches of medicine, including internal medicine. Sometimes, this aims to promote public safety by restricting the use of hazardous treatments. Other reasons for regulating specialists may include standardization of recognition for hospital employment and restriction on which practitioners are entitled to receive higher insurance payments for specialist services. Performance and professionalism supervision The issue of medical errors, drug abuse, and other issues in physician professional behavior received significant attention across the world, in particular following a critical 2000 report which "arguably launched" the patient-safety movement. In the US, as of 2006 there were few organizations that systematically monitored performance. In the US, only the Department of Veterans Affairs randomly drug tests physicians, in contrast to drug testing practices for other professions that have a major impact on public welfare. Licensing boards at the US state-level depend upon continuing education to maintain competence. Through the utilization of the National Practitioner Data Bank, Federation of State Medical Boards' disciplinary report, and American Medical Association Physician Profile Service, the 67 State Medical Boards continually self-report any adverse/disciplinary actions taken against a licensed physician in order that the other Medical Boards in which the physician holds or is applying for a medical license will be properly notified so that corrective, reciprocal action can be taken against the offending physician. In Europe, as of 2009 the health systems are governed according to various national laws, and can also vary according to regional differences similar to the United States.
Pound (mass)
The pound or pound-mass is a unit of mass used in both the British imperial and United States customary systems of measurement. Various definitions have been used; the most common today is the international avoirdupois pound, which is legally defined as exactly 0.45359237 kilograms, and which is divided into 16 avoirdupois ounces. The international standard symbol for the avoirdupois pound is lb; alternative symbols (when there might otherwise be a risk of confusion with the pound-force) are lbm (for most pound definitions) and # (chiefly in the U.S.), with a separate symbol used specifically for the apothecaries' pound. The unit is descended from the Roman libra (hence the symbol lb, descended from its scribal abbreviation). The English word pound comes from the Roman libra pondo ('the weight measured in libra'), and is cognate with, among others, German Pfund, Dutch pond, and Swedish pund. These units are now designated as historical and are no longer in common usage, having been replaced by the metric system. Usage of the unqualified term pound reflects the historical conflation of mass and weight. This accounts for the modern distinguishing terms pound-mass and pound-force.

Etymology
The word 'pound' and its cognates ultimately derive from a borrowing into Proto-Germanic of the Latin expression libra pondo ('the weight measured in libra'), in which the word pondo is the ablative singular of the Latin noun pondus ('weight').

Current use
The United States and the Commonwealth of Nations agreed upon common definitions for the pound and the yard. Since 1 July 1959, the international avoirdupois pound (symbol lb) has been defined as exactly 0.45359237 kg. In the United Kingdom, the use of the international pound was implemented in the Weights and Measures Act 1963. An avoirdupois pound is equal to 16 avoirdupois ounces and to exactly 7,000 grains. The conversion factor between the kilogram and the international pound was therefore chosen to be divisible by 7 with a terminating decimal representation, and an (international) grain is thus equal to exactly 64.79891 milligrams. In the United Kingdom, the process of metrication and European units of measurement directives were expected to eliminate the use of the pound and ounce, but in 2007 the European Commission abandoned the requirement for metric-only labelling on packaged goods there, and allowed for dual metric–imperial marking to continue indefinitely. In the United States, the Metric Conversion Act of 1975 declared the metric system to be the "preferred system of weights and measures" but did not suspend use of United States customary units; the United States is the only industrialised country where commercial activities do not predominantly use the metric system, despite many efforts to do so, and the pound remains widely used as one of the key customary units.

Historical use
Historically, in different parts of the world, at different points in time, and for different applications, the pound (or its translation) has referred to broadly similar but not identical standards of mass or force.

Roman
The libra (Latin for 'scale' or 'balance') is an ancient Roman unit of mass, now estimated at about 328.9 grams. It was divided into 12 unciae (singular: uncia), or ounces. The libra is the origin of the abbreviation for pound, "lb".

In Britain
A number of different definitions of the pound have historically been used in Britain. Among these are the avoirdupois pound, which is the common pound used for weights, and the obsolete tower, merchants' and London pounds. The troy pound and ounce remain in use only for the weight of precious metals, especially in their trade.
The weights of traded precious metals, such as gold and silver, are normally quoted just in ounces (e.g. "500 ounces") and, when the type of ounce is not explicitly stated, the troy system is assumed. The pound sterling money system, which was introduced during the reign of King Offa of Mercia (757–96), was based originally on a Saxon pound of silver. After the Norman conquest the Saxon pound was known as the tower pound or moneyer's pound. In 1528, during the reign of Henry VIII, the coinage standard was changed by parliament from the tower pound to the troy pound. Avoirdupois pound The avoirdupois pound, also known as the wool pound, first came into general use c. 1300. It was initially equal to 6,992 troy grains. The pound avoirdupois was divided into 16 ounces. During the reign of Queen Elizabeth I, the avoirdupois pound was redefined as 7,000 troy grains. Since then, the grain has often been an integral part of the avoirdupois system. By 1758, two Elizabethan Exchequer standard weights for the avoirdupois pound existed, and when measured in troy grains they were found to be of 7,002 grains and 6,999 grains. Imperial Standard Pound In the United Kingdom, weights and measures have been defined by a long series of Acts of Parliament, the intention of which has been to regulate the sale of commodities. Materials traded in the marketplace are quantified according to accepted units and standards in order to avoid fraud. The standards themselves are legally defined so as to facilitate the resolution of disputes brought to the courts; only legally defined measures will be recognised by the courts. Quantifying devices used by traders (weights, weighing machines, containers of volumes, measures of length) are subject to official inspection, and penalties apply if they are fraudulent. The Weights and Measures Act 1878 (41 & 42 Vict. c. 49) marked a major overhaul of the British system of weights and measures, and the definition of the pound given there remained in force until the 1960s. The pound was defined thus (Section 4) "The ... platinum weight ... deposited in the Standards department of the Board of Trade ... shall continue to be the imperial standard of ... weight ... and the said platinum weight shall continue to be the Imperial Standard for determining the Imperial Standard Pound for the United Kingdom". Paragraph 13 states that the weight of this standard shall be called the Imperial Standard Pound, and that all other weights mentioned in the act and permissible for commerce shall be ascertained from it alone. The first schedule of the act gave more details of the standard pound: it is a platinum cylinder nearly high, and diameter, and the edges are carefully rounded off. It has a groove about from the top, to allow the cylinder to be lifted using an ivory fork. It was constructed following the destruction of the Houses of Parliament by fire in 1834, and is stamped "P.S. 1844, 1 lb" (P.S. stands for "Parliamentary Standard"). Redefinition in terms of the kilogram The British Weights and Measures Act 1878 (41 & 42 Vict. c. 49) said that contracts worded in terms of metric units would be deemed by the courts to be made according to the Imperial units defined in the Act, and a table of metric equivalents was supplied so that the Imperial equivalents could be legally calculated. This defined, in UK law, metric units in terms of Imperial ones. The equivalence for the pound was given as 1 lb = or 0.45359 kg, which made the kilogram equivalent to about . 
In 1883, it was determined jointly by the standards department of the British Board of Trade and the Bureau International that was a better approximation, and this figure, rounded to was given legal status by an Order in Council in May 1898. In 1959, based on further measurements and international coordination, the International Yard and Pound Agreement defined an "international pound" as being equivalent to exactly . This meant that the existing legal definition of the UK pound differed from the international standard pound by . To remedy this, the pound was again redefined in the United Kingdom by the Weights and Measures Act 1963 to match the international pound, stating: "the pound shall be 0.453 592 37 kilogramme exactly", a definition which remains valid to the present day. The 2019 revision of the SI means that the pound is now defined precisely in terms of fundamental constants, ending the era of its definition in terms of physical prototypes. Troy pound A troy pound (abbreviated lb t) is equal to 12 troy ounces and to 5,760 grains, that is exactly grams. Troy weights were used in England by jewellers. Apothecaries also used the troy pound and ounce, but added the drachms and scruples unit in the apothecaries' system of weights. Troy weight may take its name from the French market town of Troyes in France where English merchants traded at least as early as the early 9th century. The troy pound is no longer in general use or a legal unit for trade (it was abolished in the United Kingdom on 6 January 1879 by the Weights and Measures Act 1878), but the troy ounce, of a troy pound, is still used for measurements of gems such as opals, and precious metals such as silver, platinum and particularly gold. Tower pound A tower pound is equal to 12 tower ounces and to 5,400 troy grains, which equals around 350 grams. The tower pound is the historical weight standard that was used for England's coinage. Before the Norman conquest in 1066, the tower pound was known as the Saxon pound. During the reign of King Offa (757–96) of Mercia, a Saxon pound of silver was used to set the original weight of a pound sterling. From one Saxon pound of silver (that is a tower pound) the king had 240 silver pennies minted. In the pound sterling monetary system, twelve pennies equaled a shilling and twenty shillings equaled a pound sterling. The tower pound was referenced to a standard prototype found in the Tower of London. The tower system ran concurrently with the avoirdupois and troy systems until the reign of Henry VIII, when a royal proclamation dated 1526 required that the troy pound be used for mint purposes instead of the tower pound. No standards of the tower pound are known to have survived. The tower pound was also called the moneyers' pound (referring to the Saxon moneyers before the Norman conquest); the easterling pound, which may refer to traders of eastern Germany, or to traders on the shore of the eastern Baltic sea, or dealers of Asiatic goods who settled at the London Steelyard wharf; and the Rochelle pound by French writers, because it was also in use at La Rochelle. An almost identical weight was employed by the Germans for weighing gold and silver. The mercantile pound (1304) of 6750 troy grains, or 9600 Tower grains, derives from this pound, as 25 shilling-weights or 15 Tower ounces, for general commercial use. Multiple pounds based on the same ounce were quite common. In much of Europe, the apothecaries' and commercial pounds were different numbers of the same ounce. 
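All of the British pounds described in this section are defined as whole numbers of the same grain, so their metric values follow directly from the modern definition quoted above (7,000 grains = 0.45359237 kg exactly, i.e. one grain = 64.79891 mg). The short sketch below simply performs that arithmetic for the grain counts given in this article; it assumes the modern international grain throughout (historical grains varied very slightly) and is an illustrative check, not an authoritative metrological table.

```python
# Metric equivalents of grain-based British pounds, derived from the modern
# definition: 1 avoirdupois pound = 7,000 grains = 453.59237 g exactly.
GRAMS_PER_GRAIN = 453.59237 / 7000        # = 0.06479891 g per grain, exactly

pounds_in_grains = {
    "avoirdupois pound": 7000,   # 16 avoirdupois ounces
    "troy pound":        5760,   # 12 troy ounces
    "tower pound":       5400,   # "around 350 grams" in the text above
    "merchants' pound":  6750,   # the mercantile pound of 1304
    "London pound":      7200,   # 16 troy ounces (described below)
}

for name, grains in pounds_in_grains.items():
    print(f"{name:>18}: {grains:5d} grains = {grains * GRAMS_PER_GRAIN:8.3f} g")
```

The troy pound works out to about 373.24 g and the tower pound to about 349.9 g, consistent with the approximate figures quoted elsewhere in this article.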
Merchants' pound The merchants' pound (mercantile pound, , or commercial pound) was considered to be composed of 25 rather than 20 Tower shillings of 12 pence. It was equal to 9,600 wheat grains (15 tower ounces or 6,750 grains) and was used in England until the 14th century for goods other than money and medicine ("electuaries"). London pound The London pound is that of the Hansa, as used in their various trading places. The London pound is based on 16 ounces, each ounce divided as the tower ounce. It never became a legal standard in England; the use of this pound waxed and waned with the influence of the Hansa itself. A London pound was equal to 7,200 troy grains (16 troy ounces) or, equivalently, 10,240 tower grains (16 tower ounces). In the United States In the United States, the avoirdupois pound as a unit of mass has been officially defined in terms of the kilogram since the Mendenhall Order of 1893. That order defined the pound to be pounds to a kilogram. The following year, this relationship was refined as pounds to a kilogram, following a determination of the British pound. In 1959, the United States National Bureau of Standards redefined the pound (avoirdupois) to be exactly equal to 0.453 592 37 kilograms, as had been declared by the International Yard and Pound Agreement of that year. According to a 1959 NIST publication, the United States 1894 pound differed from the international pound by approximately one part in 10 million. The difference is so insignificant that it can be ignored for almost all practical purposes. Byzantine litra The Byzantines used a series of measurements known as pounds (, ). The most common was the (, "pound of account"), established by Constantine the Great in 309/310. It formed the basis of the Byzantine monetary system, with one of gold equivalent to 72 . A hundred were known as a (, "hundredweight"). Its weight seems to have decreased gradually from the original to . Due to its association with gold, it was also known as the (, "gold pound") or (, "maritime pound"), but it could also be used as a measure of land, equalling a fortieth of the . The was specifically used for weighing olive oil or wood, and corresponded to 4/5 of the or . Some outlying regions, especially in later times, adopted various local measures, based on Italian, Arab or Turkish measures. The most important of these was the (, "silver pound") of , found in Trebizond and Cyprus, and probably of Arab origin. French livre Since the Middle Ages, various pounds () have been used in France. Since the 19th century, a has referred to the metric pound, 500 g. The is equivalent to about and was used between the late 9th century and the mid-14th century. The or is equivalent to about and was used between the 1350s and the late 18th century. It was introduced by the government of John II. The was set equal to the kilogram by the decree of between 1800 and 1812. This was a form of official metric pound. The (customary unit) was defined as by the decree of 28 March 1812. It was abolished as a unit of mass effective 1 January 1840 by a decree of 4 July 1837, but is still used informally. German and Austrian Pfund Originally derived from the Roman libra, the definition varied throughout the Holy Roman Empire in the Middle Ages and onward. For example, the measures and weights of the Habsburg monarchy were reformed in 1761 by Empress Maria Theresa of Austria. The unusually heavy Habsburg (civil) pound of 16 ounces was later defined in terms of . 
Bavarian reforms in 1809 and 1811 adopted essentially the same standard as the Austrian pound. In Prussia, a reform in 1816 defined a uniform civil pound in terms of the Prussian foot and distilled water, resulting in a Prussian pound of . Between 1803 and 1815, all German regions west of the River Rhine were under French control, organised in the departements: Roer, Sarre, Rhin-et-Moselle, and Mont-Tonnerre. As a result of the Congress of Vienna, these regions again became part of various German states. However, many of these regions retained the metric system and adopted a metric pound of precisely . In 1854, the pound of 500 g also became the official mass standard of the German Customs Union and was renamed the , but local pounds continued to co-exist with the pound for some time in some German states. Nowadays, the term is sometimes still in use and universally refers to a pound of 500 g. Russian The Russian pound (, ) is an obsolete Russian unit of measurement of mass. It is equal to . In 1899, the was the basic unit of weight, and all other units of weight were formed from it; in particular, a was of a funt, and a was 40 . The was a Scandinavian measurement that varied in weight between regions. From the 17th century onward, it was equal to in Sweden but was abandoned in 1889 when Sweden switched to the metric system. In Norway, the same name was used for a weight of . In Denmark, it equaled . In the 19th century, Denmark followed Germany's lead and redefined the pound as . Portuguese and The Portuguese unit that corresponds to the pounds of different nations is the , equivalent to 16 ounces of , a variant of the Cologne standard. This was introduced in 1499 by Manuel I, king of Portugal. Based on an evaluation of bronze nesting weight piles distributed by Manuel I to different towns, the of Manuel I has been estimated to be of . In the early 19th century, the was evaluated at . In the 15th century, the was of 14 ounces of or . The Portuguese was the same as 2 . There were also of 12.5 and 13 ounces and of 15 and 16 ounces. The or standard was also used. Jersey pound A Jersey pound is an obsolete unit of mass used on the island of Jersey from the 14th century to the 19th century. It was equivalent to about 7,561 grains (). It may have been derived from the French livre poids de marc. Trone pound The trone pound is one of a number of obsolete Scottish units of measurement. It was equivalent to between 21 and 28 avoirdupois ounces (about ). Metric pound In many countries, upon the introduction of a metric system, the pound (or its translation) became an historic and obsolete term, although some have kept it as an informal term without a specific value. In German, the term is , in French , in Dutch , in Spanish and Portuguese , in Italian , and in Danish and Swedish . Though not from the same linguistic origin, the Chinese (, also known as the "catty") in mainland China has a modern definition of exactly , divided into 10 (). Traditionally around , the has been in use for more than two thousand years varying in exact value from one period to another, serving the same purpose as "pound" for the common-use measure of weight. In Hong Kong, for the purposes of commerce and trade between Britain and Imperial China in the preceding centuries, three Chinese catties were equivalent to four British imperial pounds, defining one catty as in weight precisely. Hundreds of older pounds were replaced in this way. 
Examples of the older pounds are one of around in Spain, Portugal, and Latin America; one of in Norway; and several different ones in what is now Germany. From the introduction of the kilogram scales and measuring devices are denominated only in grams and kilograms. A pound of product must be determined by weighing the product in grams as the use of the pound is not sanctioned for trade within the European Union. Use in weaponry Smoothbore cannon and carronades are currently designated by the weight in imperial pounds of round solid iron shot of diameter to fit the barrel. A cannon that fires a six-pound ball, for example, is called a six-pounder. Standard sizes are 6, 12, 18, 24, 32, and 42 pounds; 60-pounders and 68-pounders also exist, along with other nonstandard weapons using the same scheme. Before the introduction of the metric system, countries that produced their own artillery generally used their national pound for these designations. See carronade. A similar definition, using lead balls, exists for determining the gauge of shotguns and shotgun shells.
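As an illustration of both naming schemes, the sketch below (Python) estimates the bore implied by a shot-weight designation by treating the projectile as a solid sphere. The densities are assumed round figures (about 7,200 kg/m³ for cast iron and 11,340 kg/m³ for lead) and are not values given in the text.

```python
import math

LB_KG = 0.45359237  # international avoirdupois pound in kilograms

def ball_diameter_mm(weight_lb: float, density_kg_m3: float) -> float:
    """Diameter of a solid sphere with the given weight and density, in mm."""
    volume_m3 = weight_lb * LB_KG / density_kg_m3
    return (6 * volume_m3 / math.pi) ** (1 / 3) * 1000

# A "6-pounder" cannon fires a solid iron ball weighing 6 pounds.
print(f"6-pounder iron ball: about {ball_diameter_mm(6, 7200):.0f} mm across")

# A 12-gauge shotgun bore fits a lead ball weighing 1/12 of a pound.
print(f"12-gauge lead ball: about {ball_diameter_mm(1 / 12, 11340):.1f} mm across")
```

The 12-gauge figure of roughly 18.5 mm agrees with the nominal bore of that gauge, which is a reasonable sanity check on the lead-ball definition.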
Physical sciences
Mass
null
23317
https://en.wikipedia.org/wiki/Proton
Proton
A proton is a stable subatomic particle, symbol , H+, or 1H+ with a positive electric charge of +1 e (elementary charge). Its mass is slightly less than the mass of a neutron and approximately times the mass of an electron (the proton-to-electron mass ratio). Protons and neutrons, each with a mass of approximately one atomic mass unit, are jointly referred to as nucleons (particles present in atomic nuclei). One or more protons are present in the nucleus of every atom. They provide the attractive electrostatic central force which binds the atomic electrons. The number of protons in the nucleus is the defining property of an element, and is referred to as the atomic number (represented by the symbol Z). Since each element is identified by the number of protons in its nucleus, each element has its own atomic number, which determines the number of atomic electrons and consequently the chemical characteristics of the element. The word proton is Greek for "first", and the name was given to the hydrogen nucleus by Ernest Rutherford in 1920. In previous years, Rutherford had discovered that the hydrogen nucleus (known to be the lightest nucleus) could be extracted from the nuclei of nitrogen by atomic collisions. Protons were therefore a candidate to be a fundamental or elementary particle, and hence a building block of nitrogen and all other heavier atomic nuclei. Although protons were originally considered to be elementary particles, in the modern Standard Model of particle physics, protons are known to be composite particles, containing three valence quarks, and together with neutrons are now classified as hadrons. Protons are composed of two up quarks of charge +e each, and one down quark of charge −e. The rest masses of quarks contribute only about 1% of a proton's mass. The remainder of a proton's mass is due to quantum chromodynamics binding energy, which includes the kinetic energy of the quarks and the energy of the gluon fields that bind the quarks together. The root mean square charge radius of a proton is about 0.84–0.87 fm ( = ). In 2019, two different studies, using different techniques, found this radius to be 0.833 fm, with an uncertainty of ±0.010 fm. Free protons occur occasionally on Earth: thunderstorms can produce protons with energies of up to several tens of MeV. At sufficiently low temperatures and kinetic energies, free protons will bind to electrons. However, the character of such bound protons does not change, and they remain protons. A fast proton moving through matter will slow by interactions with electrons and nuclei, until it is captured by the electron cloud of an atom. The result is a diatomic or polyatomic ion containing hydrogen. In a vacuum, when free electrons are present, a sufficiently slow proton may pick up a single free electron, becoming a neutral hydrogen atom, which is chemically a free radical. Such "free hydrogen atoms" tend to react chemically with many other types of atoms at sufficiently low energies. When free hydrogen atoms react with each other, they form neutral hydrogen molecules (H2), which are the most common molecular component of molecular clouds in interstellar space. Free protons are routinely used for accelerators for proton therapy or various particle physics experiments, with the most powerful example being the Large Hadron Collider. Description Protons are spin- fermions and are composed of three valence quarks, making them baryons (a sub-type of hadrons). 
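As a consistency check, with the Standard Model charge assignments of +2/3 e for the up quark and −1/3 e for the down quark, the three valence-quark charges add up to the proton's total charge of +1 e:

$$Q_p = 2\left(+\tfrac{2}{3}\,e\right) + \left(-\tfrac{1}{3}\,e\right) = +e .$$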
The two up quarks and one down quark of a proton are held together by the strong force, mediated by gluons. A modern perspective has a proton composed of the valence quarks (up, up, down), the gluons, and transitory pairs of sea quarks. Protons have a positive charge distribution, which decays approximately exponentially, with a root mean square charge radius of about 0.8 fm. Protons and neutrons are both nucleons, which may be bound together by the nuclear force to form atomic nuclei. The nucleus of the most common isotope of the hydrogen atom (with the chemical symbol "H") is a lone proton. The nuclei of the heavy hydrogen isotopes deuterium and tritium contain one proton bound to one and two neutrons, respectively. All other types of atomic nuclei are composed of two or more protons and various numbers of neutrons. History The concept of a hydrogen-like particle as a constituent of other atoms was developed over a long period. As early as 1815, William Prout proposed that all atoms are composed of hydrogen atoms (which he called "protyles"), based on a simplistic interpretation of early values of atomic weights (see Prout's hypothesis), which was disproved when more accurate values were measured. In 1886, Eugen Goldstein discovered canal rays (also known as anode rays) and showed that they were positively charged particles (ions) produced from gases. However, since particles from different gases had different values of charge-to-mass ratio (q/m), they could not be identified with a single particle, unlike the negative electrons discovered by J. J. Thomson. Wilhelm Wien in 1898 identified the hydrogen ion as the particle with the highest charge-to-mass ratio in ionized gases. Following the discovery of the atomic nucleus by Ernest Rutherford in 1911, Antonius van den Broek proposed that the place of each element in the periodic table (its atomic number) is equal to its nuclear charge. This was confirmed experimentally by Henry Moseley in 1913 using X-ray spectra (More details in Atomic number under Moseley's 1913 experiment). In 1917, Rutherford performed experiments (reported in 1919 and 1925) which proved that the hydrogen nucleus is present in other nuclei, a result usually described as the discovery of protons. These experiments began after Rutherford observed that when alpha particles would strike air, Rutherford could detect scintillation on a zinc sulfide screen produced at a distance well beyond the distance of alpha-particle range of travel but instead corresponding to the range of travel of hydrogen atoms (protons). After experimentation, Rutherford traced the reaction to the nitrogen in air and found that when alpha particles were introduced into pure nitrogen gas, the effect was larger. In 1919, Rutherford assumed that the alpha particle merely knocked a proton out of nitrogen, turning it into carbon. After observing Blackett's cloud chamber images in 1925, Rutherford realized that the alpha particle was absorbed. If the alpha particle were not absorbed, then it would knock a proton off of nitrogen creating 3 charged particles (a negatively charged carbon, a proton, and an alpha particle). It can be shown that the 3 charged particles would create three tracks in the cloud chamber, but instead only 2 tracks in the cloud chamber were observed. The alpha particle is absorbed by the nitrogen atom. 
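Written in modern notation, the transmutation described here and resolved in the next paragraph is usually rendered as follows; the notation is a standard one rather than an equation quoted from this text, with the oxygen-17 product being the one identified below:

$$ {}^{14}_{7}\mathrm{N} + {}^{4}_{2}\mathrm{He} \;\longrightarrow\; {}^{17}_{8}\mathrm{O} + {}^{1}_{1}\mathrm{H} $$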
After capture of the alpha particle, a hydrogen nucleus is ejected, creating a net result of 2 charged particles (a proton and a positively charged oxygen) which make 2 tracks in the cloud chamber. Heavy oxygen (17O), not carbon or fluorine, is the product. This was the first reported nuclear reaction, . Rutherford at first thought of our modern "p" in this equation as a hydrogen ion, . Depending on one's perspective, either 1919 (when it was seen experimentally as derived from another source than hydrogen) or 1920 (when it was recognized and proposed as an elementary particle) may be regarded as the moment when the proton was 'discovered'. Rutherford knew hydrogen to be the simplest and lightest element and was influenced by Prout's hypothesis that hydrogen was the building block of all elements. Discovery that the hydrogen nucleus is present in other nuclei as an elementary particle led Rutherford to give the hydrogen nucleus a special name as a particle, since he suspected that hydrogen, the lightest element, contained only one of these particles. He named this new fundamental building block of the nucleus the proton, after the neuter singular of the Greek word for "first", . However, Rutherford also had in mind the word protyle as used by Prout. Rutherford spoke at the British Association for the Advancement of Science at its Cardiff meeting beginning 24 August 1920. At the meeting, he was asked by Oliver Lodge for a new name for the positive hydrogen nucleus to avoid confusion with the neutral hydrogen atom. He initially suggested both proton and prouton (after Prout). Rutherford later reported that the meeting had accepted his suggestion that the hydrogen nucleus be named the "proton", following Prout's word "protyle". The first use of the word "proton" in the scientific literature appeared in 1920. Occurrence One or more bound protons are present in the nucleus of every atom. Free protons are found naturally in a number of situations in which energies or temperatures are high enough to separate them from electrons, for which they have some affinity. Free protons exist in plasmas in which temperatures are too high to allow them to combine with electrons. Free protons of high energy and velocity make up 90% of cosmic rays, which propagate through the interstellar medium. Free protons are emitted directly from atomic nuclei in some rare types of radioactive decay. Protons also result (along with electrons and antineutrinos) from the radioactive decay of free neutrons, which are unstable. Stability The spontaneous decay of free protons has never been observed, and protons are therefore considered stable particles according to the Standard Model. However, some grand unified theories (GUTs) of particle physics predict that proton decay should take place with lifetimes between 10³¹ and 10³⁶ years. Experimental searches have established lower bounds on the mean lifetime of a proton for various assumed decay products. Experiments at the Super-Kamiokande detector in Japan gave lower limits for proton mean lifetime of for decay to an antimuon and a neutral pion, and for decay to a positron and a neutral pion. Another experiment at the Sudbury Neutrino Observatory in Canada searched for gamma rays from residual nuclei resulting from the decay of a proton from oxygen-16. This experiment was designed to detect decay to any product, and established a lower limit to a proton lifetime of . 
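To see why water-based detectors can probe such enormous lifetimes, here is a rough order-of-magnitude sketch in Python. The 50,000-tonne water mass is an assumed figure of roughly Super-Kamiokande's scale, not a number taken from the text, and the concluding comment is only a back-of-the-envelope argument that ignores detection efficiency and backgrounds.

```python
AVOGADRO = 6.022e23
WATER_MOLAR_MASS_G = 18.0   # grams per mole of H2O
PROTONS_PER_WATER = 10      # 2 from hydrogen + 8 from oxygen

# Assumed detector size, roughly the scale of Super-Kamiokande.
water_mass_g = 50_000 * 1e6  # 50,000 tonnes expressed in grams

protons = water_mass_g / WATER_MOLAR_MASS_G * AVOGADRO * PROTONS_PER_WATER
print(f"protons monitored: about {protons:.1e}")  # about 1.7e34

# Watching ~1e34 protons for around a decade with no clear decay candidates
# is what makes lifetime limits of order 1e33 to 1e34 years attainable.
```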
However, protons are known to transform into neutrons through the process of electron capture (also called inverse beta decay). For free protons, this process does not occur spontaneously but only when energy is supplied. The equation is: + → + The process is reversible; neutrons can convert back to protons through beta decay, a common form of radioactive decay. In fact, a free neutron decays this way, with a mean lifetime of about 15 minutes. A proton can also transform into a neutron through beta plus decay (β+ decay). According to quantum field theory, the mean proper lifetime of protons becomes finite when they are accelerating with proper acceleration , and decreases with increasing . Acceleration gives rise to a non-vanishing probability for the transition . This was a matter of concern in the later 1990s because is a scalar that can be measured by the inertial and coaccelerated observers. In the inertial frame, the accelerating proton should decay according to the formula above. However, according to the coaccelerated observer the proton is at rest and hence should not decay. This puzzle is solved by realizing that in the coaccelerated frame there is a thermal bath due to Fulling–Davies–Unruh effect, an intrinsic effect of quantum field theory. In this thermal bath, experienced by the proton, there are electrons and antineutrinos with which the proton may interact according to the processes: , and . Adding the contributions of each of these processes, one should obtain . Quarks and the mass of a proton In quantum chromodynamics, the modern theory of the nuclear force, most of the mass of protons and neutrons is explained by special relativity. The mass of a proton is about 80–100 times greater than the sum of the rest masses of its three valence quarks, while the gluons have zero rest mass. The extra energy of the quarks and gluons in a proton, as compared to the rest energy of the quarks alone in the QCD vacuum, accounts for almost 99% of the proton's mass. The rest mass of a proton is, thus, the invariant mass of the system of moving quarks and gluons that make up the particle, and, in such systems, even the energy of massless particles confined to a system is still measured as part of the rest mass of the system. Two terms are used in referring to the mass of the quarks that make up protons: current quark mass refers to the mass of a quark by itself, while constituent quark mass refers to the current quark mass plus the mass of the gluon particle field surrounding the quark. These masses typically have very different values. The kinetic energy of the quarks that is a consequence of confinement is a contribution (see Mass in special relativity). Using lattice QCD calculations, the contributions to the mass of the proton are the quark condensate (~9%, comprising the up and down quarks and a sea of virtual strange quarks), the quark kinetic energy (~32%), the gluon kinetic energy (~37%), and the anomalous gluonic contribution (~23%, comprising contributions from condensates of all quark flavors). The constituent quark model wavefunction for the proton is The internal dynamics of protons are complicated, because they are determined by the quarks' exchanging gluons, and interacting with various vacuum condensates. Lattice QCD provides a way of calculating the mass of a proton directly from the theory to any accuracy, in principle. The most recent calculations claim that the mass is determined to better than 4% accuracy, even to 1% accuracy (see Figure S5 in Dürr et al.). 
These claims are still controversial, because the calculations cannot yet be done with quarks as light as they are in the real world. This means that the predictions are found by a process of extrapolation, which can introduce systematic errors. It is hard to tell whether these errors are controlled properly, because the quantities that are compared to experiment are the masses of the hadrons, which are known in advance. These recent calculations are performed by massive supercomputers, and, as noted by Boffi and Pasquini: "a detailed description of the nucleon structure is still missing because ... long-distance behavior requires a nonperturbative and/or numerical treatment ..." More conceptual approaches to the structure of protons are: the topological soliton approach originally due to Tony Skyrme and the more accurate AdS/QCD approach that extends it to include a string theory of gluons, various QCD-inspired models like the bag model and the constituent quark model, which were popular in the 1980s, and the SVZ sum rules, which allow for rough approximate mass calculations. These methods do not have the same accuracy as the more brute-force lattice QCD methods, at least not yet. Charge radius The CODATA recommended value of a proton's charge radius is The radius of the proton is defined by a formula that can be calculated from quantum electrodynamics and derived from either atomic spectroscopy or electron–proton scattering. The formula involves a form-factor related to the two-dimensional parton diameter of the proton. Values from before 2010 were based on scattering electrons from protons, followed by a complex calculation involving the scattering cross section (based on the Rosenbluth equation for the momentum-transfer cross section), and on studies of the atomic energy levels of hydrogen and deuterium. In 2010 an international research team published a proton charge radius measurement via the Lamb shift in muonic hydrogen (an exotic atom made of a proton and a negatively charged muon). As a muon is about 200 times heavier than an electron, resulting in a smaller atomic orbital, it is much more sensitive to the proton's charge radius and thus allows a more precise measurement. Subsequent improved scattering and electron-spectroscopy measurements agree with the new small radius. Work continues to refine and check this new value. Pressure inside the proton Since the proton is composed of quarks confined by gluons, an equivalent pressure that acts on the quarks can be defined. The size of that pressure and other details about it are controversial. In 2018 this pressure was reported to be on the order of 10³⁵ Pa, which is greater than the pressure inside a neutron star. It was said to be maximum at the centre, positive (repulsive) to a radial distance of about 0.6 fm, negative (attractive) at greater distances, and very weak beyond about 2 fm. These numbers were derived by a combination of a theoretical model and experimental Compton scattering of high-energy electrons. However, these results have been challenged as also being consistent with zero pressure and as effectively providing the pressure profile shape by selection of the model. Charge radius in solvated proton, hydronium The radius of the hydrated proton appears in the Born equation for calculating the hydration enthalpy of hydronium. 
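For reference, one common form of the Born expression referred to here is given below; it is a standard textbook formula for the electrostatic solvation free energy of a spherical ion of charge ze and effective radius r in a solvent of relative permittivity ε_r, not an equation quoted from this text:

$$ \Delta G_{\mathrm{solv}} \approx -\,\frac{N_A\, z^{2} e^{2}}{8 \pi \varepsilon_{0}\, r}\left(1 - \frac{1}{\varepsilon_{r}}\right) $$

The smaller the effective radius assigned to the hydrated proton (hydronium), the larger the magnitude of the predicted hydration energy.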
Interaction of free protons with ordinary matter Although protons have affinity for oppositely charged electrons, this is a relatively low-energy interaction and so free protons must lose sufficient velocity (and kinetic energy) in order to become closely associated and bound to electrons. High energy protons, in traversing ordinary matter, lose energy by collisions with atomic nuclei, and by ionization of atoms (removing electrons) until they are slowed sufficiently to be captured by the electron cloud in a normal atom. However, in such an association with an electron, the character of the bound proton is not changed, and it remains a proton. The attraction of low-energy free protons to any electrons present in normal matter (such as the electrons in normal atoms) causes free protons to stop and to form a new chemical bond with an atom. Such a bond happens at any sufficiently "cold" temperature (that is, comparable to temperatures at the surface of the Sun) and with any type of atom. Thus, in interaction with any type of normal (non-plasma) matter, low-velocity free protons do not remain free but are attracted to electrons in any atom or molecule with which they come into contact, causing the proton and molecule to combine. Such molecules are then said to be "protonated", and chemically they are simply compounds of hydrogen, often positively charged. Often, as a result, they become so-called Brønsted acids. For example, a proton captured by a water molecule in water becomes hydronium, the aqueous cation . Proton in chemistry Atomic number In chemistry, the number of protons in the nucleus of an atom is known as the atomic number, which determines the chemical element to which the atom belongs. For example, the atomic number of chlorine is 17; this means that each chlorine atom has 17 protons and that all atoms with 17 protons are chlorine atoms. The chemical properties of each atom are determined by the number of (negatively charged) electrons, which for neutral atoms is equal to the number of (positive) protons so that the total charge is zero. For example, a neutral chlorine atom has 17 protons and 17 electrons, whereas a Cl− anion has 17 protons and 18 electrons for a total charge of −1. All atoms of a given element are not necessarily identical, however. The number of neutrons may vary to form different isotopes, and energy levels may differ, resulting in different nuclear isomers. For example, there are two stable isotopes of chlorine: with 35 − 17 = 18 neutrons and with 37 − 17 = 20 neutrons. Hydrogen ion In chemistry, the term proton refers to the hydrogen ion, . Since the atomic number of hydrogen is 1, a hydrogen ion has no electrons and corresponds to a bare nucleus, consisting of a proton (and 0 neutrons for the most abundant isotope protium ). The proton is a "bare charge" with only about 1/64,000 of the radius of a hydrogen atom, and so is extremely reactive chemically. The free proton, thus, has an extremely short lifetime in chemical systems such as liquids and it reacts immediately with the electron cloud of any available molecule. In aqueous solution, it forms the hydronium ion, H3O+, which in turn is further solvated by water molecules in clusters such as [H5O2]+ and [H9O4]+. The transfer of in an acid–base reaction is usually referred to as "proton transfer". The acid is referred to as a proton donor and the base as a proton acceptor. Likewise, biochemical terms such as proton pump and proton channel refer to the movement of hydrated ions. 
The ion produced by removing the electron from a deuterium atom is known as a deuteron, not a proton. Likewise, removing an electron from a tritium atom produces a triton. Proton nuclear magnetic resonance (NMR) Also in chemistry, the term proton NMR refers to the observation of hydrogen-1 nuclei in (mostly organic) molecules by nuclear magnetic resonance. This method uses the quantized spin magnetic moment of the proton, which is due to its angular momentum (or spin), which in turn has a magnitude of one-half the reduced Planck constant. (). The name refers to examination of protons as they occur in protium (hydrogen-1 atoms) in compounds, and does not imply that free protons exist in the compound being studied. Human exposure The Apollo Lunar Surface Experiments Packages (ALSEP) determined that more than 95% of the particles in the solar wind are electrons and protons, in approximately equal numbers. Protons also have extrasolar origin from galactic cosmic rays, where they make up about 90% of the total particle flux. These protons often have higher energy than solar wind protons, and their intensity is far more uniform and less variable than protons coming from the Sun, the production of which is heavily affected by solar proton events such as coronal mass ejections. Research has been performed on the dose-rate effects of protons, as typically found in space travel, on human health. To be more specific, there are hopes to identify what specific chromosomes are damaged, and to define the damage, during cancer development from proton exposure. Another study looks into determining "the effects of exposure to proton irradiation on neurochemical and behavioral endpoints, including dopaminergic functioning, amphetamine-induced conditioned taste aversion learning, and spatial learning and memory as measured by the Morris water maze. Electrical charging of a spacecraft due to interplanetary proton bombardment has also been proposed for study. There are many more studies that pertain to space travel, including galactic cosmic rays and their possible health effects, and solar proton event exposure. The American Biostack and Soviet Biorack space travel experiments have demonstrated the severity of molecular damage induced by heavy ions on microorganisms including Artemia cysts. Antiproton CPT-symmetry puts strong constraints on the relative properties of particles and antiparticles and, therefore, is open to stringent tests. For example, the charges of a proton and antiproton must sum to exactly zero. This equality has been tested to one part in . The equality of their masses has also been tested to better than one part in . By holding antiprotons in a Penning trap, the equality of the charge-to-mass ratio of protons and antiprotons has been tested to one part in . The magnetic moment of antiprotons has been measured with an error of nuclear Bohr magnetons, and is found to be equal and opposite to that of a proton.
Physical sciences
Physics
null
23318
https://en.wikipedia.org/wiki/Phosphorus
Phosphorus
Phosphorus is a chemical element; it has symbol P and atomic number 15. Elemental phosphorus exists in two major forms, white phosphorus and red phosphorus, but because it is highly reactive, phosphorus is never found as a free element on Earth. It has a concentration in the Earth's crust of about 0.1%, less abundant than hydrogen but more than manganese. In minerals, phosphorus generally occurs as phosphate. Elemental phosphorus was first isolated as white phosphorus in 1669. In white phosphorus, phosphorus atoms are arranged in groups of 4, written as P4. White phosphorus emits a faint glow when exposed to oxygen hence, a name, taken from Greek mythology, meaning 'light-bearer' (Latin ), referring to the "Morning Star", the planet Venus. The term phosphorescence, meaning glow after illumination, has its origin in phosphorus, although phosphorus itself does not exhibit phosphorescence: phosphorus glows due to oxidation of the white (but not red) phosphorus a process now called chemiluminescence. Phosphorus is classified as a pnictogen, together with nitrogen, arsenic, antimony, bismuth, and moscovium. Phosphorus is an element essential to sustaining life largely through phosphates, compounds containing the phosphate ion, PO43−. Phosphates are a component of DNA, RNA, ATP, and phospholipids, complex compounds fundamental to cells. Elemental phosphorus was first isolated from human urine, and bone ash was an important early phosphate source. Phosphate mines contain fossils because phosphate is present in the fossilized deposits of animal remains and excreta. Low phosphate levels are an important limit to growth in a number of plant ecosystems. The vast majority of phosphorus compounds mined are consumed as fertilisers. Phosphate is needed to replace the phosphorus that plants remove from the soil, and its annual demand is rising nearly twice as fast as the growth of the human population. Other applications include organophosphorus compounds in detergents, pesticides, and nerve agents. Characteristics Allotropes Phosphorus has several allotropes that exhibit strikingly diverse properties. The two most common allotropes are white phosphorus and red phosphorus. For both pure and applied uses, the most important allotrope is white phosphorus, often abbreviated WP. White phosphorus is a soft, waxy molecular solid composed of tetrahedra. This tetrahedron is also present in liquid and gaseous phosphorus up to the temperature of when it starts decomposing to molecules. The nature of bonding in this tetrahedron can be described by spherical aromaticity or cluster bonding, that is the electrons are highly delocalized. This has been illustrated by calculations of the magnetically induced currents, which sum up to 29 nA/T, much more than in the archetypical aromatic molecule benzene (11 nA/T). The P4 molecule in the gas phase has a P-P bond length of rg = 2.1994(3) Å as was determined by gas electron diffraction. White phosphorus exists in two crystalline forms: α (alpha) and β (beta). At room temperature, the α-form is stable. It is more common, has cubic crystal structure and at , it transforms into β-form, which has hexagonal crystal structure. These forms differ in terms of the relative orientations of the constituent P4 tetrahedra. White phosphorus is the least stable, the most reactive, the most volatile, the least dense and the most toxic of the allotropes. White phosphorus gradually changes to red phosphorus, accelerated by light and heat. 
Samples of white phosphorus almost always contain some red phosphorus and accordingly appear yellow. For this reason, white phosphorus that is aged or otherwise impure (e.g., weapons-grade, not lab-grade WP) is also called yellow phosphorus. White phosphorus is highly flammable and pyrophoric (self-igniting) in air; it faintly glows green and blue in the dark when exposed to oxygen. The autoxidation commonly coats samples with white phosphorus pentoxide (): P4 tetrahedra, but with oxygen inserted between the phosphorus atoms and at the vertices. White phosphorus is a napalm additive, and the characteristic odour of combustion is garlicky. White phosphorus is insoluble in water but soluble in carbon disulfide. Thermal decomposition of P4 at 1100 K gives diphosphorus, P2. This species is not stable as a solid or liquid. The dimeric unit contains a triple bond and is analogous to N2. It can also be generated as a transient intermediate in solution by thermolysis of organophosphorus precursor reagents. At still higher temperatures, P2 dissociates into atomic P. Red phosphorus is polymeric in structure. It can be viewed as a derivative of P4 wherein one P-P bond is broken, and one additional bond is formed with the neighbouring tetrahedron resulting in chains of P21 molecules linked by van der Waals forces. Red phosphorus may be formed by heating white phosphorus to or by exposing white phosphorus to sunlight. Phosphorus after this treatment is amorphous. Upon further heating, this material crystallises. In this sense, red phosphorus is not an allotrope, but rather an intermediate phase between the white and violet phosphorus, and most of its properties have a range of values. For example, freshly prepared, bright red phosphorus is highly reactive and ignites at about , though it is more stable than white phosphorus, which ignites at about . After prolonged heating or storage, the color darkens (see infobox images); the resulting product is more stable and does not spontaneously ignite in air. Violet phosphorus is a form of phosphorus that can be produced by day-long annealing of red phosphorus above 550 °C. In 1865, Hittorf discovered that when phosphorus was recrystallised from molten lead, a red/purple form is obtained. Therefore, this form is sometimes known as "Hittorf's phosphorus" (or violet or α-metallic phosphorus). Black phosphorus is the least reactive allotrope and the thermodynamically stable form below . It is also known as β-metallic phosphorus and has a structure somewhat resembling that of graphite. It is obtained by heating white phosphorus under high pressures (about ). It can also be produced at ambient conditions using metal salts, e.g. mercury, as catalysts. In appearance, properties, and structure, it resembles graphite, being black and flaky, a conductor of electricity, and has puckered sheets of linked atoms. Another form, scarlet phosphorus, is obtained by allowing a solution of white phosphorus in carbon disulfide to evaporate in sunlight. Chemiluminescence When first isolated, it was observed that the green glow emanating from white phosphorus would persist for a time in a stoppered jar, but then cease. Robert Boyle in the 1680s ascribed it to "debilitation" of the air. In fact, this process is caused by the phosphorus reacting with oxygen in the air; in a sealed container, this process will eventually stop when all the oxygen in the container is consumed. 
By the 18th century, it was known that in pure oxygen, phosphorus does not glow at all; there is only a range of partial pressures at which it does. Heat can be applied to drive the reaction at higher pressures. In 1974, the glow was explained by R. J. van Zee and A. U. Khan. A reaction with oxygen takes place at the surface of the solid (or liquid) phosphorus, forming the short-lived molecules HPO and that both emit visible light. The reaction is slow and only very little of the intermediates are required to produce the luminescence, hence the extended time the glow continues in a stoppered jar. Since its discovery, phosphors and phosphorescence were used loosely to describe substances that shine in the dark without burning. Although the term phosphorescence is derived from phosphorus, the reaction that gives phosphorus its glow is properly called chemiluminescence (glowing due to a cold chemical reaction), not phosphorescence (re-emitting light that previously fell onto a substance and excited it). Isotopes There are 22 known isotopes of phosphorus, ranging from to . Only is stable and is therefore present at 100% abundance. The half-integer nuclear spin and high abundance of 31P make phosphorus-31 NMR spectroscopy a very useful analytical tool in studies of phosphorus-containing samples. Two radioactive isotopes of phosphorus have half-lives suitable for biological scientific experiments. These are: , a beta-emitter (1.71 MeV) with a half-life of 14.3 days, which is used routinely in life-science laboratories, primarily to produce radiolabeled DNA and RNA probes, e.g. for use in Northern blots or Southern blots. , a beta-emitter (0.25 MeV) with a half-life of 25.4 days. It is used in life-science laboratories in applications in which lower energy beta emissions are advantageous such as DNA sequencing. The high-energy beta particles from penetrate skin and corneas and any ingested, inhaled, or absorbed is readily incorporated into bone and nucleic acids. For these reasons, Occupational Safety and Health Administration in the United States, and similar institutions in other developed countries require personnel working with to wear lab coats, disposable gloves, and safety glasses or goggles to protect the eyes, and avoid working directly over open containers. Monitoring personal, clothing, and surface contamination is also required. Shielding requires special consideration. The high energy of the beta particles gives rise to secondary emission of X-rays via Bremsstrahlung (braking radiation) in dense shielding materials such as lead. Therefore, the radiation must be shielded with low density materials such as acrylic or other plastic, water, or (when transparency is not required), even wood. Occurrence Universe In 2013, astronomers detected phosphorus in Cassiopeia A, which confirmed that this element is produced in supernovae as a byproduct of supernova nucleosynthesis. The phosphorus-to-iron ratio in material from the supernova remnant could be up to 100 times higher than in the Milky Way in general. In 2020, astronomers analysed ALMA and ROSINA data from the massive star-forming region AFGL 5142, to detect phosphorus-bearing molecules and how they are carried in comets to the early Earth. Crust and organic sources Phosphorus has a concentration in the Earth's crust of about one gram per kilogram (compare copper at about 0.06 grams). It is not found free in nature, but is widely distributed in many minerals, usually as phosphates. 
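A brief aside on the two radioisotopes described earlier in this section: the minimal Python sketch below shows how much of a phosphorus-32 or phosphorus-33 label survives after a given time, using the half-lives quoted above and the usual decay law N(t) = N₀ · (1/2)^(t/T½).

```python
# Half-lives (in days) quoted above for the two biologically useful isotopes.
HALF_LIFE_DAYS = {"P-32": 14.3, "P-33": 25.4}

def fraction_remaining(isotope: str, days: float) -> float:
    """Fraction of the original activity left after the given number of days."""
    return 0.5 ** (days / HALF_LIFE_DAYS[isotope])

for iso in HALF_LIFE_DAYS:
    print(f"{iso}: {fraction_remaining(iso, 30):.1%} left after 30 days")
# P-32: roughly 23% remains; P-33: roughly 44% remains.
```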
Inorganic phosphate rock, which is partially made of apatite (a group of minerals being, generally, pentacalcium triorthophosphate fluoride (hydroxide)), is today the chief commercial source of this element. According to the US Geological Survey (USGS), about 50 percent of the global phosphorus reserves are in Amazigh nations like Morocco, Algeria and Tunisia. 85% of Earth's known reserves are in Morocco with smaller deposits in China, Russia, Florida, Idaho, Tennessee, Utah, and elsewhere. Albright and Wilson in the UK and their Niagara Falls plant, for instance, were using phosphate rock in the 1890s and 1900s from Tennessee, Florida, and the Îles du Connétable (guano island sources of phosphate); by 1950, they were using phosphate rock mainly from Tennessee and North Africa. Organic sources, namely urine, bone ash and (in the latter 19th century) guano, were historically of importance but had only limited commercial success. As urine contains phosphorus, it has fertilising qualities which are still harnessed today in some countries, including Sweden, using methods for reuse of excreta. To this end, urine can be used as a fertiliser in its pure form or part of being mixed with water in the form of sewage or sewage sludge. Compounds Phosphorus(V) The most prevalent compounds of phosphorus are derivatives of phosphate (PO43−), a tetrahedral anion. Phosphate is the conjugate base of phosphoric acid, which is produced on a massive scale for use in fertilisers. Being triprotic, phosphoric acid converts stepwise to three conjugate bases: H3PO4 + H2O H3O+ + H2PO4−       Ka1 = 7.25×10−3 H2PO4− + H2O H3O+ + HPO42−       Ka2 = 6.31×10−8 HPO42− + H2O H3O+ +  PO43−        Ka3 = 3.98×10−13 Phosphate exhibits a tendency to form chains and rings containing P-O-P bonds. Many polyphosphates are known, including ATP. Polyphosphates arise by dehydration of hydrogen phosphates such as HPO42− and H2PO4−. For example, the industrially important pentasodium triphosphate (also known as sodium tripolyphosphate, STPP) is produced industrially by the megatonne by this condensation reaction: 2 Na2HPO4 + NaH2PO4 → Na5P3O10 + 2 H2O Phosphorus pentoxide (P4O10) is the acid anhydride of phosphoric acid, but several intermediates between the two are known. This waxy white solid reacts vigorously with water. With metal cations, phosphate forms a variety of salts. These solids are polymeric, featuring P-O-M linkages. When the metal cation has a charge of 2+ or 3+, the salts are generally insoluble, hence they exist as common minerals. Many phosphate salts are derived from hydrogen phosphate (HPO42−). PCl5 and PF5 are common compounds. PF5 is a colourless gas and the molecules have trigonal bipyramidal geometry. PCl5 is a colourless solid which has an ionic formulation of PCl4+ PCl6−, but adopts the trigonal bipyramidal geometry when molten or in the vapour phase. PBr5 is an unstable solid formulated as PBr4+Br−and PI5 is not known. The pentachloride and pentafluoride are Lewis acids. With fluoride, PF5 forms PF6−, an anion that is isoelectronic with SF6. The most important oxyhalide is phosphorus oxychloride, (POCl3), which is approximately tetrahedral. Before extensive computer calculations were feasible, it was thought that bonding in phosphorus(V) compounds involved d orbitals. Computer modeling of molecular orbital theory indicates that this bonding involves only s- and p-orbitals. Phosphorus(III) All four symmetrical trihalides are well known: gaseous PF3, the yellowish liquids PCl3 and PBr3, and the solid PI3. 
These materials are moisture sensitive, hydrolysing to give phosphorous acid. The trichloride, a common reagent, is produced by chlorination of white phosphorus: P4 + 6 Cl2 → 4 PCl3 The trifluoride is produced from the trichloride by halide exchange. PF3 is toxic because it binds to haemoglobin. Phosphorus(III) oxide, P4O6 (also called tetraphosphorus hexoxide) is the anhydride of P(OH)3, the minor tautomer of phosphorous acid. The structure of P4O6 is like that of P4O10 without the terminal oxide groups. Symmetric phosphorus(III) trithioesters (e.g. P(SMe)3) can be produced from the reaction of white phosphorus and the corresponding disulfide, or phosphorus(III) halides and thiolates. Unlike the corresponding esters, they do not undergo a variant of the Michaelis-Arbuzov reaction with electrophiles, instead reverting to another phosphorus(III) compound through a sulfonium intermediate. Phosphorus(I) and phosphorus(II) These compounds generally feature P–P bonds. Examples include catenated derivatives of phosphine and organophosphines. Compounds containing P=P double bonds have also been observed, although they are rare. Phosphides and phosphines Phosphides arise by reaction of metals with red phosphorus. The alkali metals (group 1) and alkaline earth metals can form ionic compounds containing the phosphide ion, P3−. These compounds react with water to form phosphine. Other phosphides, for example Na3P7, are known for these reactive metals. With the transition metals as well as the monophosphides there are metal-rich phosphides, which are generally hard refractory compounds with a metallic lustre, and phosphorus-rich phosphides which are less stable and include semiconductors. Schreibersite is a naturally occurring metal-rich phosphide found in meteorites. The structures of the metal-rich and phosphorus-rich phosphides can be complex. Phosphine (PH3) and its organic derivatives (PR3) are structural analogues of ammonia (NH3), but the bond angles at phosphorus are closer to 90° for phosphine and its organic derivatives. Phosphine is an ill-smelling, toxic gas. Phosphorus has an oxidation number of −3 in phosphine. Phosphine is produced by hydrolysis of calcium phosphide, Ca3P2. Unlike ammonia, phosphine is oxidised by air. Phosphine is also far less basic than ammonia. Other phosphines are known which contain chains of up to nine phosphorus atoms and have the formula PnHn+2. The highly flammable gas diphosphine (P2H4) is an analogue of hydrazine. Oxoacids Phosphorus oxoacids are extensive, often commercially important, and sometimes structurally complicated. They all have acidic protons bound to oxygen atoms, some have nonacidic protons that are bonded directly to phosphorus and some contain phosphorus–phosphorus bonds. Although many oxoacids of phosphorus are formed, only nine are commercially important, and three of them, hypophosphorous acid, phosphorous acid, and phosphoric acid, are particularly important. Nitrides The PN molecule is considered unstable, but is a product of crystalline phosphorus nitride decomposition at 1100 K. Similarly, H2PN is considered unstable, and phosphorus nitride halogens like F2PN, Cl2PN, Br2PN, and I2PN oligomerise into cyclic polyphosphazenes. For example, compounds of the formula (PNCl2)n exist mainly as rings such as the trimer hexachlorophosphazene. 
The phosphazenes arise by treatment of phosphorus pentachloride with ammonium chloride:PCl5 + NH4Cl → 1/n (NPCl2)n + 4 HClWhen the chloride groups are replaced by alkoxide (RO−), a family of polymers is produced with potentially useful properties. Sulfides Phosphorus forms a wide range of sulfides, where the phosphorus can be in P(V), P(III) or other oxidation states. The three-fold symmetric P4S3 is used in strike-anywhere matches. P4S10 and P4O10 have analogous structures. Mixed oxyhalides and oxyhydrides of phosphorus(III) are almost unknown. Organophosphorus compounds Compounds with P-C and P-O-C bonds are often classified as organophosphorus compounds. They are widely used commercially. The PCl3 serves as a source of P3+ in routes to organophosphorus(III) compounds. For example, it is the precursor to triphenylphosphine: PCl3 + 6 Na + 3 C6H5Cl → P(C6H5)3 + 6 NaCl Treatment of phosphorus trihalides with alcohols and phenols gives phosphites, e.g. triphenylphosphite: PCl3 + 3 C6H5OH → P(OC6H5)3 + 3 HCl Similar reactions occur for phosphorus oxychloride, affording triphenylphosphate: OPCl3 + 3 C6H5OH → OP(OC6H5)3 + 3 HCl History Etymology The name Phosphorus in Ancient Greece was the name for the planet Venus and is derived from the Greek words (φῶς = light, φέρω = carry), which roughly translates as light-bringer or light carrier. (In Greek mythology and tradition, Augerinus (Αυγερινός = morning star, still in use today), Hesperus or Hesperinus (΄Εσπερος or Εσπερινός or Αποσπερίτης = evening star, still in use today) and Eosphorus (Εωσφόρος = dawnbearer, not in use for the planet after Christianity) are close homologues, and also associated with Phosphorus-the-morning-star). According to the Oxford English Dictionary, the correct spelling of the element is phosphorus. The word phosphorous is the adjectival form of the P3+ valence: so, just as sulfur forms sulfurous and sulfuric compounds, phosphorus forms phosphorous compounds (e.g., phosphorous acid) and P5+ valence phosphoric compounds (e.g., phosphoric acids and phosphates). Discovery The discovery of phosphorus, the first element to be discovered that was not known since ancient times, is credited to the German alchemist Hennig Brand in 1669, although others might have discovered phosphorus around the same time. Brand experimented with urine, which contains considerable quantities of dissolved phosphates from normal metabolism. Working in Hamburg, Brand attempted to create the fabled philosopher's stone through the distillation of some salts by evaporating urine, and in the process produced a white material that glowed in the dark and burned brilliantly. It was named phosphorus mirabilis ("miraculous bearer of light"). Brand's process originally involved letting urine stand for days until it gave off a terrible stench. Then he boiled it down to a paste, heated this paste to a high temperature, and led the vapours through water, where he hoped they would condense to gold. Instead, he obtained a white, waxy substance that glowed in the dark. Brand had discovered phosphorus. Specifically, Brand produced ammonium sodium hydrogen phosphate, . While the quantities were essentially correct (it took about of urine to make about 60 g of phosphorus), it was unnecessary to allow the urine to rot first. Later scientists discovered that fresh urine yielded the same amount of phosphorus. Brand at first tried to keep the method secret, but later sold the recipe for 200 thalers to Johann Daniel Kraft (de) from Dresden. 
Kraft toured much of Europe with it, including England, where he met with Robert Boyle. The secret—that the substance was made from urine—leaked out, and Johann Kunckel (1630–1703) was able to reproduce it in Sweden (1678). Later, Boyle in London (1680) also managed to make phosphorus, possibly with the aid of his assistant, Ambrose Godfrey-Hanckwitz. Godfrey later made a business of the manufacture of phosphorus. Boyle states that Kraft gave him no information as to the preparation of phosphorus other than that it was derived from "somewhat that belonged to the body of man". This gave Boyle a valuable clue, so that he, too, managed to make phosphorus, and published the method of its manufacture. Later he improved Brand's process by using sand in the reaction (still using urine as base material), 4 + 2 + 10 C → 2 + 10 CO + Robert Boyle was the first to use phosphorus to ignite sulfur-tipped wooden splints, forerunners of modern matches, in 1680. Phosphorus was the 13th element to be discovered. Because of its tendency to spontaneously combust when left alone in air, it is sometimes referred to as "the Devil's element". Bone ash and guano Antoine Lavoisier recognized phosphorus as an element in 1777 after Johan Gottlieb Gahn and Carl Wilhelm Scheele, in 1769, showed that calcium phosphate () is found in bones by obtaining elemental phosphorus from bone ash. Bone ash was the major source of phosphorus until the 1840s. The method started by roasting bones, then employed the use of fire clay retorts encased in a very hot brick furnace to distill out the highly toxic elemental phosphorus product. Alternately, precipitated phosphates could be made from ground-up bones that had been de-greased and treated with strong acids. White phosphorus could then be made by heating the precipitated phosphates, mixed with ground coal or charcoal in an iron pot, and distilling off phosphorus vapour in a retort. Carbon monoxide and other flammable gases produced during the reduction process were burnt off in a flare stack. In the 1840s, world phosphate production turned to the mining of tropical island deposits formed from bird and bat guano (see also Guano Islands Act). These became an important source of phosphates for fertiliser in the latter half of the 19th century. Phosphate rock Phosphate rock, which usually contains calcium phosphate, was first used in 1850 to make phosphorus, and following the introduction of the electric arc furnace by James Burgess Readman in 1888 (patented 1889), elemental phosphorus production switched from the bone-ash heating, to electric arc production from phosphate rock. After the depletion of world guano sources about the same time, mineral phosphates became the major source of phosphate fertiliser production. Phosphate rock production greatly increased after World War II, and remains the primary global source of phosphorus and phosphorus chemicals today. Phosphate rock remains a feedstock in the fertiliser industry, where it is treated with sulfuric acid to produce various "superphosphate" fertiliser products. Incendiaries White phosphorus was first made commercially in the 19th century for the match industry. This used bone ash for a phosphate source, as described above. The bone-ash process became obsolete when the submerged-arc furnace for phosphorus production was introduced to reduce phosphate rock. The electric furnace method allowed production to increase to the point where phosphorus could be used in weapons of war. 
In World War I, it was used in incendiaries, smoke screens and tracer bullets. A special incendiary bullet was developed to shoot at hydrogen-filled Zeppelins over Britain (hydrogen being highly flammable). During World War II, Molotov cocktails made of phosphorus dissolved in petrol were distributed in Britain to specially selected civilians within the British resistance operation, for defence; and phosphorus incendiary bombs were used in war on a large scale. Burning phosphorus is difficult to extinguish and if it splashes onto human skin it has horrific effects. Early matches used white phosphorus in their composition, which was dangerous due to its toxicity. Murders, suicides and accidental poisonings resulted from its use. (An apocryphal tale tells of a woman attempting to murder her husband with white phosphorus in his food, which was detected by the stew's giving off luminous steam). In addition, exposure to the vapours gave match workers a severe necrosis of the bones of the jaw, known as "phossy jaw". When a safe process for manufacturing red phosphorus was discovered, with its far lower flammability and toxicity, laws were enacted, under the Berne Convention (1906), requiring its adoption as a safer alternative for match manufacture. The toxicity of white phosphorus led to discontinuation of its use in matches. The Allies used phosphorus incendiary bombs in World War II to destroy Hamburg, the place where the "miraculous bearer of light" was first discovered. Production In 2017, the USGS estimated 68 billion tons of world reserves, where reserve figures refer to the amount assumed recoverable at current market prices; 0.261 billion tons were mined in 2016. Critical to contemporary agriculture, its annual demand is rising nearly twice as fast as the growth of the human population. The production of phosphorus may have peaked before 2011 and some scientists predict reserves will be depleted before the end of the 21st century. Phosphorus comprises about 0.1% by mass of the average rock, and consequently, the Earth's supply is vast, though dilute. Wet process Most phosphorus-bearing material is for agriculture fertilisers. In this case where the standards of purity are modest, phosphorus is obtained from phosphate rock by what is called the "wet process." The minerals are treated with sulfuric acid to give phosphoric acid. Phosphoric acid is then neutralized to give various phosphate salts, which comprise fertilizers. In the wet process, phosphorus does not undergo redox. About five tons of phosphogypsum waste are generated per ton of phosphoric acid production. Annually, the estimated generation of phosphogypsum worldwide is 100 to 280 Mt. Thermal process For the use of phosphorus in drugs, detergents, and foodstuff, the standards of purity are high, which led to the development of the thermal process. In this process, phosphate minerals are converted to white phosphorus, which can be purified by distillation. The white phosphorus is then oxidised to phosphoric acid and subsequently neutralised with a base to give phosphate salts. The thermal process is conducted in a submerged-arc furnace which is energy intensive. Presently, about of elemental phosphorus is produced annually. Calcium phosphate (as phosphate rock), mostly mined in Florida and North Africa, can be heated to 1,200–1,500 °C with sand, which is mostly , and coke to produce . 
The product, being volatile, is readily isolated: 2 Ca3(PO4)2 + 6 SiO2 + 10 C → 6 CaSiO3 + P4 + 10 CO. Side products from the thermal process include ferrophosphorus, a crude form of Fe2P, resulting from iron impurities in the mineral precursors. The silicate slag is a useful construction material. The fluoride is sometimes recovered for use in water fluoridation. More problematic is a "mud" containing significant amounts of white phosphorus. Production of white phosphorus is conducted in large facilities in part because it is energy intensive. The white phosphorus is transported in molten form. Some major accidents have occurred during transportation. Historical routes Historically, before the development of mineral-based extractions, white phosphorus was isolated on an industrial scale from bone ash. In this process, the tricalcium phosphate in bone ash is converted to monocalcium phosphate with sulfuric acid: Ca3(PO4)2 + 2 H2SO4 → Ca(H2PO4)2 + 2 CaSO4. Monocalcium phosphate is then dehydrated to the corresponding metaphosphate: Ca(H2PO4)2 → Ca(PO3)2 + 2 H2O. When ignited to a white heat (~1300 °C) with charcoal, calcium metaphosphate yields two-thirds of its phosphorus content as white phosphorus, while one-third of the phosphorus remains in the residue as calcium orthophosphate: 3 Ca(PO3)2 + 10 C → Ca3(PO4)2 + P4 + 10 CO. Peak phosphorus Peak phosphorus is a concept to describe the point in time when humanity reaches the maximum global production rate of phosphorus as an industrial and commercial raw material. The term is used in an equivalent way to the better-known term peak oil. The issue was raised as a debate on whether phosphorus shortages might be imminent around 2010, which was largely dismissed after USGS and other organizations increased world estimates on available phosphorus resources, mostly in the form of additional resources in Morocco. However, exact reserve quantities remain uncertain, as do the possible impacts of increased phosphate use on future generations. This is important because rock phosphate is a key ingredient in many inorganic fertilizers. Hence, a shortage in rock phosphate (or just significant price increases) might negatively affect the world's food security. Phosphorus is a finite (limited) resource that is widespread in the Earth's crust and in living organisms but is relatively scarce in concentrated forms, which are not evenly distributed across the Earth. The only cost-effective production method to date is the mining of phosphate rock, but only a few countries have significant commercial reserves. The top five are Morocco (including reserves located in Western Sahara), China, Egypt, Algeria and Syria. Estimates for future production vary significantly depending on modelling and assumptions on extractable volumes, but it is inescapable that future production of phosphate rock will be heavily influenced by Morocco in the foreseeable future. Means of commercial phosphorus production besides mining are few because the phosphorus cycle does not include significant gas-phase transport. The predominant source of phosphorus in modern times is phosphate rock (as opposed to the guano that preceded it). According to some researchers, Earth's commercial and affordable phosphorus reserves are expected to be depleted in 50–100 years and peak phosphorus to be reached in approximately 2030. Others suggest that supplies will last for several hundreds of years. As with the timing of peak oil, the question is not settled, and researchers in different fields regularly publish different estimates of the rock phosphate reserves. Background The peak phosphorus concept is connected with the concept of planetary boundaries.
Phosphorus, as part of biogeochemical processes, belongs to one of the nine "Earth system processes" which are known to have boundaries. As long as the boundaries are not crossed, they mark the "safe zone" for the planet. Estimates of world phosphate reserves The accurate determination of peak phosphorus is dependent on knowing the total world's commercial phosphate reserves and resources, especially in the form of phosphate rock (a summarizing term for over 300 ores of different origin, composition, and phosphate content). "Reserves" refers to the amount assumed recoverable at current market prices and "resources" refers to estimated amounts of such a grade or quality that they have reasonable prospects for economic extraction. Unprocessed phosphate rock has a concentration of 1.7–8.7% phosphorus by mass (4–20% phosphorus pentoxide). By comparison, the Earth's crust contains 0.1% phosphorus by mass, and vegetation 0.03–0.2%. Although quadrillions of tons of phosphorus exist in the Earth's crust, these are currently not economically extractable. In 2023, the United States Geological Survey (USGS) estimated that economically extractable phosphate rock reserves worldwide are 72 billion tons, while world mining production in 2022 was 220 million tons. Assuming zero growth, the reserves would thus last for around 300 years. This broadly confirms a 2010 International Fertilizer Development Center (IFDC) report that global reserves would last for several hundred years. Phosphorus reserve figures are intensely debated. Gilbert suggest that there has been little external verification of the estimate. A 2014 review concluded that the IFDC report "presents an inflated picture of global reserves, in particular those of Morocco, where largely hypothetical and inferred resources have simply been relabeled “reserves". The countries with most phosphate rock commercial reserves (in billion metric tons): Morocco 50, China 3.2, Egypt 2.8, Algeria 2.2, Syria 1.8, Brazil 1.6, Saudi Arabia 1.4, South Africa 1.4, Australia 1.1, United States 1.0, Finland 1.0, Russia 0.6, Jordan 0.8. Rock phosphate shortages (or just significant price increases) might negatively affect the world's food security. Many agricultural systems depend on supplies of inorganic fertilizer, which use rock phosphate. Under the food production regime in developed countries, shortages of rock phosphate could lead to shortages of inorganic fertilizer, which could in turn reduce the global food production. Economists have pointed out that price fluctuations of rock phosphate do not necessarily indicate peak phosphorus, as these have already occurred due to various demand- and supply-side factors. United States US production of phosphate rock peaked in 1980 at 54.4 million metric tons. The United States was the world's largest producer of phosphate rock from at least 1900, up until 2006, when US production was exceeded by that of China. In 2019, the US produced 10 percent of the world's phosphate rock. Exhaustion of guano reserves In 1609 Garcilaso de la Vega wrote the book Comentarios Reales in which he described many of the agricultural practices of the Incas prior to the arrival of the Spaniards and introduced the use of guano as a fertilizer. As Garcilaso described, the Incas near the coast harvested guano. In the early 1800s Alexander von Humboldt introduced guano as a source of agricultural fertilizer to Europe after having discovered it on islands off the coast of South America. 
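As a back-of-the-envelope check of the "around 300 years" figure quoted earlier in this section, the USGS numbers can be read as a static reserves-to-production ratio. This is only a sketch: it assumes zero growth in extraction and no future revision of the reserve estimate.

\[ \frac{R}{P} \approx \frac{72 \times 10^{9}\ \text{t}}{0.22 \times 10^{9}\ \text{t/yr}} \approx 330\ \text{years} \]

Any sustained growth in annual mining shortens this horizon correspondingly, which is one reason the peak-phosphorus timing remains disputed.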
It has been reported that, at the time of its discovery, the guano on some islands was over deep. The guano had previously been used by the Moche people as a source of fertilizer by mining it and transporting it back to Peru by boat. International commerce in guano did not start until after 1840. By the start of the 20th century guano had been nearly completely depleted and was eventually overtaken with the discovery of methods of production of superphosphate. Phosphorus conservation and recycling Overview Phosphorus can be transferred from the soil in one location to another as food is transported across the world, taking the phosphorus it contains with it. Once consumed by humans, it can end up in the local environment (in the case of open defecation which is still widespread on a global scale) or in rivers or the ocean via sewage systems and sewage treatment plants in the case of cities connected to sewer systems. An example of one crop that takes up large amounts of phosphorus is soy. In an effort to postpone the onset of peak phosphorus several methods of reducing and reusing phosphorus are in practice, such as in agriculture and in sanitation systems. The Soil Association, the UK organic agriculture certification and pressure group, issued a report in 2010 "A Rock and a Hard Place" encouraging more recycling of phosphorus. One potential solution to the shortage of phosphorus is greater recycling of human and animal wastes back into the environment. Agricultural practices Reducing agricultural runoff and soil erosion can slow the frequency with which farmers have to reapply phosphorus to their fields. Agricultural methods such as no-till farming, terracing, contour tilling, and the use of windbreaks have been shown to reduce the rate of phosphorus depletion from farmland. These methods are still dependent on a periodic application of phosphate rock to the soil and as such methods to recycle the lost phosphorus have also been proposed. Perennial vegetation, such as grassland or forest, is much more efficient in its use of phosphate than arable land. Strips of grassland and/or forest between arable land and rivers can greatly reduce losses of phosphate and other nutrients. Integrated farming systems which use animal sources to supply phosphorus for crops do exist at smaller scales, and application of the system to a larger scale is a potential alternative for supplying the nutrient, although it would require significant changes to the widely adopted modern crop fertilizing methods. Excreta reuse The oldest method of recycling phosphorus is through the reuse of animal manure and human excreta in agriculture. Via this method, phosphorus in the foods consumed are excreted, and the animal or human excreta are subsequently collected and re-applied to the fields. Although this method has maintained civilizations for centuries the current system of manure management is not logistically geared towards application to crop fields on a large scale. At present, manure application could not meet the phosphorus needs of large scale agriculture. Despite that, it is still an efficient method of recycling used phosphorus and returning it to the soil. There are concerns with pathogens in manure and human excreta, but those pathogens can be eliminated via suitable treatment. However, especially in the Global South these processes are not always followed, leading to outbreaks of diseases transmitted via the fecal–oral route such as cholera. 
Sewage sludge Sewage treatment plants that have an enhanced biological phosphorus removal step produce a sewage sludge that is rich in phosphorus. Various processes have been developed to extract phosphorus from sewage sludge directly, from the ash after incineration of the sewage sludge or from other products of sewage sludge treatment. This includes the extraction of phosphorus-rich materials such as struvite from waste processing plants. The struvite can be made by adding magnesium to the waste. Some companies such as Ostara in Canada and NuReSys in Belgium are already using this technique to recover phosphate. Research on phosphorus recovery methods from sewage sludge has been carried out in Sweden and Germany since around 2003, but the technologies currently under development are not yet cost effective, given the current price of phosphorus on the world market. Neutron transmutation doping The above routes refer to "production" in the chemical sense, i.e. extracting a desired element or compound from a source without changing the atoms themselves. However, there is a process which produces phosphorus in a nuclear sense, in that atoms of another element are turned into phosphorus. While the amount of phosphorus produced this way is minuscule, it is nonetheless a crucial process in semiconductor production. Neutron transmutation doping (NTD) is an unusual doping method for special applications. Most commonly, it is used to dope silicon n-type in high-power electronics and semiconductor detectors. It is based on the conversion of the 30Si isotope into phosphorus atoms by neutron absorption and beta decay as follows: 30Si + n → 31Si → 31P + β−, where the intermediate 31Si beta-decays with a half-life of about 2.6 hours. In practice, the silicon is typically placed near or inside a nuclear reactor (most commonly a research reactor, e.g. the one at MIT) to receive the neutrons. As neutrons continue to pass through the silicon, more and more phosphorus atoms are produced by transmutation, and therefore the doping becomes more and more strongly n-type. NTD is a far less common doping method than diffusion or ion implantation, but it has the advantage of creating an extremely uniform dopant distribution. Applications Flame retardant Phosphorus compounds are used as flame retardants. Flame-retardant materials and coatings are being developed that are both phosphorus- and bio-based. Food additive Phosphorus is an essential mineral for humans listed in the Dietary Reference Intake (DRI). Food-grade phosphoric acid (additive E338) is used to acidify foods and beverages such as various colas and jams, providing a tangy or sour taste. The phosphoric acid also serves as a preservative. Soft drinks containing phosphoric acid, including Coca-Cola, are sometimes called phosphate sodas or phosphates. Phosphoric acid in soft drinks has the potential to cause dental erosion. Phosphoric acid also has the potential to contribute to the formation of kidney stones, especially in those who have had kidney stones previously. Fertiliser Phosphorus is an essential plant nutrient (the most often limiting nutrient, after nitrogen), and the bulk of all phosphorus production is in concentrated phosphoric acids for agriculture fertilisers, containing as much as 70% to 75% P2O5. That led to a large increase in phosphate (PO43−) production in the second half of the 20th century.
Artificial phosphate fertilisation is necessary because phosphorus is essential to all living organisms; it is involved in energy transfers, strength of root and stems, photosynthesis, the expansion of plant roots, formation of seeds and flowers, and other important factors effecting overall plant health and genetics. Heavy use of phosphorus fertilizers and their runoff have resulted in eutrophication (overenrichment) of aquatic ecosystems. Natural phosphorus-bearing compounds are mostly inaccessible to plants because of the low solubility and mobility in soil. Most phosphorus is very stable in the soil minerals or organic matter of the soil. Even when phosphorus is added in manure or fertilizer it can become fixed in the soil. Therefore, the natural phosphorus cycle is very slow. Some of the fixed phosphorus is released again over time, sustaining wild plant growth, however, more is needed to sustain intensive cultivation of crops. Fertiliser is often in the form of superphosphate of lime, a mixture of calcium dihydrogen phosphate (Ca(H2PO4)2), and calcium sulfate dihydrate (CaSO4·2H2O) produced reacting sulfuric acid and water with calcium phosphate. Processing phosphate minerals with sulfuric acid for obtaining fertiliser is so important to the global economy that this is the primary industrial market for sulfuric acid and the greatest industrial use of elemental sulfur. Organophosphorus White phosphorus is widely used to make organophosphorus compounds through intermediate phosphorus chlorides and two phosphorus sulfides, phosphorus pentasulfide and phosphorus sesquisulfide. Organophosphorus compounds have many applications, including in plasticisers, flame retardants, pesticides, extraction agents, nerve agents and water treatment. Metallurgical aspects Phosphorus is also an important component in steel production, in the making of phosphor bronze, and in many other related products. Phosphorus is added to metallic copper during its smelting process to react with oxygen present as an impurity in copper and to produce phosphorus-containing copper (CuOFP) alloys with a higher hydrogen embrittlement resistance than normal copper. Phosphate conversion coating is a chemical treatment applied to steel parts to improve their corrosion resistance. Matches The first striking match with a phosphorus head was invented by Charles Sauria in 1830. These matches (and subsequent modifications) were made with heads of white phosphorus, an oxygen-releasing compound (potassium chlorate, lead dioxide, or sometimes nitrate), and a binder. They were poisonous to the workers in manufacture, sensitive to storage conditions, toxic if ingested, and hazardous when accidentally ignited on a rough surface. Production in several countries was banned between 1872 and 1925. The international Berne Convention, ratified in 1906, prohibited the use of white phosphorus in matches. In consequence, phosphorous matches were gradually replaced by safer alternatives. Around 1900 French chemists Henri Sévène and Emile David Cahen invented the modern strike-anywhere match, wherein the white phosphorus was replaced by phosphorus sesquisulfide (P4S3), a non-toxic and non-pyrophoric compound that ignites under friction. For a time these safer strike-anywhere matches were quite popular but in the long run they were superseded by the modern safety match. Safety matches are very difficult to ignite on any surface other than a special striker strip. 
The strip contains non-toxic red phosphorus and the match head potassium chlorate, an oxygen-releasing compound. When struck, small amounts of abrasion from match head and striker strip are mixed intimately to make a small quantity of Armstrong's mixture, a very touch sensitive composition. The fine powder ignites immediately and provides the initial spark to set off the match head. Safety matches separate the two components of the ignition mixture until the match is struck. This is the key safety advantage as it prevents accidental ignition. Nonetheless, safety matches, invented in 1844 by Gustaf Erik Pasch and market ready by the 1860s, did not gain consumer acceptance until the prohibition of white phosphorus. Using a dedicated striker strip was considered clumsy. Water softening Sodium tripolyphosphate made from phosphoric acid is used in laundry detergents in some countries, but banned for this use in others. This compound softens the water to enhance the performance of the detergents and to prevent pipe/boiler tube corrosion. Miscellaneous Phosphates are used to make special glasses for sodium lamps. Bone-ash (mostly calcium phosphate) is used in the production of fine china. Phosphoric acid made from elemental phosphorus is used in food applications such as soft drinks, and as a starting point for food grade phosphates. These include monocalcium phosphate for baking powder and sodium tripolyphosphate. Phosphates are used to improve the characteristics of processed meat and cheese, and in toothpaste. White phosphorus, called "WP" (slang term "Willie Peter") is used in military applications as incendiary bombs, for smoke-screening as smoke pots and smoke bombs, and in tracer ammunition. It is also a part of an obsolete M34 White Phosphorus US hand grenade. This multipurpose grenade was mostly used for signaling, smoke screens, and inflammation; it could also cause severe burns and had a psychological impact on the enemy. Military uses of white phosphorus are constrained by international law. 32P and 33P are used as radioactive tracers in biochemical laboratories. Phosphorus is a dopant in n-type semiconductors Biological role Inorganic phosphorus in the form of the phosphate is required for all known forms of life. Phosphorus plays a major role in the structural framework of DNA and RNA. Living cells use phosphate to transport cellular energy with adenosine triphosphate (ATP), necessary for every cellular process that uses energy. ATP is also important for phosphorylation, a key regulatory event in cells. Phospholipids are the main structural components of all cellular membranes. Calcium phosphate salts assist in stiffening bones. Biochemists commonly use the abbreviation "P" to refer to inorganic phosphate. Every living cell is encased in a membrane that separates it from its surroundings. Cellular membranes are composed of a phospholipid matrix and proteins, typically in the form of a bilayer. Phospholipids are derived from glycerol with two of the glycerol hydroxyl (OH) protons replaced by fatty acids as an ester, and the third hydroxyl proton has been replaced with phosphate bonded to another alcohol. An average adult human contains about of phosphorus, about 85–90% in bones and teeth in the form of apatite, and the remainder in soft tissues and extracellular fluids. The phosphorus content increases from about 0.5% by mass in infancy to 0.65–1.1% by mass in adults. 
Average phosphorus concentration in the blood is about 0.4 g/L; about 70% of that is organic and 30% inorganic phosphates. An adult with a healthy diet consumes and excretes about 1–3 grams of phosphorus per day, with consumption in the form of inorganic phosphate and phosphorus-containing biomolecules such as nucleic acids and phospholipids; and excretion almost exclusively in the form of phosphate ions such as H2PO4− and HPO42−. Only about 0.1% of body phosphate circulates in the blood, paralleling the amount of phosphate available to soft tissue cells. Bone and teeth enamel The main component of bone is hydroxyapatite as well as amorphous forms of calcium phosphate, possibly including carbonate. Hydroxyapatite is the main component of tooth enamel. Water fluoridation enhances the resistance of teeth to decay by the partial conversion of this mineral to the still harder material fluorapatite: Ca5(PO4)3OH + F− → Ca5(PO4)3F + OH−. Phosphorus deficiency In medicine, phosphate deficiency syndrome may be caused by malnutrition, by failure to absorb phosphate, and by metabolic syndromes that draw phosphate from the blood (such as in refeeding syndrome after malnutrition) or pass too much of it into the urine. All are characterised by hypophosphatemia, which is a condition of low levels of soluble phosphate in the blood serum and inside the cells. Symptoms of hypophosphatemia include neurological dysfunction and disruption of muscle and blood cells due to lack of ATP. Too much phosphate can lead to diarrhoea and calcification (hardening) of organs and soft tissue, and can interfere with the body's ability to use iron, calcium, magnesium, and zinc. Phosphorus is an essential macromineral for plants, which is studied extensively in edaphology to understand plant uptake from soil systems. Phosphorus is a limiting factor in many ecosystems; that is, the scarcity of phosphorus limits the rate of organism growth. An excess of phosphorus can also be problematic, especially in aquatic systems where eutrophication sometimes leads to algal blooms. Nutrition Dietary recommendations The U.S. Institute of Medicine (IOM) updated Estimated Average Requirements (EARs) and Recommended Dietary Allowances (RDAs) for phosphorus in 1997. If there is not sufficient information to establish EARs and RDAs, an estimate designated Adequate Intake (AI) is used instead. The current EAR for phosphorus for people ages 19 and up is 580 mg/day. The RDA is 700 mg/day. RDAs are higher than EARs so as to identify amounts that will cover people with higher-than-average requirements. The RDA for pregnancy and lactation is also 700 mg/day. For people ages 1–18 years, the RDA increases with age from 460 to 1250 mg/day. As for safety, the IOM sets tolerable upper intake levels (ULs) for vitamins and minerals when evidence is sufficient. In the case of phosphorus, the UL is 4000 mg/day. Collectively the EARs, RDAs, AIs and ULs are referred to as Dietary Reference Intakes (DRIs). The European Food Safety Authority (EFSA) refers to the collective set of information as Dietary Reference Values, with Population Reference Intake (PRI) instead of RDA, and Average Requirement instead of EAR. AI and UL are defined the same as in the United States. For people ages 15 and older, including pregnancy and lactation, the AI is set at 550 mg/day. For children ages 4–10, the AI is 440 mg/day, and for ages 11–17 it is 640 mg/day. These AIs are lower than the U.S. RDAs. In both systems, teenagers need more than adults.
EFSA reviewed the same safety question and decided that there was not sufficient information to set a UL. For U.S. food and dietary supplement labeling purposes, the amount in a serving is expressed as a percent of Daily Value (%DV). For phosphorus labeling purposes, 100% of the Daily Value was 1000 mg, but as of May 27, 2016, it was revised to 1250 mg to bring it into agreement with the RDA. A table of the old and new adult daily values is provided at Reference Daily Intake. Food sources The main food sources for phosphorus are the same as those containing protein, although proteins do not contain phosphorus. For example, milk, meat, and soya typically also have phosphorus. As a rule, if a diet has sufficient protein and calcium, the amount of phosphorus is probably sufficient. Precautions Organic compounds of phosphorus form a broad class of materials; many are required for life, but some are highly toxic. Fluorophosphate esters are among the most potent neurotoxins known. A wide range of organophosphorus compounds are used for their toxicity as pesticides (herbicides, insecticides, fungicides, etc.) and weaponised as nerve agents against enemy humans. Most inorganic phosphates are relatively nontoxic and essential nutrients. The white phosphorus allotrope presents a significant hazard because it ignites in the air and produces phosphoric acid residue. Chronic white phosphorus poisoning leads to necrosis of the jaw called "phossy jaw". White phosphorus is toxic, causing severe liver damage on ingestion and may cause a condition known as "Smoking Stool Syndrome". In the past, external exposure to elemental phosphorus was treated by washing the affected area with 2% copper(II) sulfate solution to form harmless compounds that are then washed away. According to the recent US Navy's Treatment of Chemical Agent Casualties and Conventional Military Chemical Injuries: FM8-285: Part 2 Conventional Military Chemical Injuries, "Cupric (copper(II)) sulfate has been used by U.S. personnel in the past and is still being used by some nations. However, copper sulfate is toxic and its use will be discontinued. Copper sulfate may produce kidney and cerebral toxicity as well as intravascular hemolysis." The manual suggests instead "a bicarbonate solution to neutralise phosphoric acid, which will then allow removal of visible white phosphorus. Particles often can be located by their emission of smoke when air strikes them, or by their phosphorescence in the dark. In dark surroundings, fragments are seen as luminescent spots. Promptly debride the burn if the patient's condition will permit removal of bits of WP (white phosphorus) that might be absorbed later and possibly produce systemic poisoning. DO NOT apply oily-based ointments until it is certain that all WP has been removed. Following complete removal of the particles, treat the lesions as thermal burns." As white phosphorus readily mixes with oils, any oily substances or ointments are not recommended until the area is thoroughly cleaned and all white phosphorus removed. In the workplace, people can be exposed to phosphorus by inhalation, ingestion, skin contact, and eye contact. The Occupational Safety and Health Administration (OSHA) has set the phosphorus exposure limit (Permissible exposure limit) in the workplace at 0.1 mg/m3 over an 8-hour workday. The National Institute for Occupational Safety and Health (NIOSH) has set a Recommended exposure limit (REL) of 0.1 mg/m3 over an 8-hour workday. 
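Both of the workplace limits just mentioned are stated for an 8-hour workday, i.e. they apply to the time-weighted average (TWA) concentration over a shift rather than to momentary peaks. As a purely illustrative sketch (the exposure values below are hypothetical, not taken from the source), a worker exposed to 0.2 mg/m3 for 2 hours and 0.05 mg/m3 for the remaining 6 hours would have

\[ \text{TWA} = \frac{(0.2)(2) + (0.05)(6)}{8}\ \text{mg/m}^3 \approx 0.09\ \text{mg/m}^3, \]

which falls just below the 0.1 mg/m3 limit.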
At levels of 5 mg/m3, phosphorus is immediately dangerous to life and health. US DEA List I status Phosphorus can reduce elemental iodine to hydroiodic acid, which is a reagent effective for reducing ephedrine or pseudoephedrine to methamphetamine. For this reason, red and white phosphorus were designated by the United States Drug Enforcement Administration as List I precursor chemicals under 21 CFR 1310.02 effective on November 17, 2001. In the United States, handlers of red or white phosphorus are subject to stringent regulatory controls.
Physical sciences
Chemical elements_2
null
23319
https://en.wikipedia.org/wiki/Palladium
Palladium
Palladium is a chemical element; it has symbol Pd and atomic number 46. It is a rare and lustrous silvery-white metal discovered in 1802 by the English chemist William Hyde Wollaston. He named it after the asteroid Pallas (formally 2 Pallas), which was itself named after the epithet of the Greek goddess Athena, acquired by her when she slew Pallas. Palladium, platinum, rhodium, ruthenium, iridium and osmium form a group of elements referred to as the platinum group metals (PGMs). They have similar chemical properties, but palladium has the lowest melting point and is the least dense of them. More than half the supply of palladium and its congener platinum is used in catalytic converters, which convert as much as 90% of the harmful gases in automobile exhaust (hydrocarbons, carbon monoxide, and nitrogen dioxide) into nontoxic substances (nitrogen, carbon dioxide and water vapor). Palladium is also used in electronics, dentistry, medicine, hydrogen purification, chemical applications, groundwater treatment, and jewelry. Palladium is a key component of fuel cells, in which hydrogen and oxygen react to produce electricity, heat, and water. Ore deposits of palladium and other PGMs are rare. The most extensive deposits have been found in the norite belt of the Bushveld Igneous Complex covering the Transvaal Basin in South Africa; the Stillwater Complex in Montana, United States; the Sudbury Basin and Thunder Bay District of Ontario, Canada; and the Norilsk Complex in Russia. Recycling is also a source, mostly from scrapped catalytic converters. The numerous applications and limited supply sources result in considerable investment interest. Characteristics Palladium belongs to group 10 in the periodic table, but the configuration in the outermost electrons is in accordance with Hund's rule. Electrons that by the Madelung rule would be expected to occupy the 5s instead fill the 4d orbitals, as it is more energetically favorable to have a completely filled 4d10 shell instead of the 5s2 4d8 configuration. This 5s0 configuration, unique in period 5, makes palladium the heaviest element having only one incomplete electron shell, with all shells above it empty. Palladium has the appearance of a soft silver-white metal that resembles platinum. It is the least dense and has the lowest melting point of the platinum group metals. It is soft and ductile when annealed and is greatly increased in strength and hardness when cold-worked. Palladium dissolves slowly in concentrated nitric acid, in hot, concentrated sulfuric acid, and when finely ground, in hydrochloric acid. It dissolves readily at room temperature in aqua regia. Palladium does not react with oxygen at standard temperature (and thus does not tarnish in air). Palladium heated to 800 °C will produce a layer of palladium(II) oxide (PdO). It may slowly develop a slight brownish coloration over time, likely due to the formation of a surface layer of its monoxide. Palladium films with defects produced by alpha particle bombardment at low temperature exhibit superconductivity having Tc = 3.2 K. Isotopes Naturally occurring palladium is composed of seven isotopes, six of which are stable. The most stable radioisotopes are 107Pd with a half-life of 6.5 million years (found in nature), 103Pd with 17 days, and 100Pd with 3.63 days. Eighteen other radioisotopes have been characterized with atomic weights ranging from 90.94948(64) u (91Pd) to 122.93426(64) u (123Pd). 
These have half-lives of less than thirty minutes, except 101Pd (half-life: 8.47 hours), 109Pd (half-life: 13.7 hours), and 112Pd (half-life: 21 hours). For isotopes with atomic mass unit values less than that of the most abundant stable isotope, 106Pd, the primary decay mode is electron capture with the primary decay product being rhodium. The primary mode of decay for those isotopes of Pd with atomic mass greater than 106 is beta decay with the primary product of this decay being silver. Radiogenic 107Ag is a decay product of 107Pd and was first discovered in 1978 in the Santa Clara meteorite of 1976. The discoverers suggest that the coalescence and differentiation of iron-cored small planets may have occurred 10 million years after a nucleosynthetic event. 107Pd versus Ag correlations observed in bodies, which have been melted since accretion of the Solar System, must reflect the presence of short-lived nuclides in the early Solar System. is also produced as a fission product in spontaneous or induced fission of . As it is not very mobile in the environment and has a relatively low decay energy, is usually considered to be among the less concerning of the long-lived fission products. Compounds Palladium compounds exist primarily in the 0 and +2 oxidation state. Other less common states are also recognized. Generally the compounds of palladium are more similar to those of platinum than those of any other element. Palladium(II) Palladium(II) chloride is the principal starting material for other palladium compounds. It arises by the reaction of palladium with chlorine. It is used to prepare heterogeneous palladium catalysts such as palladium on barium sulfate, palladium on carbon, and palladium chloride on carbon. Solutions of PdCl2 in nitric acid react with acetic acid to give palladium(II) acetate, also a versatile reagent. PdCl2 reacts with ligands (L) to give square planar complexes of the type PdCl2L2. One example of such complexes is the benzonitrile derivative PdCl2(PhCN)2. The complex bis(triphenylphosphine)palladium(II) dichloride is a useful catalyst. Palladium(0) Palladium forms a range of zerovalent complexes with the formula PdL4, PdL3 and PdL2. For example, reduction of a mixture of PdCl2(PPh3)2 and PPh3 gives tetrakis(triphenylphosphine)palladium(0): Another major palladium(0) complex, tris(dibenzylideneacetone)dipalladium(0) (Pd2(dba)3), is prepared by reducing sodium tetrachloropalladate in the presence of dibenzylideneacetone. Palladium(0), as well as palladium(II), are catalysts in coupling reactions, as has been recognized by the 2010 Nobel Prize in Chemistry to Richard F. Heck, Ei-ichi Negishi, and Akira Suzuki. Such reactions are widely practiced for the synthesis of fine chemicals. Prominent coupling reactions include the Heck, Suzuki, Sonogashira coupling, Stille reactions, and the Kumada coupling. Palladium(II) acetate, tetrakis(triphenylphosphine)palladium(0) (Pd(PPh3)4), and tris(dibenzylideneacetone)dipalladium(0) (Pd2(dba)3) serve either as catalysts or precatalysts. Other oxidation states Although Pd(IV) compounds are comparatively rare, one example is sodium hexachloropalladate(IV), Na2[PdCl6]. A few compounds of palladium(III) are also known. Palladium(VI) was claimed in 2002, but subsequently disproven. Mixed valence palladium complexes exist, e.g. Pd4(CO)4(OAc)4Pd(acac)2 forms an infinite Pd chain structure, with alternatively interconnected Pd4(CO)4(OAc)4 and Pd(acac)2 units. 
When alloyed with a more electropositive element, palladium can acquire a negative charge. Such compounds are known as palladides, such as gallium palladide. Palladides with the stoichiometry RPd3 exist where R is scandium, yttrium, or any of the lanthanides. Occurrence As overall mine production of palladium reached 210,000 kilograms in 2022, Russia was the top producer with 88,000 kilograms, followed by South Africa, Canada, the U.S., and Zimbabwe. Russia's company Norilsk Nickel ranks first among the largest palladium producers globally, accounting for 39% of the world's production. Palladium can be found as a free metal alloyed with gold and other platinum-group metals in placer deposits of the Ural Mountains, Australia, Ethiopia, North and South America. For the production of palladium, these deposits play only a minor role. The most important commercial sources are nickel-copper deposits found in the Sudbury Basin, Ontario, and the Norilsk–Talnakh deposits in Siberia. The other large deposit is the Merensky Reef platinum group metals deposit within the Bushveld Igneous Complex South Africa. The Stillwater igneous complex of Montana and the Roby zone ore body of the Lac des Îles igneous complex of Ontario are the two other sources of palladium in Canada and the United States. Palladium is found in the rare minerals cooperite and polarite. Many more Pd minerals are known, but all of them are very rare. Palladium is also produced in nuclear fission reactors and can be extracted from spent nuclear fuel (see synthesis of precious metals), though this source for palladium is not used. None of the existing nuclear reprocessing facilities are equipped to extract palladium from the high-level radioactive waste. A complication for the recovery of palladium in spent fuel is the presence of , a slightly radioactive long-lived fission product. Depending on end use, the radioactivity contributed by the might make the recovered palladium unusable without a costly step of isotope separation. Applications The largest use of palladium today is in catalytic converters. Palladium is also used in jewelry, dentistry, watch making, blood sugar test strips, aircraft spark plugs, surgical instruments, and electrical contacts. Palladium is also used to make some professional transverse (concert or classical) flutes. As a commodity, palladium bullion has ISO currency codes of XPD and 964. Palladium is one of only four metals to have such codes, the others being gold, silver and platinum. Because it adsorbs hydrogen, palladium was a key component of the controversial cold fusion experiments of the late 1980s. Catalysis When it is finely divided, as with palladium on carbon, palladium forms a versatile catalyst; it speeds heterogeneous catalytic processes like hydrogenation, dehydrogenation, and petroleum cracking. Palladium is also essential to the Lindlar catalyst, also called Lindlar's Palladium. A large number of carbon–carbon bonding reactions in organic chemistry are facilitated by palladium compound catalysts. For example: Heck reaction Suzuki coupling Tsuji-Trost reactions Wacker process Negishi reaction Stille coupling Sonogashira coupling When dispersed on conductive materials, palladium is an excellent electrocatalyst for oxidation of primary alcohols in alkaline media. Palladium is also a versatile metal for homogeneous catalysis, used in combination with a broad variety of ligands for highly selective chemical transformations. 
In 2010 the Nobel Prize in Chemistry was awarded "for palladium-catalyzed cross couplings in organic synthesis" to Richard F. Heck, Ei-ichi Negishi and Akira Suzuki. A 2008 study showed that palladium is an effective catalyst for carbon–fluorine bonds. Palladium catalysis is primarily employed in organic chemistry and industrial applications, although its use is growing as a tool for synthetic biology; in 2017, effective in vivo catalytic activity of palladium nanoparticles was demonstrated in mammals to treat disease. Electronics The primary application of palladium in electronics is in multi-layer ceramic capacitors in which palladium (and palladium-silver alloy) is used for electrodes. Palladium (sometimes alloyed with nickel) is or can be used for component and connector plating in consumer electronics and in soldering materials. The electronic sector consumed of palladium in 2006, according to a Johnson Matthey report. Technology Hydrogen easily diffuses through heated palladium, and membrane reactors with Pd membranes are used in the production of high purity hydrogen. Palladium is used in palladium-hydrogen electrodes in electrochemical studies. Palladium(II) chloride readily catalyzes carbon monoxide gas to carbon dioxide and is useful in carbon monoxide detectors. Hydrogen storage Palladium readily adsorbs hydrogen at room temperatures, forming palladium hydride PdHx with x less than 1. While this property is common to many transition metals, palladium has a uniquely high absorption capacity and does not lose its ductility until x approaches 1. This property has been investigated in designing an efficient and safe hydrogen fuel storage medium, though palladium itself is currently prohibitively expensive for this purpose. The content of hydrogen in palladium can be linked to magnetic susceptibility, which decreases with the increase of hydrogen and becomes zero for PdH0.62. At any higher ratio, the solid solution becomes diamagnetic. Palladium is used for purification of hydrogen on a laboratory but not industrial scale. Dentistry Palladium is used in small amounts (about 0.5%) in some alloys of dental amalgam to decrease corrosion and increase the metallic lustre of the final restoration. Jewelry Palladium has been used as a precious metal in jewelry since 1939 as an alternative to platinum in the alloys called "white gold", where the naturally white color of palladium does not require rhodium plating. Palladium, being much less dense than platinum, is similar to gold in that it can be beaten into leaf as thin as 100 nm ( in). Unlike platinum, palladium may discolor at temperatures above due to oxidation, making it more brittle and thus less suitable for use in jewelry; to prevent this, palladium intended for jewelry is heated under controlled conditions. Prior to 2004, the principal use of palladium in jewelry was the manufacture of white gold. Palladium is one of the three most popular alloying metals in white gold (nickel and silver can also be used). Palladium-gold is more expensive than nickel-gold, but seldom causes allergic reactions (though certain cross-allergies with nickel may occur). When platinum became a strategic resource during World War II, many jewelry bands were made out of palladium. Palladium was little used in jewelry because of the technical difficulty of casting. With the casting problem resolved the use of palladium in jewelry increased, originally because platinum increased in price while the price of palladium decreased. 
In early 2004, when gold and platinum prices rose steeply, China began fabricating volumes of palladium jewelry, consuming 37 tonnes in 2005. Subsequent changes in the relative price of platinum lowered demand for palladium to 17.4 tonnes in 2009. Demand for palladium as a catalyst has increased the price of palladium to about 50% higher than that of platinum in January 2019. In January 2010, hallmarks for palladium were introduced by assay offices in the United Kingdom, and hallmarking became mandatory for all jewelry advertising pure or alloyed palladium. Articles can be marked as 500, 950, or 999 parts of palladium per thousand of the alloy. Fountain pen nibs made from gold are sometimes plated with palladium when a silver (rather than gold) appearance is desired. Sheaffer has used palladium plating for decades, either as an accent on otherwise gold nibs or covering the gold completely. Palladium is also used by the luxury brand Hermès as one of the metals plating the hardware on their handbags, most famous of which being Birkin. Photography In the platinotype printing process, photographers make fine-art black-and-white prints using platinum or palladium salts. Often used with platinum, palladium provides an alternative to silver. Effects on health Toxicity Palladium is a metal with low toxicity as conventionally measured (e.g. LD50). Recent research on the mechanism of palladium toxicity suggests high toxicity if measured on a longer timeframe and at the cellular level in the liver and kidney. Mitochondria appear to have a key role in palladium toxicity via mitochondrial membrane potential collapse and depletion of the cellular glutathione (GSH) level. Until that recent work, it had been thought that palladium was poorly absorbed by the human body when ingested. Plants such as the water hyacinth are killed by low levels of palladium salts, but most other plants tolerate it, although tests show that, at levels above 0.0003%, growth is affected. High doses of palladium could be poisonous; tests on rodents suggest it may be carcinogenic, though until the recent research cited above, no clear evidence indicated that the element harms humans. Precautions Like other platinum-group metals, bulk Pd is quite inert. Although contact dermatitis has been reported, data on the effects are limited. It has been shown that people with an allergic reaction to palladium also react to nickel, making it advisable to avoid the use of dental alloys containing palladium on those so allergic. Some palladium is emitted with the exhaust gases of cars with catalytic converters. Between 4 and 108 ng/km of palladium particulate is released by such cars, while the total uptake from food is estimated to be less than 2 μg per person a day. The second possible source of palladium is dental restoration, from which the uptake of palladium is estimated to be less than 15 μg per person per day. People working with palladium or its compounds might have a considerably greater uptake. For soluble compounds such as palladium chloride, 99% is eliminated from the body within three days. The median lethal dose (LD50) of soluble palladium compounds in mice is 200 mg/kg for oral and 5 mg/kg for intravenous administration. History William Hyde Wollaston noted the discovery of a new noble metal in July 1802 in his lab book and named it palladium in August of the same year. He named the element after the asteroid 2 Pallas, which had been discovered two months earlier (and which was previously considered a planet). 
Wollaston purified a quantity of the material and offered it, without naming the discoverer, in a small shop in Soho in April 1803. After harsh criticism from Richard Chenevix, who claimed that palladium was an alloy of platinum and mercury, Wollaston anonymously offered a reward of £20 for 20 grains of synthetic palladium alloy. Chenevix received the Copley Medal in 1803 after he published his experiments on palladium. Wollaston published the discovery of rhodium in 1804 and mentions some of his work on palladium. He disclosed that he was the discoverer of palladium in a publication in 1805. Wollaston found palladium in crude platinum ore from South America by dissolving the ore in aqua regia, neutralizing the solution with sodium hydroxide, and precipitating platinum as ammonium chloroplatinate with ammonium chloride. He added mercuric cyanide to form the compound palladium(II) cyanide, which was heated to extract palladium metal. Palladium chloride was at one time prescribed as a tuberculosis treatment at the rate of 0.065 g per day (approximately one milligram per kilogram of body weight). This treatment had many negative side-effects, and was later replaced by more effective drugs. Most palladium is used for catalytic converters in the automobile industry. Catalytic converters are targets for thieves because they contain palladium and other rare metals. In the run up to year 2000, the Russian supply of palladium to the global market was repeatedly delayed and disrupted; for political reasons, the export quota was not granted on time. The ensuing market panic drove the price to an all-time high of in January 2001. Around that time, the Ford Motor Company, fearing that automobile production would be disrupted by a palladium shortage, stockpiled the metal. When prices fell in early 2001, Ford lost nearly US$1 billion. World demand for palladium increased from 100 tons in 1990 to nearly 300 tons in 2000. The global production of palladium from mines was 222 tonnes in 2006 according to the United States Geological Survey. Many were concerned about a steady supply of palladium in the wake of Russia's annexation of Crimea, partly as sanctions could hamper Russian palladium exports; any restrictions on Russian palladium exports could have exacerbated what was already expected to be a large palladium deficit in 2014. Those concerns pushed palladium prices to their highest level since 2001. In September 2014 they soared above the $900 per ounce mark. In 2016 however palladium cost around $614 per ounce as Russia managed to maintain stable supplies. In January 2019 palladium futures climbed past $1,344 per ounce for the first time on record, mainly due to the strong demand from the automotive industry. Palladium reached on 6 January 2020, passing $2,000 per troy ounce the first time. The price rose above $3,000 per troy ounce in May 2021 and March 2022. Palladium as investment Global palladium sales were in 2017, of which 86% was used in the manufacturing of automotive catalytic converters, followed by industrial, jewelry, and investment usages. More than 75% of global platinum and 40% of palladium are mined in South Africa. Russia's mining company, Norilsk Nickel, produces another 44% of palladium, with US and Canada-based mines producing most of the rest. The price for palladium reached an all-time high of $2,981.40 per ounce on May 3, 2021, driven mainly on speculation of the catalytic converter demand from the automobile industry. Palladium is traded in the spot market with the code "XPD". 
When settled in USD, the code is "XPDUSD". A later surplus of the metal was caused by the Russian government selling stockpiles from the Soviet era, at a rate of about a year. The amount and status of this stockpile are a state secret. During the Russo-Ukrainian War in March 2022, prices for palladium increased 13%, since the first of March. Russia is the primary supplier to Europe and the country supplies 37% of the global production. Palladium producers Norilsk Nickel Sibanye-Stillwater Anglo American Platinum Impala Platinum Northam Platinum Exchange-traded products WisdomTree Physical Palladium () is backed by allocated palladium bullion and was the world's first palladium ETF. It is listed on the London Stock Exchange as PHPD, Xetra Trading System, Euronext and Milan. ETFS Physical Palladium Shares () is an ETF traded on the New York Stock Exchange. Bullion coins and bars A traditional way of investing in palladium is buying bullion coins and bars made of palladium. Available palladium coins include the Canadian Palladium Maple Leaf, the Chinese Panda, and the American Palladium Eagle. The liquidity of direct palladium bullion investment is poorer than that of gold and silver because there is low circulation of palladium coins.
Physical sciences
Chemical elements_2
null
23321
https://en.wikipedia.org/wiki/Promethium
Promethium
Promethium is a chemical element with symbol Pm and atomic number 61. All of its isotopes are radioactive; it is extremely rare, with only about 500–600 grams naturally occurring in the Earth's crust at any given time. Promethium is one of the only two radioactive elements that are both preceded and followed in the periodic table by elements with stable forms, the other being technetium. Chemically, promethium is a lanthanide. Promethium shows only one stable oxidation state of +3. In 1902 Bohuslav Brauner suggested that there was a then-unknown element with properties intermediate between those of the known elements neodymium (60) and samarium (62); this was confirmed in 1914 by Henry Moseley, who, having measured the atomic numbers of all the elements then known, found that the element with atomic number 61 was missing. In 1926, two groups (one Italian and one American) claimed to have isolated a sample of element 61; both "discoveries" were soon proven to be false. In 1938, during a nuclear experiment conducted at Ohio State University, a few radioactive nuclides were produced that certainly were not radioisotopes of neodymium or samarium, but there was a lack of chemical proof that element 61 was produced, and the discovery was not much recognized. Promethium was first produced and characterized at Oak Ridge National Laboratory in 1945 by the separation and analysis of the fission products of uranium fuel irradiated in a graphite reactor. The discoverers proposed the name "prometheum" (the spelling was subsequently changed), derived from Prometheus, the Titan in Greek mythology who stole fire from Mount Olympus and brought it down to humans, to symbolize "both the daring and the possible misuse of mankind's intellect". A sample of the metal was made only in 1963. The two sources of natural promethium are rare alpha decays of natural europium-151 (producing promethium-147) and spontaneous fission of uranium (various isotopes). Promethium-145 is the most stable promethium isotope, but the only isotope with practical applications is promethium-147, chemical compounds of which are used in luminous paint, atomic batteries and thickness-measurement devices. Because natural promethium is exceedingly scarce, it is typically synthesized by bombarding uranium-235 (enriched uranium) with thermal neutrons to produce promethium-147 as a fission product. Properties Physical properties A promethium atom has 61 electrons, arranged in the configuration [Xe] 4f5 6s2. The seven 4f and 6s electrons are valence electrons. In forming compounds, the atom loses its two outermost electrons and one 4f-electron, which belongs to an open subshell. The element's atomic radius is the second largest among all the lanthanides but is only slightly greater than those of the neighboring elements. It is the most notable exception to the general trend of the contraction of lanthanide atoms with the increase of their atomic numbers (lanthanide contraction). Many properties of promethium rely on its position among lanthanides and are intermediate between those of neodymium and samarium. For example, the melting point, the first three ionization energies, and the hydration energy are greater than those of neodymium and lower than those of samarium; similarly, the estimate for the boiling point, ionic (Pm3+) radius, and standard heat of formation of monatomic gas are greater than those of samarium and less than those of neodymium. Promethium has a double hexagonal close packed (dhcp) structure and a hardness of 63 kg/mm2. 
This low-temperature alpha form converts into a beta, body-centered cubic (bcc) phase upon heating to 890 °C. Chemical properties and compounds Promethium belongs to the cerium group of lanthanides and is chemically very similar to the neighboring elements. Because of its instability, chemical studies of promethium are incomplete. Even though a few compounds have been synthesized, they are not fully studied; in general, they tend to be pink or red in color. In May 2024, a promethium coordination complex with neutral PyDGA ligands was characterized in aqueous solution. Treatment of acidic solutions containing Pm3+ ions with ammonia results in a gelatinous light-brown sediment of the hydroxide, Pm(OH)3, which is insoluble in water. When dissolved in hydrochloric acid, a water-soluble yellow salt, PmCl3, is produced; similarly, when dissolved in nitric acid, a nitrate results, Pm(NO3)3. The latter is also well-soluble; when dried, it forms pink crystals, similar to Nd(NO3)3. The electron configuration for Pm3+ is [Xe] 4f4, and the color of the ion is pink. The ground state term symbol is 5I4. The sulfate is slightly soluble, like the other cerium group sulfates. Cell parameters have been calculated for its octahydrate; they lead to the conclusion that the density of Pm2(SO4)3·8H2O is 2.86 g/cm3. The oxalate, Pm2(C2O4)3, has the lowest solubility of all lanthanide oxalates. Unlike the nitrate, the oxide is similar to the corresponding samarium salt and not the neodymium salt. As-synthesized, e.g. by heating the oxalate, it is a white or lavender-colored powder with disordered structure. This powder crystallizes in a cubic lattice upon heating to 600 °C. Further annealing at 800 °C and then at 1750 °C irreversibly transforms it to monoclinic and hexagonal phases, respectively, and the last two phases can be interconverted by adjusting the annealing time and temperature. Promethium forms only one stable oxidation state, +3, in the form of Pm3+ ions; this is in line with other lanthanides. Promethium can also form the +2 oxidation state. Thermodynamic properties of Pm2+ suggest that the dihalides are stable, similar to NdCl2 and SmCl2. Isotopes Promethium is the only lanthanide and one of only two elements among the first 82 with no stable or long-lived (primordial) isotopes. This is a result of a rarely occurring effect of the liquid drop model and stabilities of neighbor element isotopes; it is also the least stable element of the first 84. The primary decay products are neodymium and samarium isotopes (promethium-146 decays to both, the lighter isotopes generally to neodymium via positron decay and electron capture, and the heavier isotopes to samarium via beta decay). Promethium nuclear isomers may decay to other promethium isotopes and one isotope (145Pm) has a very rare alpha decay mode to stable praseodymium-141. The most stable isotope of the element is promethium-145, which has a half-life of 17.7 years, decaying via electron capture. Because it has 84 neutrons (two more than 82, which is a magic number which corresponds to a stable neutron configuration), it may emit an alpha particle (which has 2 neutrons) to form praseodymium-141 with 82 neutrons. Thus it is the only promethium isotope with an experimentally observed alpha decay. Its partial half-life for alpha decay is about 6.3×10⁹ years, and the relative probability for a 145Pm nucleus to decay in this way is about 2.8×10⁻⁷ %.
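As a consistency check on the alpha-decay figures just given (a sketch using only the numbers quoted above), the partial half-life of a decay branch is the total half-life divided by the branching fraction b; with b ≈ 2.8×10⁻⁷ % = 2.8×10⁻⁹,

\[ T_{\alpha} \approx \frac{T_{1/2}}{b} = \frac{17.7\ \text{yr}}{2.8 \times 10^{-9}} \approx 6 \times 10^{9}\ \text{yr}, \]

in agreement with the partial alpha-decay half-life of about 6.3×10⁹ years stated above.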
Several other promethium isotopes, such as 144Pm, 146Pm, and 147Pm, also have a positive energy release for alpha decay; their alpha decays are predicted to occur but have not been observed. In total, 41 isotopes of promethium are known, ranging from 126Pm to 166Pm. The element also has 18 nuclear isomers, with mass numbers of 133 to 142, 144, 148, 149, 152, and 154 (some mass numbers have more than one isomer). The most stable of them is promethium-148m, with a half-life of 43.1 days; this is longer than the half-lives of the ground states of all promethium isotopes, except for promethium-143 to 147. In fact, promethium-148m has a longer half-life than its ground state, promethium-148. Occurrence In 1934, Willard Libby reported that he had found weak beta activity in pure neodymium, which was attributed to a half-life over 10^12 years. Almost 20 years later, it was claimed that the element occurs in natural neodymium in equilibrium in quantities below 10^-20 grams of promethium per gram of neodymium. However, these observations were disproved by newer investigations, because for all seven naturally occurring neodymium isotopes, any single beta decays (which could produce promethium isotopes) are forbidden by energy conservation. In particular, careful measurements of atomic masses show that the mass difference between 150Nd and 150Pm is negative (−87 keV), which absolutely prevents the single beta decay of 150Nd to 150Pm. In 1965, Olavi Erämetsä separated out traces of 147Pm from a rare earth concentrate purified from apatite, resulting in an upper limit of 10^-21 for the abundance of promethium in nature; this may have been produced by the natural nuclear fission of uranium, or by cosmic ray spallation of 146Nd. Both isotopes of natural europium have larger mass excesses than the sums of those of their potential alpha daughters and that of an alpha particle; therefore, these isotopes, stable in practice, may alpha decay to promethium. Research at Laboratori Nazionali del Gran Sasso showed that europium-151 decays to promethium-147 with a half-life of about 5 × 10^18 years; later measurements gave the half-life as (4.62 ± 0.95(stat.) ± 0.68(syst.)) × 10^18 years. It has been shown that europium is "responsible" for about 12 grams of promethium in the Earth's crust. Alpha decays of europium-153 have not been found yet, and its theoretically calculated half-life is so high (due to the low decay energy) that this process will probably not be observed in the near future. Promethium can also be formed in nature as a product of spontaneous fission of uranium-238. Only trace amounts can be found in naturally occurring ores: a sample of pitchblende has been found to contain promethium at a concentration of four parts per quintillion (4 × 10^-18) by mass. Uranium is thus "responsible" for 560 g of promethium in Earth's crust. Promethium has also been identified in the spectrum of the star HR 465 in Andromeda; it has also been found in HD 101065 (Przybylski's star) and HD 965. Because of the short half-life of promethium isotopes, they should be formed near the surface of those stars. History Searches for element 61 In 1902, Czech chemist Bohuslav Brauner found that the differences in properties between neodymium and samarium were the largest between any two consecutive lanthanides in the sequence then known; he therefore suggested there was an element with intermediate properties between them.
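The crustal inventories quoted above can be read as a secular-equilibrium balance: the standing mass of promethium-147 equals its production rate from the parent times its own mean lifetime. The sketch below illustrates this for the europium-151 pathway. The promethium-147 half-life (about 2.62 years) and the crustal europium inventory used here are assumed round figures for illustration only; they are not values taken from the text.

    import math

    # Assumed illustrative inputs (not from the text):
    crust_mass_g   = 2.8e25      # mass of Earth's crust, ~2.8e22 kg
    eu_abundance   = 2.0e-6      # ~2 ppm europium in the crust
    eu151_fraction = 0.478       # isotopic fraction of Eu-151

    t_half_eu151_alpha = 5e18    # alpha-decay half-life of Eu-151, years (from the text)
    t_half_pm147       = 2.62    # half-life of Pm-147, years (assumed well-known value)

    n_eu151 = crust_mass_g * eu_abundance * eu151_fraction / 151 * 6.022e23   # atoms
    production_rate = n_eu151 * math.log(2) / t_half_eu151_alpha              # atoms/year
    mean_life_pm147 = t_half_pm147 / math.log(2)                              # years

    pm147_atoms = production_rate * mean_life_pm147       # secular equilibrium
    pm147_grams = pm147_atoms * 147 / 6.022e23
    print(f"steady-state Pm-147 from Eu-151 decay ~ {pm147_grams:.0f} g")
    # comes out on the order of 10 g, consistent with the ~12 g quoted above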
This prediction was supported in 1914 by Henry Moseley who, having discovered that atomic number was an experimentally measurable property of elements, found that a few atomic numbers had no known corresponding elements: the gaps were 43, 61, 72, 75, 85, and 87. With the knowledge of a gap in the periodic table several groups started to search for the predicted element among other rare earths in the natural environment. The first claim of a discovery was published by Luigi Rolla and Lorenzo Fernandes of Florence, Italy. After separating a mixture of a few rare earth elements nitrate concentrate from the Brazilian mineral monazite by fractionated crystallization, they yielded a solution containing mostly samarium. This solution gave x-ray spectra attributed to samarium and element 61. In honor of their city, they named element 61 "florentium". The results were published in 1926, but the scientists claimed that the experiments were done in 1924. Also in 1926, a group of scientists from the University of Illinois at Urbana–Champaign, Smith Hopkins and Len Yntema published the discovery of element 61. They named it "illinium", after the university. Both of these reported discoveries were shown to be erroneous because the spectrum line that "corresponded" to element 61 was identical to that of didymium; the lines thought to belong to element 61 turned out to belong to a few impurities (barium, chromium, and platinum). In 1934, Josef Mattauch finally formulated the isobar rule. One of the indirect consequences of this rule was that element 61 was unable to form stable isotopes. From 1938, a nuclear experiment was conducted by H. B. Law et al. at the Ohio State University. Nuclides were produced in 1941 which were not radioisotopes of neodymium or samarium, and the name "cyclonium" was proposed, but there was a lack of chemical proof that element 61 was produced and the discovery was not largely recognized. Discovery and synthesis of promethium metal Promethium was first produced and characterized at Oak Ridge National Laboratory (Clinton Laboratories at that time) in 1945 by Jacob A. Marinsky, Lawrence E. Glendenin and Charles D. Coryell by separation and analysis of the fission products of uranium fuel irradiated in the graphite reactor; however, being too busy with military-related research during World War II, they did not announce their discovery until 1947. The original proposed name was "clintonium", after the laboratory where the work was conducted; however, the name "prometheum" was suggested by Grace Mary Coryell, the wife of one of the discoverers. It is derived from Prometheus, the Titan in Greek mythology who stole fire from Mount Olympus and brought it down to humans and symbolizes "both the daring and the possible misuse of the mankind intellect". The spelling was then changed to "promethium", as this was in accordance with most other metals. In 1963, promethium(III) fluoride was used to make promethium metal. Provisionally purified from impurities of samarium, neodymium, and americium, it was put into a tantalum crucible which was located in another tantalum crucible; the outer crucible contained lithium metal (10 times excess compared to promethium). After creating a vacuum, the chemicals were mixed to produce promethium metal: PmF3 + 3 Li → Pm + 3 LiF The promethium sample produced was used to measure a few of the metal's properties, such as its melting point. 
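As a rough illustration of the metallothermic reduction described above, the sketch below computes how much lithium the stated tenfold molar excess corresponds to for a small charge of PmF3. Molar masses are rounded; this is back-of-the-envelope stoichiometry, not a description of the actual 1963 procedure.

    # PmF3 + 3 Li -> Pm + 3 LiF, run with a ~10x molar excess of lithium (as stated above)
    M_PM, M_F, M_LI = 145.0, 19.0, 6.94   # g/mol, rounded

    def lithium_charge_g(pmf3_grams, excess_factor=10):
        m_pmf3 = M_PM + 3 * M_F                      # ~202 g/mol
        mol_pmf3 = pmf3_grams / m_pmf3
        mol_li_stoich = 3 * mol_pmf3                 # stoichiometric requirement
        return mol_li_stoich * excess_factor * M_LI  # grams of Li actually loaded

    print(f"Li needed for 1 g PmF3 at 10x excess: {lithium_charge_g(1.0):.2f} g")
    # roughly 1 g of lithium per gram of PmF3 at the stated excess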
In 1963, ion-exchange methods were used at ORNL to prepare about ten grams of promethium from nuclear reactor fuel processing wastes. Promethium can be either recovered from the byproducts of uranium fission or produced by bombarding 146Nd with neutrons, turning it into 147Nd, which decays into 147Pm through beta decay with a half-life of 11 days. Production The production methods for different isotopes vary, and only those for promethium-147 are given because it is the only isotope with industrial applications. Promethium-147 is produced in large quantities (compared to other isotopes) by bombarding uranium-235 with thermal neutrons. The output is relatively high, at 2.6% of the total product. Another way to produce promethium-147 is via neodymium-147, which decays to promethium-147 with a short half-life. Neodymium-147 can be obtained either by bombarding enriched neodymium-146 with thermal neutrons or by bombarding a uranium carbide target with energetic protons in a particle accelerator. Another method is to bombard uranium-238 with fast neutrons to cause fast fission, which, among multiple reaction products, creates promethium-147. As early as the 1960s, Oak Ridge National Laboratory could produce 650 grams of promethium per year and was the world's only large-volume synthesis facility. Gram-scale production of promethium has been discontinued in the U.S. in the early 1980s, but will possibly be resumed after 2010 at the High Flux Isotope Reactor. In 2010, Russia was the only country producing promethium-147 on a relatively large scale. Applications Only promethium-147 has uses outside laboratories. It is obtained as the oxide or chloride, in milligram quantities. This isotope has a relatively long half-life, does not emit gamma rays, and its radiation has a relatively small penetration depth in matter. Some signal lights use a luminous paint containing a phosphor that absorbs the beta radiation emitted by promethium-147 and emits light. This isotope does not cause aging of the phosphor, as alpha emitters do, and therefore the light emission is stable for a few years. Originally, radium-226 was used for the purpose, but it was later replaced by promethium-147 and tritium (hydrogen-3). Promethium may be favored over tritium for nuclear safety. In atomic batteries, the beta particles emitted by promethium-147 are converted into electric current by sandwiching a small promethium source between two semiconductor plates. These batteries have a useful lifetime of about five years. The first promethium-based battery was assembled in 1964 and generated "a few milliwatts of power from a volume of about 2 cubic inches, including shielding". Promethium is also used to measure the thickness of materials by measuring the amount of radiation from a promethium source that passes through the sample. It has possible future uses in portable X-ray sources, and as auxiliary heat or power sources for space probes and satellites (although the alpha emitter plutonium-238 has become standard for most space-exploration-related uses). Promethium-147 is also used, albeit in very small quantities (less than 330nCi), in some Philips CFL (Compact Fluorescent Lamp) glow switches in the PLC 22W/28W 15mm CFL range. Precautions The element has no biological role. Promethium-147 can emit gamma rays, which are dangerous for all lifeforms, during its beta decay. Interactions with tiny quantities of promethium-147 are not hazardous if certain precautions are observed. 
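The neodymium route to promethium-147 described in the production passage above is a simple two-step decay chain: 147Nd (half-life about 11 days, as stated) decays to 147Pm, which then decays away with its own, much longer half-life. A minimal Bateman-equation sketch of the build-up is given below; the Pm-147 half-life of about 2.62 years is an assumed, commonly quoted value rather than a figure from the text.

    import math

    T_ND147 = 11.0            # days (from the text)
    T_PM147 = 2.62 * 365.25   # days (assumed well-known value)

    l1 = math.log(2) / T_ND147
    l2 = math.log(2) / T_PM147

    def pm147_fraction(t_days):
        """Fraction of the initial Nd-147 atoms present as Pm-147 after t days (Bateman solution)."""
        return l1 / (l2 - l1) * (math.exp(-l1 * t_days) - math.exp(-l2 * t_days))

    for t in (11, 30, 60, 120):
        print(f"after {t:3d} days: {pm147_fraction(t):.2%} of the original Nd-147 is now Pm-147")
    # after a few months essentially all of the Nd-147 has converted to Pm-147,
    # which then decays slowly over the following years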
In general, gloves, footwear covers, safety glasses, and an outer layer of easily removed protective clothing should be used. It is not known which human organs are affected by interaction with promethium; a possible candidate is bone tissue. Sealed promethium-147 is not dangerous. However, if the packaging is damaged, promethium becomes a hazard to the environment and to humans. If radioactive contamination is found, the contaminated area should be washed with water and soap; even though promethium mainly affects the skin, the skin should not be abraded. If a promethium leak is found, the area should be identified as hazardous and evacuated, and emergency services must be contacted. No dangers from promethium, aside from its radioactivity, are known.
Physical sciences
Chemical elements_2
null
23322
https://en.wikipedia.org/wiki/Protactinium
Protactinium
Protactinium is a chemical element; it has symbol Pa and atomic number 91. It is a dense, radioactive, silvery-gray actinide metal which readily reacts with oxygen, water vapor, and inorganic acids. It forms various chemical compounds, in which protactinium is usually present in the oxidation state +5, but it can also assume +4 and even +3 or +2 states. Concentrations of protactinium in the Earth's crust are typically a few parts per trillion, but may reach up to a few parts per million in some uraninite ore deposits. Because of its scarcity, high radioactivity, and high toxicity, there are currently no uses for protactinium outside scientific research, and for this purpose, protactinium is mostly extracted from spent nuclear fuel. The element was first identified in 1913 by Kazimierz Fajans and Oswald Helmuth Göhring and named "brevium" because of the short half-life of the specific isotope studied, protactinium-234m. A more stable isotope of protactinium, 231Pa, was discovered in 1917/18 by Lise Meitner in collaboration with Otto Hahn, and they named the element protactinium. In 1949, the IUPAC chose the name "protactinium" and confirmed Hahn and Meitner as its discoverers. The new name meant "(nuclear) precursor of actinium," suggesting that actinium is a product of radioactive decay of protactinium. John Arnold Cranston (working with Frederick Soddy and Ada Hitchins) is also credited with discovering the most stable isotope in 1915, but he delayed his announcement due to being called for service in the First World War. The longest-lived and most abundant (nearly 100%) naturally occurring isotope of protactinium, protactinium-231, has a half-life of 32,760 years and is a decay product of uranium-235. Much smaller trace amounts of the short-lived protactinium-234 and its nuclear isomer protactinium-234m occur in the decay chain of uranium-238. Protactinium-233 occurs as a result of the decay of thorium-233 as part of the chain of events necessary to produce uranium-233 by neutron irradiation of thorium-232. It is an undesired intermediate product in thorium-based nuclear reactors, and is therefore removed from the active zone of the reactor during the breeding process. Ocean science utilizes the element to understand the ancient ocean's geography. Analysis of the relative concentrations of various uranium, thorium, and protactinium isotopes in water and minerals is used in radiometric dating of sediments up to 175,000 years old, and in modeling of various geological processes. History In 1871, Dmitri Mendeleev predicted the existence of an element between thorium and uranium. The actinide series was unknown at the time, so Mendeleev positioned uranium below tungsten in group VI, and thorium below zirconium in group IV, leaving the space below tantalum in group V empty. Until the general acceptance of the actinide concept in the late 1940s, periodic tables were published with this structure. For a long time, chemists searched for eka-tantalum as an element with similar chemical properties to tantalum, making a discovery of protactinium nearly impossible. Tantalum's heavier analogue was later found to be the transuranic element dubnium – although dubnium is more chemically similar to protactinium, not tantalum. In 1900, William Crookes isolated protactinium as an intensely radioactive material from uranium; however, he could not characterize it as a new chemical element and thus named it uranium X (UX). 
Crookes dissolved uranium nitrate in ether, and the residual aqueous phase contained most of the and . His method was used into the 1950s to isolate and from uranium compounds. Protactinium was first identified in 1913, when Kasimir Fajans and Oswald Helmuth Göhring encountered the isotope 234mPa during their studies of the decay chains of uranium-238: → → → . They named the new element "brevium" (from the Latin word brevis, meaning brief or short) because of the short half-life of 1.16 minutes for (uranium X2). In 1917–18, two groups of scientists, Lise Meitner in collaboration with Otto Hahn of Germany and Frederick Soddy and John Cranston of Great Britain, independently discovered another isotope, 231Pa, having a much longer half-life of 32,760 years. Meitner changed the name "brevium" to protactinium as the new element was part of the decay chain of uranium-235 as the parent of actinium (from the prôtos, meaning "first, before"). The IUPAC confirmed this naming in 1949. The discovery of protactinium completed one of the last gaps in early versions of the periodic table, and brought fame to the involved scientists. Aristid von Grosse produced 2 milligrams of Pa2O5 in 1927, and in 1934 first isolated elemental protactinium from 0.1 milligrams of Pa2O5. He used two different procedures: in the first, protactinium oxide was irradiated by 35 keV electrons in vacuum. In the other, called the van Arkel–de Boer process, the oxide was chemically converted to a halide (chloride, bromide or iodide) and then reduced in a vacuum with an electrically heated metallic filament: 2 PaI5 → 2 Pa + 5 I2 In 1961, the United Kingdom Atomic Energy Authority (UKAEA) produced 127 grams of 99.9% pure protactinium-231 by processing 60 tonnes of waste material in a 12-stage process, at a cost of about US$500,000. For many years, this was the world's only significant supply of protactinium, which was provided to various laboratories for scientific studies. The Oak Ridge National Laboratory in the US provided protactinium at a cost of about US$280/gram. Isotopes Twenty-nine radioisotopes of protactinium have been discovered. The most stable are 231Pa with a half-life of 32,760 years, 233Pa with a half-life of 27 days, and 230Pa with a half-life of 17.4 days. All other isotopes have half-lives shorter than 1.6 days, and the majority of these have half-lives less than 1.8 seconds. Protactinium also has two nuclear isomers, 217mPa (half-life 1.2 milliseconds) and 234mPa (half-life 1.16 minutes). The primary decay mode for the most stable isotope 231Pa and lighter (211Pa to 231Pa) is alpha decay, producing isotopes of actinium. The primary mode for the heavier isotopes (232Pa to 239Pa) is beta decay, producing isotopes of uranium. Nuclear fission The longest-lived and most abundant isotope, 231Pa, can fission from fast neutrons exceeding ~1 MeV. 233Pa, the other isotope of protactinium produced in nuclear reactors, also has a fission threshold of 1 MeV. Occurrence Protactinium is one of the rarest and most expensive naturally occurring elements. It is found in the form of two isotopes – 231Pa and 234Pa, with the isotope 234Pa occurring in two different energy states. Nearly all natural protactinium is protactinium-231. It is an alpha emitter and is formed by the decay of uranium-235, whereas the beta radiating protactinium-234 is produced as a result of uranium-238 decay. Nearly all uranium-238 (99.8%) decays first to the shorter-lived 234mPa isomer. 
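Because 231Pa is constantly regenerated by its parent, the amount coexisting with uranium can be estimated from secular equilibrium, where the activities of parent and daughter are equal. The sketch below makes that estimate for natural uranium; the 235U half-life and isotopic fraction used are assumed standard values (about 7.04 × 10^8 years and 0.72%), not figures taken from the text.

    # At secular equilibrium: lambda_parent * N_parent = lambda_daughter * N_daughter,
    # so N_Pa231 / N_U235 = T_Pa231 / T_U235 (ratio of half-lives).
    T_PA231 = 3.276e4      # years (from the text)
    T_U235  = 7.04e8       # years (assumed standard value)
    U235_FRACTION = 0.0072 # isotopic fraction of U-235 in natural uranium (assumed)

    atom_ratio = T_PA231 / T_U235                  # Pa-231 atoms per U-235 atom
    mass_ratio = atom_ratio * 231 / 235            # grams Pa-231 per gram U-235

    grams_pa_per_tonne_u = mass_ratio * U235_FRACTION * 1e6
    print(f"~{grams_pa_per_tonne_u:.2f} g of Pa-231 per tonne of natural uranium")
    # a few tenths of a gram per tonne, i.e. a fraction of a ppm, consistent with
    # the sub-ppm concentrations reported for uranium ores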
Protactinium occurs in uraninite (pitchblende) at concentrations of about 0.3-3 parts 231Pa per million parts (ppm) of ore. Whereas the usual content is closer to 0.3 ppm (e.g. in Jáchymov, Czech Republic), some ores from the Democratic Republic of the Congo have about 3 ppm. Protactinium is homogeneously dispersed in most natural materials and in water, but at much lower concentrations on the order of one part per trillion, corresponding to a radioactivity of 0.1 picocuries (pCi)/g. There is about 500 times more protactinium in sandy soil particles than in water, even when compared to water present in the same sample of soil. Much higher ratios of 2,000 and above are measured in loam soils and clays, such as bentonite. In nuclear reactors Two major protactinium isotopes, 231Pa and 233Pa, are produced from thorium in nuclear reactors; both are undesirable and are usually removed, thereby adding complexity to the reactor design and operation. In particular, 232Th, via (n, 2n) reactions, produces 231Th, which quickly decays to 231Pa (half-life 25.5 hours). The last isotope, while not a transuranic waste, has a long half-life of 32,760 years, and is a major contributor to the long-term radiotoxicity of spent nuclear fuel. Protactinium-233 is formed upon neutron capture by 232Th. It either further decays to uranium-233, or captures another neutron and converts into the non-fissile uranium-234. 233Pa has a relatively long half-life of 27 days and high cross section for neutron capture (the so-called "neutron poison"). Thus, instead of rapidly decaying to the useful 233U, a significant fraction of 233Pa converts to non-fissile isotopes and consumes neutrons, degrading reactor efficiency. To limit the loss of neutrons, 233Pa is extracted from the active zone of thorium molten salt reactors during their operation, so that it can only decay into 233U. Extraction of 233Pa is achieved using columns of molten bismuth with lithium dissolved in it. In short, lithium selectively reduces protactinium salts to protactinium metal, which is then extracted from the molten-salt cycle, while the molten bismuth is merely a carrier, selected due to its low melting point of 271 °C, low vapor pressure, good solubility for lithium and actinides, and immiscibility with molten halides. Preparation Before the advent of nuclear reactors, protactinium was separated for scientific experiments from uranium ores. Since reactors have become more common, it is mostly produced as an intermediate product of nuclear fission in thorium fuel cycle reactors as an intermediate in the production of the fissile uranium-233: ^{232}_{90}Th + ^{1}_{0}n -> ^{233}_{90}Th ->[\beta^-][22.3\ \ce{min}] ^{233}_{91}Pa ->[\beta^-][26.967\ \ce{d}] ^{233}_{92}U. The isotope 231Pa can be prepared by irradiating thorium-230 with slow neutrons, converting it to the beta-decaying thorium-231; or, by irradiating thorium-232 with fast neutrons, generating thorium-231 and 2 neutrons. Protactinium metal can be prepared by reduction of its fluoride with calcium, lithium, or barium at a temperature of 1300–1400 °C. Properties Protactinium is an actinide positioned in the periodic table to the left of uranium and to the right of thorium, and many of its physical properties are intermediate between its neighboring actinides. Protactinium is denser and more rigid than thorium, but is lighter than uranium; its melting point is lower than that of thorium, but higher than that of uranium. 
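The "neutron poison" behaviour of 233Pa described above is a competition between two rates: neutron capture, which is proportional to the flux, and radioactive decay. A rough sketch of that competition follows; the thermal capture cross-section (~40 barns) and the flux values are assumed illustrative numbers, not figures from the text.

    import math

    T_HALF_PA233_S = 27 * 86400          # 27-day half-life (from the text), in seconds
    SIGMA_CAPTURE  = 40e-24              # assumed thermal capture cross-section, cm^2 (~40 barns)

    def captured_fraction(flux_n_per_cm2_s):
        """Fraction of Pa-233 atoms that capture a neutron instead of decaying to U-233."""
        decay_rate   = math.log(2) / T_HALF_PA233_S      # 1/s
        capture_rate = SIGMA_CAPTURE * flux_n_per_cm2_s  # 1/s
        return capture_rate / (capture_rate + decay_rate)

    for flux in (1e13, 1e14, 1e15):
        print(f"flux {flux:.0e} n/cm2/s -> {captured_fraction(flux):.1%} of Pa-233 lost to capture")
    # the loss grows with flux, which is why Pa-233 is removed from the core and
    # allowed to decay to U-233 outside the neutron field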
The thermal expansion, electrical, and thermal conductivities of these three elements are comparable and are typical of post-transition metals. The estimated shear modulus of protactinium is similar to that of titanium. Protactinium is a metal with silvery-gray luster that is preserved for some time in air. Protactinium easily reacts with oxygen, water vapor, and acids, but not with alkalis. At room temperature, protactinium crystallizes in the body-centered tetragonal structure, which can be regarded as distorted body-centered cubic lattice; this structure does not change upon compression up to 53 GPa. The structure changes to face-centered cubic (fcc) upon cooling from high temperature, at about 1200 °C. The thermal expansion coefficient of the tetragonal phase between room temperature and 700 °C is 9.9/°C. Protactinium is paramagnetic and no magnetic transitions are known for it at any temperature. It becomes superconductive at temperatures below 1.4 K. Protactinium tetrachloride is paramagnetic at room temperature, but becomes ferromagnetic when cooled to 182 K. Protactinium exists in two major oxidation states: +4 and +5, both in solids and solutions; and the +3 and +2 states, which have been observed in some solids. As the electron configuration of the neutral atom is [Rn]5f26d17s2, the +5 oxidation state corresponds to the low-energy (and thus favored) 5f0 configuration. Both +4 and +5 states easily form hydroxides in water, with the predominant ions being Pa(OH)3+, , , and Pa(OH)4, all of which are colorless. Other known protactinium ions include , , PaF3+, , , , and . Chemical compounds Here, a, b, and c are lattice constants in picometers, No is the space group number, and Z is the number of formula units per unit cell; fcc stands for the face-centered cubic symmetry. Density was not measured directly but calculated from the lattice parameters. Oxides and oxygen-containing salts Protactinium oxides are known for the metal oxidation states +2, +4, and +5. The most stable is the white pentoxide Pa2O5, which can be produced by igniting protactinium(V) hydroxide in air at a temperature of 500 °C. Its crystal structure is cubic, and the chemical composition is often non-stoichiometric, described as PaO2.25. Another phase of this oxide with orthorhombic symmetry has also been reported. The black dioxide PaO2 is obtained from the pentoxide by reducing it at 1550 °C with hydrogen. It is not readily soluble in either dilute or concentrated nitric, hydrochloric, or sulfuric acid, but easily dissolves in hydrofluoric acid. The dioxide can be converted back to pentoxide by heating in oxygen-containing atmosphere to 1100 °C. The monoxide PaO has only been observed as a thin coating on protactinium metal, but not in an isolated bulk form. Protactinium forms mixed binary oxides with various metals. With alkali metals A, the crystals have a chemical formula APaO3 and perovskite structure; A3PaO4 and distorted rock-salt structure; or A7PaO6, where oxygen atoms form a hexagonal close-packed lattice. In all of these materials, the protactinium ions are octahedrally coordinated. The pentoxide Pa2O5 combines with rare-earth metal oxides R2O3 to form various nonstoichiometric mixed-oxides, also of perovskite structure. Protactinium oxides are basic; they easily convert to hydroxides and can form various salts, such as sulfates, phosphates, nitrates, etc. The nitrate is usually white but can be brown due to radiolytic decomposition. 
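The note above mentions that densities were calculated from lattice parameters rather than measured directly. The conversion is rho = Z * M / (N_A * V), with V the unit-cell volume. The sketch below applies it to the body-centered tetragonal protactinium metal cell; the lattice constants used (a of roughly 392 pm, c of roughly 324 pm, Z = 2) are approximate literature-style values included only to show the arithmetic, not values from the text.

    AVOGADRO = 6.022e23

    def density_from_cell(a_pm, b_pm, c_pm, z, molar_mass):
        """Crystallographic density in g/cm3 from an orthogonal or tetragonal cell given in picometers."""
        volume_cm3 = (a_pm * 1e-10) * (b_pm * 1e-10) * (c_pm * 1e-10)  # 1 pm = 1e-10 cm
        return z * molar_mass / (AVOGADRO * volume_cm3)

    # Body-centered tetragonal Pa metal, approximate cell (assumed): a = b ~ 392 pm, c ~ 324 pm, Z = 2
    print(f"Pa metal density ~ {density_from_cell(392, 392, 324, 2, 231.04):.1f} g/cm3")  # ~15 g/cm3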
Heating the nitrate in air at 400 °C converts it to the white protactinium pentoxide. The polytrioxophosphate Pa(PO3)4 can be produced by reacting the difluoride sulfate PaF2SO4 with phosphoric acid (H3PO4) under an inert atmosphere. Heating the product to about 900 °C eliminates the reaction by-products, which include hydrofluoric acid, sulfur trioxide, and phosphoric anhydride. Heating it to higher temperatures in an inert atmosphere decomposes Pa(PO3)4 into the diphosphate PaP2O7, which is analogous to diphosphates of other actinides. In the diphosphate, the PO3 groups form pyramids of C2v symmetry. Heating PaP2O7 in air to 1400 °C decomposes it into the pentoxides of phosphorus and protactinium. Halides Protactinium(V) fluoride forms white crystals where protactinium ions are arranged in pentagonal bipyramids and coordinated by 7 other ions. The coordination is the same in protactinium(V) chloride, but the color is yellow. The coordination changes to octahedral in the brown protactinium(V) bromide, but is unknown for protactinium(V) iodide. The protactinium coordination in all its tetrahalides is 8, but the arrangement is square antiprismatic in protactinium(IV) fluoride and dodecahedral in the chloride and bromide. Brown-colored protactinium(III) iodide has been reported, where protactinium ions are 8-coordinated in a bicapped trigonal prismatic arrangement. Protactinium(V) fluoride and protactinium(V) chloride have a polymeric structure of monoclinic symmetry. There, within one polymeric chain, all halide atoms lie in one graphite-like plane and form planar pentagons around the protactinium ions. The 7-coordination of protactinium originates from the five halide atoms and two bonds to protactinium atoms belonging to the nearby chains. These compounds easily hydrolyze in water. The pentachloride melts at 300 °C and sublimates at even lower temperatures. Protactinium(V) fluoride can be prepared by reacting protactinium oxide with either bromine pentafluoride or bromine trifluoride at about 600 °C, and protactinium(IV) fluoride is obtained from the oxide and a mixture of hydrogen and hydrogen fluoride at 600 °C; a large excess of hydrogen is required to remove atmospheric oxygen leaks into the reaction. Protactinium(V) chloride is prepared by reacting protactinium oxide with carbon tetrachloride at temperatures of 200–300 °C. The by-products (such as PaOCl3) are removed by fractional sublimation. Reduction of protactinium(V) chloride with hydrogen at about 800 °C yields protactinium(IV) chloride – a yellow-green solid that sublimes in vacuum at 400 °C. It can also be obtained directly from protactinium dioxide by treating it with carbon tetrachloride at 400 °C. Protactinium bromides are produced by the action of aluminium bromide, hydrogen bromide, carbon tetrabromide, or a mixture of hydrogen bromide and thionyl bromide on protactinium oxide. They can alternatively be produced by reacting protactinium pentachloride with hydrogen bromide or thionyl bromide. Protactinium(V) bromide has two similar monoclinic forms: one is obtained by sublimation at 400–410 °C, and another by sublimation at a slightly lower temperature of 390–400 °C. Protactinium iodides can be produced by reacting protactinium metal with elemental iodine at 600 °C, and by reacting Pa2O5 with AlO3 at 600 °C. Protactinium(III) iodide can be obtained by heating protactinium(V) iodide in vacuum. As with oxides, protactinium forms mixed halides with alkali metals. 
The most remarkable among these is Na3PaF8, where the protactinium ion is symmetrically surrounded by 8 F− ions, forming a nearly perfect cube. More complex protactinium fluorides are also known, such as Pa2F9 and ternary fluorides of the types MPaF6 (M = Li, Na, K, Rb, Cs or NH4), M2PaF7 (M = K, Rb, Cs or NH4), and M3PaF8 (M = Li, Na, Rb, Cs), all of which are white crystalline solids. The MPaF6 formula can be represented as a combination of MF and PaF5. These compounds can be obtained by evaporating a hydrofluoric acid solution containing both complexes. For the small alkali cations like Na, the crystal structure is tetragonal, whereas it becomes orthorhombic for larger cations K+, Rb+, Cs+ or NH4+. A similar variation was observed for the M2PaF7 fluorides: namely, the crystal symmetry was dependent on the cation and differed for Cs2PaF7 and M2PaF7 (M = K, Rb or NH4). Other inorganic compounds Oxyhalides and oxysulfides of protactinium are known. PaOBr3 has a monoclinic structure composed of double-chain units where protactinium has coordination 7 and is arranged into pentagonal bipyramids. The chains are interconnected through oxygen and bromine atoms, and each oxygen atom is related to three protactinium atoms. PaOS is a light-yellow, non-volatile solid with a cubic crystal lattice isostructural to that of other actinide oxysulfides. It is obtained by reacting protactinium(V) chloride with a mixture of hydrogen sulfide and carbon disulfide at 900 °C. In hydrides and nitrides, protactinium has a low oxidation state of about +3. The hydride is obtained by direct action of hydrogen on the metal at 250 °C, and the nitride is a product of ammonia and protactinium tetrachloride or pentachloride. This bright yellow solid is thermally stable to 800 °C in vacuum. Protactinium carbide (PaC) is formed by the reduction of protactinium tetrafluoride with barium in a carbon crucible at a temperature of about 1400 °C. Protactinium forms borohydrides, which include Pa(BH4)4. It has an unusual polymeric structure with helical chains, where the protactinium atom has coordination number of 12 and is surrounded by six BH4− ions. Organometallic compounds Protactinium(IV) forms a tetrahedral complex tetrakis(cyclopentadienyl)protactinium(IV) (or Pa(C5H5)4) with four cyclopentadienyl rings, which can be synthesized by reacting protactinium(IV) chloride with molten Be(C5H5)2. One ring can be substituted with a halide atom. Another organometallic complex is the golden-yellow bis(π-cyclooctatetraene) protactinium, or protactinocene (Pa(C8H8)2), which is analogous in structure to uranocene. There, the metal atom is sandwiched between two cyclooctatetraene ligands. Similar to uranocene, it can be prepared by reacting protactinium tetrachloride with dipotassium cyclooctatetraenide (K2C8H8) in tetrahydrofuran. Applications Although protactinium is situated in the periodic table between uranium and thorium, both of which have numerous applications, there are currently no uses for protactinium outside scientific research owing to its scarcity, high radioactivity, and high toxicity. Protactinium-231 arises naturally from the decay of natural uranium-235, and artificially in nuclear reactors by the reaction 232Th + n → 231Th + 2n and the subsequent beta decay of 231Th. It was once thought to be able to support a nuclear chain reaction, which could in principle be used to build nuclear weapons; the physicist once estimated the associated critical mass as . 
However, the possibility of criticality of 231Pa has since been ruled out. With the advent of highly sensitive mass spectrometers, an application of 231Pa as a tracer in geology and paleoceanography has become possible. In this application, the ratio of protactinium-231 to thorium-230 is used for radiometric dating of sediments which are up to 175,000 years old, and in modeling of the formation of minerals. In particular, its evaluation in oceanic sediments helped to reconstruct the movements of North Atlantic water bodies during the last melting of Ice Age glaciers. Some of the protactinium-related dating variations rely on analysis of the relative concentrations of several long-living members of the uranium decay chain – uranium, protactinium, and thorium, for example. These elements have 6, 5, and 4 valence electrons, thus favoring +6, +5, and +4 oxidation states respectively, and display different physical and chemical properties. Thorium and protactinium, but not uranium compounds, are poorly soluble in aqueous solutions and precipitate into sediments; the precipitation rate is faster for thorium than for protactinium. The concentration analysis for both protactinium-231 (half-life 32,760 years) and thorium-230 (half-life 75,380 years) improves measurement accuracy compared to when only one isotope is measured; this double-isotope method is also weakly sensitive to inhomogeneities in the spatial distribution of the isotopes and to variations in their precipitation rate. Precautions Protactinium is both toxic and highly radioactive; thus, it is handled exclusively in a sealed glove box. Its major isotope 231Pa has a specific activity of per gram and primarily emits alpha-particles with an energy of 5 MeV, which can be stopped by a thin layer of any material. However, it slowly decays, with a half-life of 32,760 years, into 227Ac, which has a specific activity of per gram, emits both alpha and beta radiation, and has a much shorter half-life of 22 years. 227Ac, in turn, decays into lighter isotopes with even shorter half-lives and much greater specific activities (SA). As protactinium is present in small amounts in most natural products and materials, it is ingested with food or water and inhaled with air. Only about 0.05% of ingested protactinium is absorbed into the blood and the remainder is excreted. From the blood, about 40% of the protactinium deposits in the bones, about 15% goes to the liver, 2% to the kidneys, and the rest leaves the body. The biological half-life of protactinium is about 50 years in the bones, whereas its biological half-life in other organs has a fast and slow component. For example, 70% of the protactinium in the liver has a biological half-life of 10 days, and the remaining 30% for 60 days. The corresponding values for kidneys are 20% (10 days) and 80% (60 days). In each affected organ, protactinium promotes cancer via its radioactivity. The maximum safe dose of Pa in the human body is , which corresponds to 0.5 micrograms of 231Pa. The maximum allowed concentrations of 231Pa in the air in Germany is .
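The sediment-dating application described in the passage above relies on the fact that excess 231Pa and excess 230Th, once deposited, decay with different half-lives, so their activity ratio falls off with a known time dependence. The snippet below inverts that relationship in the simplest closed-system form; the assumed initial (production) activity ratio of about 0.093 is the commonly quoted seawater value and is not taken from the text.

    import math

    T_PA231 = 32_760   # years (from the text)
    T_TH230 = 75_380   # years (from the text)
    R0 = 0.093         # assumed initial 231Pa/230Th activity ratio produced from seawater uranium

    l_pa = math.log(2) / T_PA231
    l_th = math.log(2) / T_TH230

    def ratio_after(t_years):
        """Excess 231Pa/230Th activity ratio expected after t years of closed-system decay."""
        return R0 * math.exp(-(l_pa - l_th) * t_years)

    def age_from_ratio(measured_ratio):
        """Invert the decay relation to estimate a sediment age in years."""
        return math.log(R0 / measured_ratio) / (l_pa - l_th)

    print(f"ratio after 100,000 years: {ratio_after(1e5):.4f}")
    print(f"age for a measured ratio of 0.03: {age_from_ratio(0.03):,.0f} years")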
Physical sciences
Chemical elements_2
null
23324
https://en.wikipedia.org/wiki/Platinum
Platinum
Platinum is a chemical element; it has symbol Pt and atomic number 78. It is a dense, malleable, ductile, highly unreactive, precious, silverish-white transition metal. Its name originates from Spanish , a diminutive of "silver". Platinum is a member of the platinum group of elements and group 10 of the periodic table of elements. It has six naturally occurring isotopes. It is one of the rarer elements in Earth's crust, with an average abundance of approximately 5 μg/kg. It occurs in some nickel and copper ores along with some native deposits, mostly in South Africa, which accounts for ~80% of the world production. Because of its scarcity in Earth's crust, only a few hundred tonnes are produced annually, and given its important uses, it is highly valuable and is a major precious metal commodity. Platinum is one of the least reactive metals. It has remarkable resistance to corrosion, even at high temperatures, and is therefore considered a noble metal. Consequently, platinum is often found chemically uncombined as native platinum. Because it occurs naturally in the alluvial sands of various rivers, it was first used by pre-Columbian South American natives to produce artifacts. It was referenced in European writings as early as the 16th century, but it was not until Antonio de Ulloa published a report on a new metal of Colombian origin in 1748 that it began to be investigated by scientists. Platinum is used in catalytic converters, laboratory equipment, electrical contacts and electrodes, platinum resistance thermometers, dentistry equipment, and jewelry. Platinum is used in the glass industry to manipulate molten glass, which does not "wet" platinum. As a heavy metal, it leads to health problems upon exposure to its salts; but due to its corrosion resistance, metallic platinum has not been linked to adverse health effects. Compounds containing platinum, such as cisplatin, oxaliplatin and carboplatin, are applied in chemotherapy against certain types of cancer. Characteristics Physical Pure platinum is a lustrous, ductile, and malleable, silver-white metal. Platinum is more ductile than gold, silver or copper, thus being the most ductile of pure metals, but it is less malleable than gold. Its physical characteristics and chemical stability make it useful for industrial applications. Its resistance to wear and tarnish is well suited to use in fine jewellery. Chemical Platinum has excellent resistance to corrosion. Bulk platinum does not oxidize in air at any temperature, but it forms a thin surface film of that can be easily removed by heating to about 400 °C. The most common oxidation states of platinum are +2 and +4. The +1 and +3 oxidation states are less common, and are often stabilized by metal bonding in bimetallic (or polymetallic) species. Tetracoordinate platinum(II) compounds tend to adopt 16-electron square planar geometries. Although elemental platinum is generally unreactive, it is attacked by chlorine, bromine, iodine, and sulfur. It reacts vigorously with fluorine at to form platinum tetrafluoride. Platinum is insoluble in hydrochloric and nitric acid, but dissolves in hot aqua regia (a mixture of nitric and hydrochloric acids), to form aqueous chloroplatinic acid, : As a soft acid, the ion has a great affinity for sulfide and sulfur ligands. Numerous DMSO complexes have been reported and care is taken in the choosing of reaction solvents. 
In 2007, the German scientist Gerhard Ertl won the Nobel Prize in Chemistry for determining the detailed molecular mechanisms of the catalytic oxidation of carbon monoxide over platinum (catalytic converter). Isotopes Platinum has six naturally occurring isotopes: 190Pt, 192Pt, 194Pt, 195Pt, 196Pt, and 198Pt. The most abundant of these is 195Pt, comprising 33.83% of all platinum. It is the only stable isotope with a non-zero spin. The spin of 1/2 and other favourable magnetic properties of the 195Pt nucleus are utilised in 195Pt NMR spectroscopy. Due to its spin and large abundance, 195Pt satellite peaks are also often observed in 31P and 1H NMR spectroscopy (e.g., for Pt-phosphine and Pt-alkyl complexes). 190Pt is the least abundant at only 0.01%. Of the naturally occurring isotopes, only 190Pt is unstable; it decays with a half-life of 6.5 × 10^11 years, causing an activity of 15 Bq/kg of natural platinum. Other naturally occurring isotopes could in principle undergo alpha decay, but their decay has never been observed, so they are considered stable. Platinum also has 38 synthetic isotopes ranging in atomic mass from 165 to 208, making the total number of known isotopes 44. The least stable of these are 165Pt and 166Pt, with half-lives of 260 μs, whereas the most stable is 193Pt, with a half-life of 50 years. Most platinum isotopes decay by some combination of beta decay and alpha decay. 188Pt, 191Pt, and 193Pt decay primarily by electron capture. 190Pt and 198Pt are predicted to have energetically favorable double beta decay paths. Occurrence Platinum is an extremely rare metal, occurring at a concentration of only 0.005 ppm in Earth's crust. Sometimes mistaken for silver, platinum is often found chemically uncombined as native platinum, or alloyed mostly with the other platinum-group metals and iron. Most often, native platinum is found in secondary deposits, chiefly alluvial deposits. The alluvial deposits used by pre-Columbian people in the Chocó Department, Colombia are still a source for platinum-group metals. Another large alluvial deposit is in the Ural Mountains, Russia, and it is still mined. In nickel and copper deposits, platinum-group metals occur as sulfides, tellurides, antimonides (PdSb), and arsenides, and as end alloys with nickel or copper. Platinum arsenide, sperrylite (PtAs2), is a major source of platinum associated with nickel ores in the Sudbury Basin deposit in Ontario, Canada. At Platinum, Alaska, platinum was mined between 1927 and 1975; the mine ceased operations in 1990. The rare sulfide mineral cooperite, (Pt,Pd,Ni)S, contains platinum along with palladium and nickel. Cooperite occurs in the Merensky Reef within the Bushveld complex, Gauteng, South Africa. In 1865, chromites were identified in the Bushveld region of South Africa, followed by the discovery of platinum in 1906. In 1924, the geologist Hans Merensky discovered a large supply of platinum in the Bushveld Igneous Complex in South Africa. The specific layer he found, named the Merensky Reef, contains around 75% of the world's known platinum. The large copper–nickel deposits near Norilsk in Russia, and the Sudbury Basin, Canada, are the two other large deposits. In the Sudbury Basin, the huge quantities of nickel ore processed make up for the fact that platinum is present at only 0.5 ppm in the ore. Smaller reserves can be found in the United States, for example in the Absaroka Range in Montana. In 2010, South Africa was the top producer of platinum, with an almost 77% share, followed by Russia at 13%. Large platinum deposits are present in the state of Tamil Nadu, India.
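The quoted activity of natural platinum follows directly from the 190Pt abundance and half-life. The sketch below reproduces the order of magnitude; the isotopic abundance used (0.012%) is the commonly tabulated value, slightly higher than the rounded 0.01% quoted above.

    import math

    AVOGADRO = 6.022e23
    YEAR_S = 3.156e7

    def natural_activity_bq_per_kg(abundance, half_life_years, molar_mass):
        """Activity contributed by one long-lived isotope to 1 kg of the natural element."""
        atoms_per_kg = abundance * 1000 / molar_mass * AVOGADRO
        decay_const = math.log(2) / (half_life_years * YEAR_S)
        return atoms_per_kg * decay_const

    # 190Pt: assumed abundance 0.012%, half-life 6.5e11 years; molar mass of natural Pt ~195.08 g/mol
    print(f"{natural_activity_bq_per_kg(0.00012, 6.5e11, 195.08):.0f} Bq per kg of natural platinum")
    # ~13 Bq/kg, in line with the ~15 Bq/kg figure quoted above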
Platinum exists in higher abundances on the Moon and in meteorites. Correspondingly, platinum is found in slightly higher abundances at sites of bolide impact on Earth that are associated with resulting post-impact volcanism, and can be mined economically; the Sudbury Basin is one such example. Compounds Halides Hexachloroplatinic acid mentioned above is probably the most important platinum compound, as it serves as the precursor for many other platinum compounds. By itself, it has various applications in photography, zinc etchings, indelible ink, plating, mirrors, porcelain coloring, and as a catalyst. Treatment of hexachloroplatinic acid with an ammonium salt, such as ammonium chloride, gives ammonium hexachloroplatinate, which is relatively insoluble in ammonium solutions. Heating this ammonium salt in the presence of hydrogen reduces it to elemental platinum. Potassium hexachloroplatinate is similarly insoluble, and hexachloroplatinic acid has been used in the determination of potassium ions by gravimetry. When hexachloroplatinic acid is heated, it decomposes through platinum(IV) chloride and platinum(II) chloride to elemental platinum, although the reactions do not occur stepwise: All three reactions are reversible. Platinum(II) and platinum(IV) bromides are known as well. Platinum hexafluoride is a strong oxidizer capable of oxidizing oxygen. Oxides Platinum(IV) oxide, , also known as "Adams' catalyst", is a black powder that is soluble in potassium hydroxide (KOH) solutions and concentrated acids. and the less common both decompose upon heating. Platinum(II,IV) oxide, , is formed in the following reaction: Other compounds Unlike palladium acetate, platinum(II) acetate is not commercially available. Where a base is desired, the halides have been used in conjunction with sodium acetate. The use of platinum(II) acetylacetonate has also been reported. Several barium platinides have been synthesized in which platinum exhibits negative oxidation states ranging from −1 to −2. These include BaPt, , and . Caesium platinide, , a dark-red transparent crystalline compound has been shown to contain Pt anions. Platinum also exhibits negative oxidation states at surfaces reduced electrochemically. The negative oxidation states exhibited by platinum are unusual for metallic elements, and they are attributed to the relativistic stabilization of the 6s orbitals. It is predicted that even the cation in which platinum exists in the +10 oxidation state may be achievable. Zeise's salt, containing an ethylene ligand, was one of the first organometallic compounds discovered. is a commercially available olefin complex, which contains easily displaceable cod ligands ("cod" being an abbreviation of 1,5-cyclooctadiene). The cod complex and the halides are convenient starting points to platinum chemistry. Cisplatin, or is the first of a series of square planar platinum(II)-containing chemotherapy drugs. Others include carboplatin and oxaliplatin. These compounds are capable of crosslinking DNA, and kill cells by similar pathways to alkylating chemotherapeutic agents. (Side effects of cisplatin include nausea and vomiting, hair loss, tinnitus, hearing loss, and nephrotoxicity.) Organoplatinum compounds such as the above antitumour agents, as well as soluble inorganic platinum complexes, are routinely characterised using nuclear magnetic resonance spectroscopy. History Early uses Archaeologists have discovered traces of platinum in the gold used in ancient Egyptian burials. 
For example, a small box from burial of Shepenupet II was found to be decorated with gold-platinum hieroglyphics. However, the extent of early Egyptians' knowledge of the metal is unclear. It is quite possible they did not recognize there was platinum in their gold. The metal was used by Native Americans near modern-day Esmeraldas, Ecuador to produce artifacts of a white gold-platinum alloy. Archeologists usually associate the tradition of platinum-working in South America with the La Tolita Culture ( BCE – 200 CE), but precise dates and location are difficult, as most platinum artifacts from the area were bought secondhand through the antiquities trade rather than obtained by direct archeological excavation. To work the metal, they would combine gold and platinum powders by sintering. The resulting gold–platinum alloy would then be soft enough to shape with tools. The platinum used in such objects was not the pure element, but rather a naturally occurring mixture of the platinum group metals, with small amounts of palladium, rhodium, and iridium. European discovery The first European reference to platinum appears in 1557 in the writings of the Italian humanist Julius Caesar Scaliger as a description of an unknown noble metal found between Darién and Mexico, "which no fire nor any Spanish artifice has yet been able to liquefy". From their first encounters with platinum, the Spanish generally saw the metal as a kind of impurity in gold, and it was treated as such. It was often simply thrown away, and there was an official decree forbidding the adulteration of gold with platinum impurities. In 1735, Antonio de Ulloa and Jorge Juan y Santacilia saw Native Americans mining platinum while the Spaniards were travelling through Colombia and Peru for eight years. Ulloa and Juan found mines with the whitish metal nuggets and took them home to Spain. Antonio de Ulloa returned to Spain and established the first mineralogy lab in Spain and was the first to systematically study platinum, which was in 1748. His historical account of the expedition included a description of platinum as being neither separable nor calcinable. Ulloa also anticipated the discovery of platinum mines. After publishing the report in 1748, Ulloa did not continue to investigate the new metal. In 1758, he was sent to superintend mercury mining operations in Huancavelica. In 1741, Charles Wood, a British metallurgist, found various samples of Colombian platinum in Jamaica, which he sent to William Brownrigg for further investigation. In 1750, after studying the platinum sent to him by Wood, Brownrigg presented a detailed account of the metal to the Royal Society, stating that he had seen no mention of it in any previous accounts of known minerals. Brownrigg also made note of platinum's extremely high melting point and refractoriness toward borax. Other chemists across Europe soon began studying platinum, including Andreas Sigismund Marggraf, Torbern Bergman, Jöns Jakob Berzelius, William Lewis, and Pierre Macquer. In 1752, Henrik Scheffer published a detailed scientific description of the metal, which he referred to as "white gold", including an account of how he succeeded in fusing platinum ore with the aid of arsenic. Scheffer described platinum as being less pliable than gold, but with similar resistance to corrosion. Means of malleability Karl von Sickingen researched platinum extensively in 1772. 
He succeeded in making malleable platinum by alloying it with gold, dissolving the alloy in hot aqua regia, precipitating the platinum with ammonium chloride, igniting the ammonium chloroplatinate, and hammering the resulting finely divided platinum to make it cohere. Franz Karl Achard made the first platinum crucible in 1784. He worked with the platinum by fusing it with arsenic, then later volatilizing the arsenic. Because the other platinum-family members were not discovered yet (platinum was the first in the list), Scheffer and Sickingen made the false assumption that due to its hardness—which is slightly more than for pure iron—platinum would be a relatively non-pliable material, even brittle at times, when in fact its ductility and malleability are close to that of gold. Their assumptions could not be avoided because the platinum they experimented with was highly contaminated with minute amounts of platinum-family elements such as osmium and iridium, amongst others, which embrittled the platinum alloy. Alloying this impure platinum residue called "plyoxen" with gold was the only solution at the time to obtain a pliable compound, but nowadays, very pure platinum is available and extremely long wires can be drawn from pure platinum, very easily, due to its crystalline structure, which is similar to that of many soft metals. In 1786, Charles III of Spain provided a library and laboratory to Pierre-François Chabaneau to aid in his research of platinum. Chabaneau succeeded in removing various impurities from the ore, including gold, mercury, lead, copper, and iron. This led him to believe he was working with a single metal, but in truth the ore still contained the yet-undiscovered platinum-group metals. This led to inconsistent results in his experiments. At times, the platinum seemed malleable, but when it was alloyed with iridium, it would be much more brittle. Sometimes the metal was entirely incombustible, but when alloyed with osmium, it would volatilize. After several months, Chabaneau succeeded in producing 23 kilograms of pure, malleable platinum by hammering and compressing the sponge form while white-hot. Chabeneau realized the infusibility of platinum would lend value to objects made of it, and so started a business with Joaquín Cabezas producing platinum ingots and utensils. This started what is known as the "platinum age" in Spain. Production Platinum, along with the rest of the platinum-group metals, is obtained commercially as a by-product from nickel and copper mining and processing. During electrorefining of copper, noble metals such as silver, gold and the platinum-group metals as well as selenium and tellurium settle to the bottom of the cell as "anode mud", which forms the starting point for the extraction of the platinum-group metals. If pure platinum is found in placer deposits or other ores, it is isolated from them by various methods of subtracting impurities. Because platinum is significantly denser than many of its impurities, the lighter impurities can be removed by simply floating them away in a liquid. Platinum is paramagnetic, whereas nickel and iron are both ferromagnetic. These two impurities are thus removed by running an electromagnet over the mixture. Because platinum has a higher melting point than most other substances, many impurities can be burned or melted away without melting the platinum. Finally, platinum is resistant to hydrochloric and sulfuric acids, whereas other substances are readily attacked by them. 
Metal impurities can be removed by stirring the mixture in either of the two acids and recovering the remaining platinum. One suitable method for purification for the raw platinum, which contains platinum, gold, and the other platinum-group metals, is to process it with aqua regia, in which palladium, gold and platinum are dissolved, whereas osmium, iridium, ruthenium and rhodium stay unreacted. The gold is precipitated by the addition of iron(II) chloride and after filtering off the gold, the platinum is precipitated as ammonium chloroplatinate by the addition of ammonium chloride. Ammonium chloroplatinate can be converted to platinum by heating. Unprecipitated hexachloroplatinate(IV) may be reduced with elemental zinc, and a similar method is suitable for small scale recovery of platinum from laboratory residues. Mining and refining platinum has environmental impacts. Applications Of the 218 tonnes of platinum sold in 2014, 98 tonnes were used for vehicle emissions control devices (45%), 74.7 tonnes for jewelry (34%), 20.0 tonnes for chemical production and petroleum refining (9.2%), and 5.85 tonnes for electrical applications such as hard disk drives (2.7%). The remaining 28.9 tonnes went to various other minor applications, such as medicine and biomedicine, glassmaking equipment, investment, electrodes, anticancer drugs, oxygen sensors, spark plugs and turbine engines. Catalyst The most common use of platinum is as a catalyst in chemical reactions, often as platinum black. It has been employed as a catalyst since the early 19th century, when platinum powder was used to catalyze the ignition of hydrogen. Its most important application is in automobiles as a catalytic converter, which allows the complete combustion of low concentrations of unburned hydrocarbons from the exhaust into carbon dioxide and water vapor. Platinum is also used in the petroleum industry as a catalyst in a number of separate processes, but especially in catalytic reforming of straight-run naphthas into higher-octane gasoline that becomes rich in aromatic compounds. , also known as Adams' catalyst, is used as a hydrogenation catalyst, specifically for vegetable oils. Platinum also strongly catalyzes the decomposition of hydrogen peroxide into water and oxygen and it is used in fuel cells as a catalyst for the reduction of oxygen. Green energy transition As a fuel cell catalyst, platinum enables hydrogen and oxygen reactions to take place at an optimum rate. It is used in platinum-based proton exchange membrane (PEM) technologies required in green hydrogen production as well as fuel cell electric vehicle adoption (FCEV). Standard From 1889 to 1960, the meter was defined as the length of a platinum-iridium (90:10) alloy bar, known as the international prototype meter. The previous bar was made of platinum in 1799. Until May 2019, the kilogram was defined as the mass of the international prototype of the kilogram, a cylinder of the same platinum-iridium alloy made in 1879. The Standard Platinum Resistance Thermometer (SPRT) is one of the four types of thermometers used to define the International Temperature Scale of 1990 (ITS-90), the international calibration standard for temperature measurements. The resistance wire in the thermometer is made of pure platinum (NIST manufactured the wires from platinum bar stock with a chemical purity of 99.999% by weight). 
In addition to laboratory uses, Platinum Resistance Thermometry (PRT) also has many industrial applications, industrial standards include ASTM E1137 and IEC 60751. The standard hydrogen electrode also uses a platinized platinum electrode due to its corrosion resistance, and other attributes. As an investment Platinum is a precious metal commodity; its bullion has the ISO currency code of XPT. Coins, bars, and ingots are traded or collected. Platinum finds use in jewellery, usually as a 90–95% alloy, due to its inertness. It is used for this purpose for its prestige and inherent bullion value. Jewellery trade publications advise jewellers to present minute surface scratches (which they term patina) as a desirable feature in an attempt to enhance value of platinum products. In watchmaking, Vacheron Constantin, Patek Philippe, Rolex, Breitling, and other companies use platinum for producing their limited edition watch series. Watchmakers appreciate the unique properties of platinum, as it neither tarnishes nor wears out (the latter quality relative to gold). During periods of sustained economic stability and growth, the price of platinum tends to be as much as twice the price of gold, whereas during periods of economic uncertainty, the price of platinum tends to decrease due to reduced industrial demand, falling below the price of gold. Gold prices are more stable in slow economic times, as gold is considered a safe haven. Although gold is also used in industrial applications, especially in electronics due to its use as a conductor, its demand is not so driven by industrial uses. In the 18th century, platinum's rarity made King Louis XV of France declare it the only metal fit for a king. Other uses In the laboratory, platinum wire is used for electrodes; platinum pans and supports are used in thermogravimetric analysis because of the stringent requirements of chemical inertness upon heating to high temperatures (~1000 °C). Platinum is used as an alloying agent for various metal products, including fine wires, noncorrosive laboratory containers, medical instruments, dental prostheses, electrical contacts, and thermocouples. Platinum-cobalt, an alloy of roughly three parts platinum and one part cobalt, is used to make relatively strong permanent magnets. Platinum-based anodes are used in ships, pipelines, and steel piers. Platinum drugs are used to treat a wide variety of cancers, including testicular and ovarian carcinomas, melanoma, small-cell and non-small-cell lung cancer, myelomas and lymphomas. Symbol of prestige in marketing Platinum's rarity as a metal has caused advertisers to associate it with exclusivity and wealth. "Platinum" debit and credit cards have greater privileges than "gold" cards. "Platinum awards" are the second highest possible, ranking above "gold", "silver" and "bronze", but below diamond. For example, in the United States, a musical album that has sold more than 1 million copies will be credited as "platinum", whereas an album that has sold more than 10 million copies will be certified as "diamond". Some products, such as blenders and vehicles, with a silvery-white color are identified as "platinum". Platinum is considered a precious metal, although its use is not as common as the use of gold or silver. The frame of the Crown of Queen Elizabeth The Queen Mother, manufactured for her coronation as Consort of King George VI, is made of platinum. It was the first British crown to be made of this particular metal. 
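Industrial platinum resistance thermometers of the kind standardized in IEC 60751, mentioned in the standards passage above, use a simple polynomial resistance-temperature relation. The sketch below implements the above-0 °C branch for a nominal Pt100 element; the coefficients are the values usually quoted for IEC 60751 and are included here for illustration only.

    # IEC 60751 (Callendar-Van Dusen) relation for T >= 0 degC: R(T) = R0 * (1 + A*T + B*T^2)
    R0 = 100.0          # ohms at 0 degC for a Pt100 element
    A  = 3.9083e-3      # 1/degC
    B  = -5.775e-7      # 1/degC^2

    def pt100_resistance(temp_c):
        """Resistance of an ideal Pt100 sensor at temp_c >= 0 degC."""
        return R0 * (1 + A * temp_c + B * temp_c ** 2)

    for t in (0, 100, 400, 850):
        print(f"{t:4d} degC -> {pt100_resistance(t):7.2f} ohm")
    # 0 degC -> 100.00 ohm and 100 degC -> ~138.51 ohm, the familiar Pt100 calibration points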
Health problems According to the Centers for Disease Control and Prevention, short-term exposure to platinum salts may cause irritation of the eyes, nose, and throat, and long-term exposure may cause both respiratory and skin allergies. The current OSHA standard is 2 micrograms per cubic meter of air averaged over an 8-hour work shift. The National Institute for Occupational Safety and Health has set a recommended exposure limit (REL) for platinum of 1 mg/m3 over an 8-hour workday. As platinum is a catalyst in the manufacture of the silicone rubber and gel components of several types of medical implants (breast implants, joint replacement prosthetics, artificial lumbar discs, vascular access ports, etc.), the possibility that platinum could enter the body and cause adverse effects has merited study. The Food and Drug Administration and other institutions have reviewed the issue and found no evidence to suggest toxicity in vivo. Chemically unbound platinum has been identified by the FDA as a "fake cancer 'cure'". The misunderstanding arises because healthcare workers sometimes use the name of the metal, inappropriately, as a slang term for platinum-based chemotherapy medications such as cisplatin; these are platinum compounds, not the metal itself.
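The 8-hour averaging in these occupational limits is a simple time-weighted mean. A minimal sketch of that arithmetic follows; the sample durations and concentrations are invented for illustration and are not measured values.

```python
# Time-weighted average (TWA) over an 8-hour shift, compared with the 2 ug/m3
# OSHA limit quoted above. The sample data below are invented for illustration.
OSHA_PEL_UG_M3 = 2.0

samples = [          # (duration in hours, measured concentration in ug/m3)
    (3.0, 1.2),
    (2.0, 3.5),
    (3.0, 0.8),
]

twa = sum(hours * conc for hours, conc in samples) / 8.0
print(round(twa, 2), "ug/m3",
      "over the limit" if twa > OSHA_PEL_UG_M3 else "within the limit")
```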
Physical sciences
Chemical elements_2
null
23325
https://en.wikipedia.org/wiki/Polonium
Polonium
Polonium is a chemical element; it has symbol Po and atomic number 84. A rare and highly radioactive metal (although sometimes classified as a metalloid) with no stable isotopes, polonium is a chalcogen and chemically similar to selenium and tellurium, though its metallic character resembles that of its horizontal neighbors in the periodic table: thallium, lead, and bismuth. Due to the short half-life of all its isotopes, its natural occurrence is limited to tiny traces of the fleeting polonium-210 (with a half-life of 138 days) in uranium ores, as it is the penultimate daughter of natural uranium-238. Though longer-lived isotopes exist, such as the 124 years half-life of polonium-209, they are much more difficult to produce. Today, polonium is usually produced in milligram quantities by the neutron irradiation of bismuth. Due to its intense radioactivity, which results in the radiolysis of chemical bonds and radioactive self-heating, its chemistry has mostly been investigated on the trace scale only. Polonium was discovered on July 18, 1898 by Marie Skłodowska-Curie and Pierre Curie, when it was extracted from the uranium ore pitchblende and identified solely by its strong radioactivity: it was the first element to be discovered in this way. Polonium was named after Marie Skłodowska-Curie's homeland of Poland, which at the time was partitioned between three countries. Polonium has few applications, and those are related to its radioactivity: heaters in space probes, antistatic devices, sources of neutrons and alpha particles, and poison (e.g., poisoning of Alexander Litvinenko). It is extremely dangerous to humans. Characteristics 210Po is an alpha emitter that has a half-life of 138.4 days; it decays directly to its stable daughter isotope, 206Pb. A milligram (5 curies) of 210Po emits about as many alpha particles per second as 5 grams of 226Ra, which means it is 5,000 times more radioactive than radium. A few curies (1 curie equals 37 gigabecquerels, 1 Ci = 37 GBq) of 210Po emit a blue glow which is caused by ionisation of the surrounding air. About one in 100,000 alpha emissions causes an excitation in the nucleus which then results in the emission of a gamma ray with a maximum energy of 803 keV. Solid state form Polonium is a radioactive element that exists in two metallic allotropes. The alpha form is the only known example of a simple cubic crystal structure in a single atom basis at STP (space group Pmm, no. 221). The unit cell has an edge length of 335.2 picometers; the beta form is rhombohedral. The structure of polonium has been characterized by X-ray diffraction and electron diffraction. 210Po has the ability to become airborne with ease: if a sample is heated in air to , 50% of it is vaporized in 45 hours to form diatomic Po2 molecules, even though the melting point of polonium is and its boiling point is . More than one hypothesis exists for how polonium does this; one suggestion is that small clusters of polonium atoms are spalled off by the alpha decay. Chemistry The chemistry of polonium is similar to that of tellurium, although it also shows some similarities to its neighbor bismuth due to its metallic character. Polonium dissolves readily in dilute acids but is only slightly soluble in alkalis. Polonium solutions are first colored in pink by the Po2+ ions, but then rapidly become yellow because alpha radiation from polonium ionizes the solvent and converts Po2+ into Po4+. 
Because polonium also emits alpha particles as it decays, this process is accompanied by bubbling and by the emission of heat and light from glassware due to the absorbed alpha particles; as a result, polonium solutions are volatile and will evaporate within days unless sealed. At pH about 1, polonium ions are readily hydrolyzed and complexed by acids such as oxalic acid, citric acid, and tartaric acid. Compounds Polonium has no common compounds, and almost all of its compounds are synthetically created; more than 50 of those are known. The most stable class of polonium compounds are polonides, which are prepared by direct reaction of the two elements. Na2Po has the antifluorite structure; the polonides of Ca, Ba, Hg, Pb and the lanthanides form a NaCl lattice; BePo and CdPo have the wurtzite structure; and MgPo has the nickel arsenide structure. Most polonides decompose upon heating to about 600 °C, except for HgPo, which decomposes at ~300 °C, and the lanthanide polonides, which do not decompose but melt at temperatures above 1000 °C. For example, the polonide of praseodymium (PrPo) melts at 1250 °C, and that of thulium (TmPo) melts at 2200 °C. PbPo is one of the very few naturally occurring polonium compounds, as polonium alpha decays to form lead. Polonium hydride (PoH2) is a volatile liquid at room temperature prone to dissociation; it is thermally unstable. Water is the only other known hydrogen chalcogenide which is a liquid at room temperature; however, this is due to hydrogen bonding. The three oxides, PoO, PoO2 and PoO3, are the products of oxidation of polonium. Halides of the structure PoX2, PoX4 and PoF6 are known. They are soluble in the corresponding hydrogen halides, i.e., PoCl4 in HCl, PoBr4 in HBr and PoI4 in HI. Polonium dihalides are formed by direct reaction of the elements or by reduction of PoCl4 with SO2 and of PoBr4 with H2S at room temperature. Tetrahalides can be obtained by reacting polonium dioxide with HCl, HBr or HI. Other polonium compounds include a polonite, potassium polonite; various polonate solutions; and the acetate, bromate, carbonate, citrate, chromate, cyanide, formate, (II) or (IV) hydroxide, nitrate, selenate, selenite, monosulfide, sulfate, disulfate or sulfite salts. A limited organopolonium chemistry is known, mostly restricted to dialkyl and diaryl polonides (R2Po), triarylpolonium halides (Ar3PoX), and diarylpolonium dihalides (Ar2PoX2). Polonium also forms soluble compounds with some ligands, such as 2,3-butanediol and thiourea. Known binary compounds include:
Oxides: PoO, PoO2, PoO3
Hydrides: PoH2
Halides: PoX2 (except PoF2), PoX4, PoF6, PoBr2Cl2 (salmon pink)
Isotopes Polonium has 42 known isotopes, all of which are radioactive. They have atomic masses that range from 186 to 227 u. 210Po (half-life 138.376 days) is the most widely available and is manufactured via neutron capture by natural bismuth. It also naturally occurs as a trace in uranium ores, as it is the penultimate member of the decay chain of 238U. The longer-lived 209Po (half-life 124 years, the longest-lived of all polonium isotopes) and 208Po (half-life 2.9 years) can be manufactured through the alpha, proton, or deuteron bombardment of lead or bismuth in a cyclotron. History Tentatively called "radium F", polonium was discovered by Marie and Pierre Curie in July 1898, and was named after Marie Curie's native land of Poland. Poland at the time was under Russian, German, and Austro-Hungarian partition, and did not exist as an independent country.
It was Curie's hope that naming the element after her native land would publicize its lack of independence. Polonium may be the first element named to highlight a political controversy. This element was the first one discovered by the Curies while they were investigating the cause of pitchblende radioactivity. Pitchblende, after removal of the radioactive elements uranium and thorium, was more radioactive than the uranium and thorium combined. This spurred the Curies to search for additional radioactive elements. They first separated out polonium from pitchblende in July 1898, and five months later, also isolated radium. German scientist Willy Marckwald successfully isolated 3 milligrams of polonium in 1902, though at the time he believed it was a new element, which he dubbed "radio-tellurium", and it was not until 1905 that it was demonstrated to be the same as polonium. In the United States, polonium was produced as part of the Manhattan Project's Dayton Project during World War II. Polonium and beryllium were the key ingredients of the 'Urchin' initiator at the center of the bomb's spherical pit. 'Urchin' initiated the nuclear chain reaction at the moment of prompt-criticality to ensure that the weapon did not fizzle. 'Urchin' was used in early U.S. weapons; subsequent U.S. weapons utilized a pulse neutron generator for the same purpose. Much of the basic physics of polonium was classified until after the war. The fact that a polonium-beryllium (Po-Be) initiator was used in the gun-type nuclear weapons was classified until the 1960s. The Atomic Energy Commission and the Manhattan Project funded human experiments using polonium on five people at the University of Rochester between 1943 and 1947. The people were administered between of polonium to study its excretion. Occurrence and production Polonium is a very rare element in nature because of the short half-lives of all its isotopes. Nine isotopes, from 210 to 218 inclusive, occur in traces as decay products: 210Po, 214Po, and 218Po occur in the decay chain of 238U; 211Po and 215Po occur in the decay chain of 235U; 212Po and 216Po occur in the decay chain of 232Th; and 213Po and 217Po occur in the decay chain of 237Np. (No primordial 237Np survives, but traces of it are continuously regenerated through (n,2n) knockout reactions in natural 238U.) Of these, 210Po is the only isotope with a half-life longer than 3 minutes. Polonium can be found in uranium ores at about 0.1 mg per metric ton (1 part in 1010), which is approximately 0.2% of the abundance of radium. The amounts in the Earth's crust are not harmful. Polonium has been found in tobacco smoke from tobacco leaves grown with phosphate fertilizers. Because it is present in small concentrations, isolation of polonium from natural sources is a tedious process. The largest batch of the element ever extracted, performed in the first half of the 20th century, contained only (9 mg) of polonium-210 and was obtained by processing 37 tonnes of residues from radium production. Polonium is now usually obtained by irradiating bismuth with high-energy neutrons or protons. In 1934, an experiment showed that when natural 209Bi is bombarded with neutrons, 210Bi is created, which then decays to 210Po via beta-minus decay. By irradiating certain bismuth salts containing light element nuclei such as beryllium, a cascading (α,n) reaction can also be induced to produce 210Po in large quantities. The final purification is done pyrochemically followed by liquid-liquid extraction techniques. 
Polonium may now be made in milligram amounts in this procedure which uses high neutron fluxes found in nuclear reactors. Only about 100 grams are produced each year, practically all of it in Russia, making polonium exceedingly rare. This process can cause problems in lead-bismuth based liquid metal cooled nuclear reactors such as those used in the Soviet Navy's K-27. Measures must be taken in these reactors to deal with the unwanted possibility of 210Po being released from the coolant. The longer-lived isotopes of polonium, 208Po and 209Po, can be formed by proton or deuteron bombardment of bismuth using a cyclotron. Other more neutron-deficient and more unstable isotopes can be formed by the irradiation of platinum with carbon nuclei. Applications Polonium-based sources of alpha particles were produced in the former Soviet Union. Such sources were applied for measuring the thickness of industrial coatings via attenuation of alpha radiation. Because of intense alpha radiation, a one-gram sample of 210Po will spontaneously heat up to above generating about 140 watts of power. Therefore, 210Po is used as an atomic heat source to power radioisotope thermoelectric generators via thermoelectric materials. For example, 210Po heat sources were used in the Lunokhod 1 (1970) and Lunokhod 2 (1973) Moon rovers to keep their internal components warm during the lunar nights, as well as the Kosmos 84 and 90 satellites (1965). The alpha particles emitted by polonium can be converted to neutrons using beryllium oxide, at a rate of 93 neutrons per million alpha particles. Po-BeO mixtures are used as passive neutron sources with a gamma-ray-to-neutron production ratio of 1.13 ± 0.05, lower than for nuclear fission-based neutron sources. Examples of Po-BeO mixtures or alloys used as neutron sources are a neutron trigger or initiator for nuclear weapons and for inspections of oil wells. About 1500 sources of this type, with an individual activity of , had been used annually in the Soviet Union. Polonium was also part of brushes or more complex tools that eliminate static charges in photographic plates, textile mills, paper rolls, sheet plastics, and on substrates (such as automotive) prior to the application of coatings. Alpha particles emitted by polonium ionize air molecules that neutralize charges on the nearby surfaces. Some anti-static brushes contain up to of 210Po as a source of charged particles for neutralizing static electricity. In the US, devices with no more than of (sealed) 210Po per unit can be bought in any amount under a "general license", which means that a buyer need not be registered by any authorities. Polonium needs to be replaced in these devices nearly every year because of its short half-life; it is also highly radioactive and therefore has been mostly replaced by less dangerous beta particle sources. Tiny amounts of 210Po are sometimes used in the laboratory and for teaching purposes—typically of the order of , in the form of sealed sources, with the polonium deposited on a substrate or in a resin or polymer matrix—are often exempt from licensing by the NRC and similar authorities as they are not considered hazardous. Small amounts of 210Po are manufactured for sale to the public in the United States as "needle sources" for laboratory experimentation, and they are retailed by scientific supply companies. 
The polonium is a layer of plating which in turn is plated with a material such as gold, which allows the alpha radiation (used in experiments such as cloud chambers) to pass while preventing the polonium from being released and presenting a toxic hazard. Polonium spark plugs were marketed by Firestone from 1940 to 1953. While the amount of radiation from the plugs was minuscule and not a threat to the consumer, the benefits of such plugs quickly diminished after approximately a month because of polonium's short half-life and because buildup on the conductors would block the radiation that improved engine performance. (The premise behind the polonium spark plug, as well as Alfred Matthew Hubbard's prototype radium plug that preceded it, was that the radiation would improve ionization of the fuel in the cylinder and thus allow the motor to fire more quickly and efficiently.) Biology and toxicity Overview Polonium can be hazardous and has no biological role. By mass, polonium-210 is around 250,000 times more toxic than hydrogen cyanide (the for 210Po is less than 1 microgram for an average adult (see below) compared with about 250 milligrams for hydrogen cyanide). The main hazard is its intense radioactivity (as an alpha emitter), which makes it difficult to handle safely. Even in microgram amounts, handling 210Po is extremely dangerous, requiring specialized equipment (a negative pressure alpha glove box equipped with high-performance filters), adequate monitoring, and strict handling procedures to avoid any contamination. Alpha particles emitted by polonium will damage organic tissue easily if polonium is ingested, inhaled, or absorbed, although they do not penetrate the epidermis and hence are not hazardous as long as the alpha particles remain outside the body and do not come near the eyes, which are living tissue. Wearing chemically resistant and intact gloves is a mandatory precaution to avoid transcutaneous diffusion of polonium directly through the skin. Polonium delivered in concentrated nitric acid can easily diffuse through inadequate gloves (e.g., latex gloves) or the acid may damage the gloves. Polonium does not have toxic chemical properties. It has been reported that some microbes can methylate polonium by the action of methylcobalamin. This is similar to the way in which mercury, selenium, and tellurium are methylated in living things to create organometallic compounds. Studies investigating the metabolism of polonium-210 in rats have shown that only 0.002 to 0.009% of polonium-210 ingested is excreted as volatile polonium-210. Acute effects The median lethal dose (LD50) for acute radiation exposure is about 4.5 Sv. The committed effective dose equivalent 210Po is 0.51 μSv/Bq if ingested, and 2.5 μSv/Bq if inhaled. A fatal 4.5 Sv dose can be caused by ingesting , about 50 nanograms (ng), or inhaling , about 10 ng. One gram of 210Po could thus in theory poison 20 million people, of whom 10 million would die. The actual toxicity of 210Po is lower than these estimates because radiation exposure that is spread out over several weeks (the biological half-life of polonium in humans is 30 to 50 days) is somewhat less damaging than an instantaneous dose. It has been estimated that a median lethal dose of 210Po is , or 0.089 micrograms (μg), still an extremely small amount. For comparison, one grain of table salt is about 0.06 mg = 60 μg. 
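The nanogram-scale figures above follow from straightforward arithmetic once the specific activity of 210Po is known. The sketch below derives that specific activity from the 138.4-day half-life (which also reproduces the "about 5 curies per milligram" figure quoted earlier in this article) and then applies the dose coefficients given above; the 4.5 Sv fatal dose is the value stated in the text, and Avogadro's number is the only outside input.

```python
import math

# Specific activity of Po-210 from its half-life, then the activities and masses
# corresponding to a fatal 4.5 Sv committed dose, using the coefficients above.
N_A = 6.022e23
half_life_s = 138.4 * 86400
specific_activity = math.log(2) / half_life_s * N_A / 210    # Bq per gram

print(specific_activity * 1e-3 / 3.7e10)   # ~4.5 Ci per milligram ("about 5 curies")

for route, coeff_sv_per_bq in (("ingested", 0.51e-6), ("inhaled", 2.5e-6)):
    fatal_bq = 4.5 / coeff_sv_per_bq               # activity delivering 4.5 Sv
    fatal_ng = fatal_bq / specific_activity * 1e9  # corresponding mass in nanograms
    print(route, round(fatal_bq / 1e6, 1), "MBq ~", round(fatal_ng), "ng")

# ingested: ~8.8 MBq ~ 53 ng  (the "about 50 nanograms" above)
# inhaled:  ~1.8 MBq ~ 11 ng  (the "about 10 ng" above)
```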
Long term (chronic) effects In addition to the acute effects, radiation exposure (both internal and external) carries a long-term risk of death from cancer of 5–10% per Sv. The general population is exposed to small amounts of polonium as a radon daughter in indoor air; the isotopes 214Po and 218Po are thought to cause the majority of the estimated 15,000–22,000 lung cancer deaths in the US every year that have been attributed to indoor radon. Tobacco smoking causes additional exposure to polonium. Regulatory exposure limits and handling The maximum allowable body burden for ingested 210Po is only , which is equivalent to a particle massing only 6.8 picograms. The maximum permissible workplace concentration of airborne 210Po is about 10 Bq/m3 ( μCi/cm3). The target organs for polonium in humans are the spleen and liver. As the spleen (150 g) and the liver (1.3 to 3 kg) are much smaller than the rest of the body, if the polonium is concentrated in these vital organs, it is a greater threat to life than the dose which would be suffered (on average) by the whole body if it were spread evenly throughout the body, in the same way as caesium or tritium (as T2O). 210Po is widely used in industry, and readily available with little regulation or restriction. In the US, a tracking system run by the Nuclear Regulatory Commission was implemented in 2007 to register purchases of more than of polonium-210 (enough to make up 5,000 lethal doses). The IAEA "is said to be considering tighter regulations ... There is talk that it might tighten the polonium reporting requirement by a factor of 10, to ." As of 2013, this is still the only alpha emitting byproduct material available, as a NRC Exempt Quantity, which may be held without a radioactive material license. Polonium and its compounds must be handled with caution inside special alpha glove boxes, equipped with HEPA filters and continuously maintained under depression to prevent the radioactive materials from leaking out. Gloves made of natural rubber (latex) do not properly withstand chemical attacks, a.o. by concentrated nitric acid commonly used to keep polonium in solution while minimizing its sorption onto glass. They do not provide sufficient protection against the contamination from polonium (diffusion of 210Po solution through the intact latex membrane, or worse, direct contact through tiny holes and cracks produced when the latex begins to suffer degradation by acids or UV from ambient light); additional surgical gloves are necessary (inside the glovebox to protect the main gloves when handling strong acids and bases, and also from outside to protect the operator hands against 210Po contamination from diffusion, or direct contact through glove defects). Chemically more resistant, and also denser, neoprene and butyl gloves shield alpha particles emitted by polonium better than natural rubber. The use of natural rubber gloves is not recommended for handling 210Po solutions. Cases of poisoning Despite the element's highly hazardous properties, circumstances in which polonium poisoning can occur are rare. Its extreme scarcity in nature, the short half-lives of all its isotopes, the specialised facilities and equipment needed to obtain any significant quantity, and safety precautions against laboratory accidents all make harmful exposure events unlikely. As such, only a handful of cases of radiation poisoning specifically attributable to polonium exposure have been confirmed. 
20th century In response to concerns about the risks of occupational polonium exposure, quantities of 210Po were administered to five human volunteers at the University of Rochester from 1944 to 1947, in order to study its biological behaviour. These studies were funded by the Manhattan Project and the AEC. Four men and a woman participated, all suffering from terminal cancers, and ranged in age from their early thirties to early forties; all were chosen because experimenters wanted subjects who had not been exposed to polonium either through work or accident. 210Po was injected into four hospitalised patients, and orally given to a fifth. None of the administered doses (all ranging from 0.17 to 0.30 μCi kg−1) approached fatal quantities. The first documented death directly resulting from polonium poisoning occurred in the Soviet Union, on 10 July 1954. An unidentified 41-year-old man presented for medical treatment on 29 June, with severe vomiting and fever; the previous day, he had been working for five hours in an area in which, unknown to him, a capsule containing 210Po had depressurised and begun to disperse in aerosol form. Over this period, his total intake of airborne 210Po was estimated at 0.11 GBq (almost 25 times the estimated LD50 by inhalation of 4.5 MBq). Despite treatment, his condition continued to worsen and he died 13 days after the exposure event. From 1955 to 1957 the Windscale Piles had been releasing polonium-210. The Windscale fire brought the need for testing of the land downwind for radioactive material contamination, and this is how it was found. An estimate of 8.8 terabecquerels (240 Ci) of polonium-210 has been made. It has also been suggested that Irène Joliot-Curie's 1956 death from leukaemia was owed to the radiation effects of polonium. She was accidentally exposed in 1946 when a sealed capsule of the element exploded on her laboratory bench. As well, several deaths in Israel during 1957–1969 have been alleged to have resulted from 210Po exposure. A leak was discovered at a Weizmann Institute laboratory in 1957. Traces of 210Po were found on the hands of Professor Dror Sadeh, a physicist who researched radioactive materials. Medical tests indicated no harm, but the tests did not include bone marrow. Sadeh, one of his students, and two colleagues died from various cancers over the subsequent few years. The issue was investigated secretly, but there was never any formal admission of a connection between the leak and the deaths. The Church Rock uranium mill spill July 16, 1979 is reported to have released polonium-210. The report states animals had higher concentrations of lead-210, polonium-210 and radium-226 than the tissues from control animals. 21st century The cause of the 2006 death of Alexander Litvinenko, a former Russian FSB agent who had defected to the United Kingdom in 2001, was identified to be poisoning with a lethal dose of 210Po; it was subsequently determined that the 210Po had probably been deliberately administered to him by two Russian ex-security agents, Andrey Lugovoy and Dmitry Kovtun. As such, Litvinenko's death was the first (and, to date, only) confirmed instance in which polonium's extreme toxicity has been used with malicious intent. 
In 2011, an allegation surfaced that the death of Palestinian leader Yasser Arafat, who died on 11 November 2004 of uncertain causes, also resulted from deliberate polonium poisoning, and in July 2012, concentrations of 210Po many times more than normal were detected in Arafat's clothes and personal belongings by the Institut de Radiophysique in Lausanne, Switzerland. Even though Arafat's symptoms were acute gastroenteritis with diarrhoea and vomiting, the institute's spokesman said that despite the tests the symptoms described in Arafat's medical reports were not consistent with 210Po poisoning, and conclusions could not be drawn. In 2013 the team found levels of polonium in Arafat's ribs and pelvis 18 to 36 times the average, even though by this point in time the amount had diminished by a factor of 2 million. Forensic scientist Dave Barclay stated, "In my opinion, it is absolutely certain that the cause of his illness was polonium poisoning. ... What we have got is the smoking gun - the thing that caused his illness and was given to him with malice." Subsequently, French and Russian teams claimed that the elevated 210Po levels were not the result of deliberate poisoning, and did not cause Arafat's death. It has also been suspected that Russian businessman Roman Tsepov was killed with polonium. He had symptoms similar to Aleksander Litvinenko. Treatment It has been suggested that chelation agents, such as British anti-Lewisite (dimercaprol), can be used to decontaminate humans. In one experiment, rats were given a fatal dose of 1.45 MBq/kg (8.7 ng/kg) of 210Po; all untreated rats were dead after 44 days, but 90% of the rats treated with the chelation agent HOEtTTC remained alive for five months. Detection in biological specimens Polonium-210 may be quantified in biological specimens by alpha particle spectrometry to confirm a diagnosis of poisoning in hospitalized patients or to provide evidence in a medicolegal death investigation. The baseline urinary excretion of polonium-210 in healthy persons due to routine exposure to environmental sources is normally in a range of 5–15 mBq/day. Levels in excess of 30 mBq/day are suggestive of excessive exposure to the radionuclide. Occurrence in humans and the biosphere Polonium-210 is widespread in the biosphere, including in human tissues, because of its position in the uranium-238 decay chain. Natural uranium-238 in the Earth's crust decays through a series of solid radioactive intermediates including radium-226 to the radioactive noble gas radon-222, some of which, during its 3.8-day half-life, diffuses into the atmosphere. There it decays through several more steps to polonium-210, much of which, during its 138-day half-life, is washed back down to the Earth's surface, thus entering the biosphere, before finally decaying to stable lead-206. As early as the 1920s, French biologist Antoine Lacassagne, using polonium provided by his colleague Marie Curie, showed that the element has a specific pattern of uptake in rabbit tissues, with high concentrations, particularly in liver, kidney, and testes. More recent evidence suggests that this behavior results from polonium substituting for its congener sulfur, also in group 16 of the periodic table, in sulfur-containing amino-acids or related molecules and that similar patterns of distribution occur in human tissues. 
Polonium is indeed an element naturally present in all humans, contributing appreciably to natural background dose, with wide geographical and cultural variations, and particularly high levels in arctic residents, for example. Tobacco Polonium-210 in tobacco contributes to many of the cases of lung cancer worldwide. Most of this polonium is derived from lead-210 deposited on tobacco leaves from the atmosphere; the lead-210 is a product of radon-222 gas, much of which appears to originate from the decay of radium-226 from fertilizers applied to the tobacco soils. The presence of polonium in tobacco smoke has been known since the early 1960s. Some of the world's biggest tobacco firms researched ways to remove the substance—to no avail—over a 40-year period. The results were never published. Food Polonium is found in the food chain, especially in seafood.
Physical sciences
Chemical elements_2
null
23329
https://en.wikipedia.org/wiki/Pythonidae
Pythonidae
The Pythonidae, commonly known as pythons, are a family of nonvenomous snakes found in Africa, Asia, and Australia. Among its members are some of the largest snakes in the world. Ten genera and 39 species are currently recognized. Being naturally non-venomous, pythons must constrict their prey to induce cardiac arrest prior to consumption. Pythons will typically strike at and bite their prey of choice to gain hold of it; they then must use physical strength to constrict their prey, by coiling their muscular bodies around the animal, effectively suffocating it before swallowing whole. This is in stark contrast to venomous snakes such as the rattlesnake, for example, which delivers a swift, venomous bite but releases, waiting as the prey succumbs to envenomation before being consumed. Collectively, the pythons are well-documented and studied as constrictors, much like other non-venomous snakes, including the boas and even kingsnakes of the New World. Pythons are found in regions like sub-Saharan Africa, Southeast Asia, and Australia, with invasive populations of Burmese pythons in Everglades National Park, Florida and reticulated pythons in Puerto Rico. They are ambush predators that primarily kill prey by constriction, causing cardiac arrest. Pythons are oviparous, laying eggs that females incubate until they hatch. They possess premaxillary teeth, with the exception of adults in the Australian genus Aspidites. While many species are available in the exotic pet trade, caution is needed with larger species due to potential danger. The taxonomy of pythons has evolved, and they are now more closely related to sunbeam snakes and the Mexican burrowing python. Pythons are poached for their meat and skin, leading to a billion-dollar global trade. They can carry diseases, such as salmonella and leptospirosis, which can be transmitted to humans. Pythons are also used in African traditional medicine to treat ailments like rheumatism and mental illnesses. Their body parts, including blood and organs, are believed to have various healing properties. In some African cultures, pythons have significant roles in folklore and mythology, often symbolizing strength or having sacred status. Distribution and habitat Pythons are found in sub-Saharan Africa, Nepal, India, Sri Lanka, Bangladesh, Southeast Asia, southeastern Pakistan, southern China, the Philippines and Australia. Two known populations of invasive pythons exist in the Western Hemisphere. In the United States, an introduced population of Burmese pythons (Python bivittatus) has existed as an invasive species in Everglades National Park since the late 1990s. As of January 2023, estimates place the Floridian Burmese python population at around half a million. Local bounties are awarded and scientists study dead Burmese pythons to better understand breeding cycles and trends associated with rapid population explosion. The pythons readily prey on native North American fauna in Florida, including (but not limited to) American alligators, birds, bobcats, American bullfrogs, opossums, raccoons, river otters, white-tailed deer, and occasionally domestic pets and livestock. They are also known to prey on other invasive and introduced animals to Florida, such as the green iguana and nutria (coypu), though not at a rate as to lower their numbers rapidly or effectively. 
In Puerto Rico, a population of reticulated pythons (Malayopython reticulatus) is known to be currently established, with a remarkably high rate of albinism, suggesting establishment from domesticated pet stock. Records of reticulated pythons date back to as early as 2009, and the population was recognized as established by 2017. Conservation Many species have been hunted aggressively, which has greatly reduced the population of some, such as the Indian python (Python molurus) and the ball python (Python regius). Behavior Most members of this family are ambush predators, in that they typically remain motionless in a camouflaged position, and then strike suddenly at passing prey. Attacks on humans, although known to occur, are extremely rare. Feeding Pythons use their sharp, backward-curving teeth, four rows in the upper jaw and two in the lower, to grasp prey, which is then killed by constriction; after an animal has been grasped to restrain it, the python quickly wraps a number of coils around it. Death occurs primarily by cardiac arrest. Even the larger species, such as the reticulated python (Malayopython reticulatus), do not crush their prey to death. Larger specimens usually eat animals about the size of a domestic cat, but larger food items are known; some large Asian species have been known to take down adult deer, and the Central African rock python (Python sebae) has been known to eat antelope. The reticulated python is the only python species known to sometimes eat humans in its natural habitat in Sulawesi, Indonesia. All prey is swallowed whole, and may take several days or even weeks to fully digest. Reproduction Pythons are oviparous. This sets them apart from the family Boidae (boas), most of which bear live young (ovoviviparous). After they lay their eggs, females typically incubate them until they hatch. This is achieved by causing the muscles to "shiver", which raises the temperature of the body to a certain degree, and thus that of the eggs. Keeping the eggs at a constant temperature is essential for healthy embryo development. During the incubation period, females do not eat and leave only to bask to raise their body temperature. Captivity Most species in this family are available in the exotic pet trade. However, caution must be exercised with the larger species, as they can be dangerous; rare cases of large specimens killing their owners have been documented. Taxonomy Obsolete classification schemes—such as that of Boulenger (1890)—place pythons in Pythoninae, a subfamily of the boa family, Boidae. However, despite a superficial resemblance to boas, pythons are more closely related to the sunbeam snakes (Xenopeltis) and the Mexican burrowing python (Loxocemus). Genera Relationship with humans Poaching pythons Poaching of pythons is a lucrative business, with the global python skin trade estimated at US$1 billion as of 2012. Pythons are poached for their meat, mostly consumed locally as bushmeat, and their skin, which is sent to Europe and North America for the manufacture of accessories like bags, belts and shoes. The demand for poaching is increased because python farming is very expensive. In Cameroon bushmeat markets, the Central African rock python is commonly sold for meat and is very expensive at US$175. The poaching of pythons is illegal in Cameroon under the country's wildlife law, but there is little to no enforcement.
In Kenya, there has been an increase in snake farms to address the demand for snakeskin internationally, but there are health concerns for the workers, and danger due to poachers coming to the farms to hunt the snakes. Pythons and human health While pythons are not venomous, they do carry a host of potential health issues for humans. Pythons are disease vectors for multiple illnesses, including Salmonella, Chlamydia, Leptospirosis, Aeromoniasis, Campylobacteriosis, and Zygomycosis. These diseases may be transmitted to humans through excreted waste, open wounds, and contaminated water. A 2013 study found that Reptile-Associated Salmonella (RAS) is most common in young children who had been in contact with invasive pythons, with symptoms including "sepsis, meningitis, and bone and joint infection". Pythons are also integrated into some aspects of African health and belief use, often with the added risk of contacting zoonotic diseases. Python bodies and blood are used for African traditional medicines and other belief uses as well, one in-depth study of all animals used by the Yorubas of Nigeria for traditional medicine found that the African Python is used to cure rheumatism, snake poison, appeasing witches, and accident prevention. Python habitats, diets, and invasion into new areas also impact human health and prosperity. A University of Florida Institute of Food and Agriculture Sciences study found that the Burmese python, as an invasive species, enters new habitats and eats an increasing number of mammals, leaving limited species for mosquitoes to bite, forcing them to bite disease-carrying hispid cotton rats and then infect humans with the Everglades virus, a dangerous infection that is carried by very few animals. While direct human-python interactions can be potentially dangerous, the risk of zoonotic diseases is always a concern, whether considering medical and belief use in Nigeria or when addressing invasive species impacts in Florida. In 2022, a woman who lived near a lake area in south-eastern New South Wales state, Australia, was found to be infested with the Ophidascaris robertsi roundworm which is common in carpet pythons - non-venomous snakes found across much of Australia. Traditional use Skin Python skin has traditionally been used as the attire of choice for medicine men and healers. Typically, South African Zulu traditional healers will use python skin in ceremonial regalia. Pythons are viewed by the Zulu tradition to be a sign of power. This is likely why the skin is worn by traditional healers. Healers are seen as all-powerful since they have a wealth of knowledge, as well as accessibility to the ancestors. Fat Typically, species are attributed to healing various ailments based on their likeliness to a specific bodily attribute. For example, in many cultures, the python is seen as a strong and powerful creature. As a result, pythons are often prescribed as a method of increasing strength. It is very common for the body fat of pythons to be used to treat a large variation of issues such as joint pain, rheumatic pain, toothache and eye sight. Additionally, python fat has been used to treat those suffering from mental illnesses like psychosis. Their calm nature is thought to be of use to treat combative patients. The fat of the python is rubbed onto the body part that is in pain. To improve mental illnesses, it is often rubbed on the temple. 
The existence of evidence for genuine anti-inflammatory and anti-microbial properties in the refined 'snake oil' is ironic with respect to the expression "snake oil salesman". Blood Python blood plays another important role in traditional medicine. Many believe that python blood prevents fatty acids, triglycerides and lipids from accumulating to critically high levels. Additionally, their blood has been used as a source of iron for people who are anemic, which helps reduce fatigue. The sources were not specific on the way this blood is administered; however, due to the use of snake blood in traditional treatments in other parts of the world for similar causes, it is likely that the patient drinks the blood in order to feel the effects. Feces The Sukuma tribe of Tanzania have been known to use python feces in order to treat back pain. The feces are frequently mixed with a little water, placed on the back, and left for two to three days. Organs In Nigeria, the gallbladder and liver of a python are used to treat poison or bites from other snakes. The python head has been used to "appease witches". Many traditional African cultures believe that they can be cursed by witches. In order to reverse spells and bad luck, traditional doctors will prescribe python heads. Folklore In northwestern Ghana, people see pythons as a savior and have taboos to prevent the snake from being harmed or eaten. Their folklore states that this is because a python once helped them flee from their enemies by transforming into a log to allow them to cross a river. In Botswana, San ritual practices surrounding pythons date back 70,000 years. In San mythology the python is a sacred creature that is highly respected. They believe that mankind was made by a python that moved in between hills to create stream beds. In Benin, Vodun practitioners believe that pythons symbolize strength and the spirit of Dagbe ("to do good" in Yoruba). Annually, people sacrifice animals and proclaim their sins to pythons that are kept inside temples.
Biology and health sciences
Snakes
Animals
23335
https://en.wikipedia.org/wiki/Parsec
Parsec
The parsec (symbol: pc) is a unit of length used to measure the large distances to astronomical objects outside the Solar System, approximately equal to or (AU), i.e. . The parsec unit is obtained by the use of parallax and trigonometry, and is defined as the distance at which 1 AU subtends an angle of one arcsecond ( of a degree). The nearest star, Proxima Centauri, is about from the Sun: from that distance, the gap between the Earth and the Sun spans slightly less than one arcsecond. Most stars visible to the naked eye are within a few hundred parsecs of the Sun, with the most distant at a few thousand parsecs, and the Andromeda Galaxy at over 700,000 parsecs. The word parsec is a shortened form of a distance corresponding to a parallax of one second, coined by the British astronomer Herbert Hall Turner in 1913. The unit was introduced to simplify the calculation of astronomical distances from raw observational data. Partly for this reason, it is the unit preferred in astronomy and astrophysics, though in popular science texts and common usage the light-year remains prominent. Although parsecs are used for the shorter distances within the Milky Way, multiples of parsecs are required for the larger scales in the universe, including kiloparsecs (kpc) for the more distant objects within and around the Milky Way, megaparsecs (Mpc) for mid-distance galaxies, and gigaparsecs (Gpc) for many quasars and the most distant galaxies. In August 2015, the International Astronomical Union (IAU) passed Resolution B2 which, as part of the definition of a standardized absolute and apparent bolometric magnitude scale, mentioned an existing explicit definition of the parsec as exactly  au, or exactly  metres, given the IAU 2012 exact definition of the astronomical unit in metres. This corresponds to the small-angle definition of the parsec found in many astronomical references. History and derivation Imagining an elongated right triangle in space, where the shorter leg measures one au (astronomical unit, the average Earth–Sun distance) and the subtended angle of the vertex opposite that leg measures one arcsecond ( of a degree), the parsec is defined as the length of the adjacent leg. The value of a parsec can be derived through the rules of trigonometry. The distance from Earth whereupon the radius of its solar orbit subtends one arcsecond. One of the oldest methods used by astronomers to calculate the distance to a star is to record the difference in angle between two measurements of the position of the star in the sky. The first measurement is taken from the Earth on one side of the Sun, and the second is taken approximately half a year later, when the Earth is on the opposite side of the Sun. The distance between the two positions of the Earth when the two measurements were taken is twice the distance between the Earth and the Sun. The difference in angle between the two measurements is twice the parallax angle, which is formed by lines from the Sun and Earth to the star at the distant vertex. Then the distance to the star could be calculated using trigonometry. The first successful published direct measurements of an object at interstellar distances were undertaken by German astronomer Friedrich Wilhelm Bessel in 1838, who used this approach to calculate the 3.5-parsec distance of 61 Cygni. The parallax of a star is defined as half of the angular distance that a star appears to move relative to the celestial sphere as Earth orbits the Sun. 
Equivalently, it is the subtended angle, from that star's perspective, of the semimajor axis of the Earth's orbit. Substituting the star's parallax for the one arcsecond angle in the imaginary right triangle, the long leg of the triangle will measure the distance from the Sun to the star. A parsec can be defined as the length of the right triangle side adjacent to the vertex occupied by a star whose parallax angle is one arcsecond. The use of the parsec as a unit of distance follows naturally from Bessel's method, because the distance in parsecs can be computed simply as the reciprocal of the parallax angle in arcseconds (i.e.: if the parallax angle is 1 arcsecond, the object is 1 pc from the Sun; if the parallax angle is 0.5 arcseconds, the object is 2 pc away; etc.). No trigonometric functions are required in this relationship because the very small angles involved mean that the approximate solution of the skinny triangle can be applied. Though it may have been used before, the term parsec was first mentioned in an astronomical publication in 1913. Astronomer Royal Frank Watson Dyson expressed his concern for the need of a name for that unit of distance. He proposed the name astron, but mentioned that Carl Charlier had suggested siriometer and Herbert Hall Turner had proposed parsec. It was Turner's proposal that stuck. Calculating the value of a parsec By the 2015 definition, of arc length subtends an angle of at the center of the circle of radius . That is, 1 pc = 1 au/tan() ≈ 206,264.8 au by definition. Converting from degree/minute/second units to radians, , and (exact by the 2012 definition of the au) Therefore, (exact by the 2015 definition) Therefore, (to the nearest metre). Approximately, In the diagram above (not to scale), S represents the Sun, and E the Earth at one point in its orbit (such as to form a right angle at S). Thus the distance ES is one astronomical unit (au). The angle SDE is one arcsecond ( of a degree) so by definition D is a point in space at a distance of one parsec from the Sun. Through trigonometry, the distance SD is calculated as follows: Because the astronomical unit is defined to be , the following can be calculated: Therefore, if ≈ , Then ≈ A corollary states that a parsec is also the distance from which a disc that is one au in diameter must be viewed for it to have an angular diameter of one arcsecond (by placing the observer at D and a disc spanning ES). Mathematically, to calculate distance, given obtained angular measurements from instruments in arcseconds, the formula would be: where θ is the measured angle in arcseconds, Distanceearth-sun is a constant ( or ). The calculated stellar distance will be in the same measurement unit as used in Distanceearth-sun (e.g. if Distanceearth-sun = , unit for Distancestar is in astronomical units; if Distanceearth-sun = , unit for Distancestar is in light-years). The length of the parsec used in IAU 2015 Resolution B2 (exactly astronomical units) corresponds exactly to that derived using the small-angle calculation. This differs from the classic inverse-tangent definition by about , i.e.: only after the 11th significant figure. As the astronomical unit was defined by the IAU (2012) as an exact length in metres, so now the parsec corresponds to an exact length in metres. To the nearest meter, the small-angle parsec corresponds to . 
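The relationships in this section reduce to a few lines of arithmetic. The sketch below takes the IAU 2012 value of the astronomical unit (149,597,870,700 m) as an assumed input, since the exact figures are not reproduced in the text above, and applies the reciprocal rule for converting a parallax angle into a distance; the parallax used for the 61 Cygni check is likewise an assumed value, not one quoted here.

```python
import math

AU_M = 149_597_870_700                 # metres per astronomical unit (IAU 2012 definition, assumed)
ARCSEC_RAD = math.pi / (180 * 3600)    # one arcsecond in radians

# Small-angle definition used by IAU 2015 Resolution B2: 1 pc = (648000/pi) au
PARSEC_AU = 1 / ARCSEC_RAD             # ~206264.8 au
PARSEC_M = PARSEC_AU * AU_M            # ~3.0857e16 m

def distance_pc(parallax_arcsec: float) -> float:
    """Distance in parsecs is the reciprocal of the parallax angle in arcseconds."""
    return 1.0 / parallax_arcsec

print(round(PARSEC_AU, 1))     # 206264.8 au
print(f"{PARSEC_M:.6e} m")     # ~3.085678e+16 m, about 3.26 light-years
print(distance_pc(0.287))      # ~3.5 pc, consistent with the 61 Cygni distance
                               # quoted above (parallax value assumed here)
```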
Usage and measurement The parallax method is the fundamental calibration step for distance determination in astrophysics; however, the accuracy of ground-based telescope measurements of parallax angle is limited to about , and thus to stars no more than distant. This is because the Earth's atmosphere limits the sharpness of a star's image. Space-based telescopes are not limited by this effect and can accurately measure distances to objects beyond the limit of ground-based observations. Between 1989 and 1993, the Hipparcos satellite, launched by the European Space Agency (ESA), measured parallaxes for about stars with an astrometric precision of about , and obtained accurate measurements for stellar distances of stars up to away. ESA's Gaia satellite, which launched on 19 December 2013, is intended to measure one billion stellar distances to within s, producing errors of 10% in measurements as far as the Galactic Centre, about away in the constellation of Sagittarius. Distances in parsecs Distances less than a parsec Distances expressed in fractions of a parsec usually involve objects within a single star system. So, for example: One astronomical unit (au), the distance from the Sun to the Earth, is just under . The most distant space probe, Voyager 1, was from Earth . Voyager 1 took to cover that distance. The Oort cloud is estimated to be approximately in diameter Parsecs and kiloparsecs Distances expressed in parsecs (pc) include distances between nearby stars, such as those in the same spiral arm or globular cluster. A distance of is denoted by the kiloparsec (kpc). Astronomers typically use kiloparsecs to express distances between parts of a galaxy or within groups of galaxies. So, for example : Proxima Centauri, the nearest known star to Earth other than the Sun, is about away by direct parallax measurement. The distance to the open cluster Pleiades is () from us per Hipparcos parallax measurement. The centre of the Milky Way is more than from the Earth and the Milky Way is roughly across. ESO 383-76, one of the largest known galaxies, has a diameter of . The Andromeda Galaxy (M31) is about away from the Earth. Megaparsecs and gigaparsecs Astronomers typically express the distances between neighbouring galaxies and galaxy clusters in megaparsecs (Mpc). A megaparsec is one million parsecs, or about 3,260,000 light years. Sometimes, galactic distances are given in units of Mpc/h (as in "50/h Mpc", also written ""). h is a constant (the "dimensionless Hubble constant") in the range reflecting the uncertainty in the value of the Hubble constant H for the rate of expansion of the universe: . The Hubble constant becomes relevant when converting an observed redshift z into a distance d using the formula . One gigaparsec (Gpc) is one billion parsecs — one of the largest units of length commonly used. One gigaparsec is about , or roughly of the distance to the horizon of the observable universe (dictated by the cosmic microwave background radiation). Astronomers typically use gigaparsecs to express the sizes of large-scale structures such as the size of, and distance to, the CfA2 Great Wall; the distances between galaxy clusters; and the distance to quasars. For example: The Andromeda Galaxy is about from the Earth. The nearest large galaxy cluster, the Virgo Cluster, is about from the Earth. The galaxy RXJ1242-11, observed to have a supermassive black hole core similar to the Milky Way's, is about from the Earth. 
The galaxy filament Hercules–Corona Borealis Great Wall, currently the largest known structure in the universe, is about across. The particle horizon (the boundary of the observable universe) has a radius of about . Volume units To determine the number of stars in the Milky Way, volumes in cubic kiloparsecs (kpc3) are selected in various directions. All the stars in these volumes are counted and the total number of stars statistically determined. The number of globular clusters, dust clouds, and interstellar gas is determined in a similar fashion. To determine the number of galaxies in superclusters, volumes in cubic megaparsecs (Mpc3) are selected. All the galaxies in these volumes are classified and tallied. The total number of galaxies can then be determined statistically. The huge Boötes void is measured in cubic megaparsecs. In physical cosmology, volumes of cubic gigaparsecs (Gpc3) are selected to determine the distribution of matter in the visible universe and to determine the number of galaxies and quasars. The Sun is currently the only star in its cubic parsec, (pc3) but in globular clusters the stellar density could be from . The observational volume of gravitational wave interferometers (e.g., LIGO, Virgo) is stated in terms of cubic megaparsecs (Mpc3) and is essentially the value of the effective distance cubed.
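The Mpc/h notation and the redshift-to-distance conversion mentioned in the previous section, as well as the cubic-megaparsec and cubic-gigaparsec survey volumes described here, come down to a few lines of arithmetic. The sketch below assumes the low-redshift approximation d ≈ cz/H0 and an illustrative H0 = 70 km/s/Mpc (h = 0.7); neither value, nor the example redshift, is given in the text.

```python
C_KM_S = 299_792.458      # speed of light in km/s

def distance_mpc(z: float, h0_km_s_mpc: float = 70.0) -> float:
    """Low-redshift approximation d ~ c*z/H0, returned in megaparsecs."""
    return C_KM_S * z / h0_km_s_mpc

def mpc_over_h(value: float, h: float = 0.7) -> float:
    """Convert a figure quoted as 'value/h Mpc' into plain megaparsecs."""
    return value / h

d = distance_mpc(0.023)            # an illustrative redshift, not from the text
print(round(d), "Mpc")             # ~99 Mpc
print(round(mpc_over_h(50.0), 1))  # '50/h Mpc' is ~71.4 Mpc for h = 0.7
print(f"{(d / 1000) ** 3:.1e} Gpc^3 in a cube of that side length")
```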
Physical sciences
Length and distance
null
23336
https://en.wikipedia.org/wiki/Parchment
Parchment
Parchment is a writing material made from specially prepared untanned skins of animals—primarily sheep, calves, and goats. It has been used as a writing medium for over two millennia. By AD 400, most literature intended for preservation began to be transferred from papyrus to parchment. Vellum is a finer quality parchment made from the skins of young animals such as lambs and young calves. The generic term animal membrane is sometimes used by libraries and museums that wish to avoid distinguishing between parchment and vellum. Etymology and origin The word is derived from the Koinē Greek city name, Pergamum (or Pergamon, modern Bergama) in western Anatolia, where parchment was supposedly first developed around the second century BCE, probably as a substitute for papyrus. Parchment and vellum Today the term parchment is often used in non-technical contexts to refer to any animal skin, particularly goat, sheep or cow, that has been scraped or dried under tension. The term originally referred only to the skin of sheep and, occasionally, goats. The equivalent material made from calfskin, which was of finer quality, was known as vellum (from the Old French or , and ultimately from the Latin , meaning a calf); while the finest of all was uterine vellum, taken from a calf foetus or stillborn calf. Some authorities have sought to observe these distinctions strictly: for example, lexicographer Samuel Johnson in 1755, and master calligrapher Edward Johnston in 1906. However, when old books and documents are encountered it may be difficult, without scientific analysis, to determine the precise animal origin of a skin, either in terms of its species or in terms of the animal's age. In practice, therefore, there has long been considerable blurring of the boundaries between the different terms. In 1519, William Horman wrote in his Vulgaria: "That that we upon, and is made of , is called , , , ." In Shakespeare's Hamlet (written 1599–1602) the following exchange occurs: Lee Ustick, writing in 1936, commented: It is for these reasons that many modern conservators, librarians and archivists prefer to use either the broader term parchment, or the neutral term animal membrane. History The word parchment evolved (via the Latin and the French ) from the name of the city of Pergamon, which was a thriving center of parchment production during the Hellenistic period. The city so dominated the trade that a legend later arose which said that parchment had been invented in Pergamon to replace the use of papyrus which had become monopolized by the rival city of Alexandria. This account, originating in the writings of Pliny the Elder (Natural History, Book XIII, 69–70), is almost assuredly false because parchment had been in use in Anatolia and elsewhere long before the rise of Pergamon. Herodotus mentions writing on skins as common in his time, the 5th century BC; and in his Histories (v.58) he states that the Ionians of Asia Minor had been accustomed to give the name of skins () to books; this word was adapted by Hellenized Jews to describe scrolls. Writing on prepared animal skins had a long history in other cultures outside of the Greeks as well. David Diringer noted that "the first mention of Egyptian documents written on leather goes back to the Fourth Dynasty (c. 2550–2450 BC), but the earliest of such documents extant are: a fragmentary roll of leather of the Sixth Dynasty (c. 24th century BC), unrolled by Dr. H. Ibscher, and preserved in the Cairo Museum; a roll of the Twelfth Dynasty (c. 
1990–1777 BC) now in Berlin; the mathematical text now in the British Museum (MS. 10250); and a document of the reign of Ramses II (early thirteenth century BC)." Civilizations such as the Assyrians and the Babylonians most commonly impressed their cuneiform on clay tablets, but they also wrote on parchment from the 6th century BC onward. By the fourth century AD, in cultures that traditionally used papyrus for writing, parchment began to become the new standard for use in manufacturing important books, and most works intended for preservation were eventually moved from papyrus to parchment. In the later Middle Ages, especially the 15th century, parchment was largely replaced by paper for most uses except luxury manuscripts, some of which were also on paper. New techniques in paper milling allowed it to be much cheaper than parchment; it was made of textile rags and of very high quality. Following the arrival of printing in the later fifteenth century AD, the supply of animal skins for parchment could not keep up with the demands of printers. There was a short period during the introduction of printing where parchment and paper were used at the same time, with parchment (in fact vellum) the more expensive luxury option, preferred by rich and conservative customers. Although most copies of the Gutenberg Bible are on paper, some were printed on parchment; 12 of the 48 surviving copies are on parchment, most of them incomplete. In 1490, Johannes Trithemius preferred the older methods, because "handwriting placed on parchment will be able to endure a thousand years. But how long will printing last, which is dependent on paper? For if ... it lasts for two hundred years that is a long time." In fact, high-quality paper from this period has survived 500 years or more very well, if kept in reasonable library conditions. Modern use Parchment (or vellum) continues to be used for ritual or legal reasons. Rabbinic literature traditionally maintains the practice of employing parchment made of animal hides for the writing of ritual objects, as detailed below. In the United Kingdom, Acts of Parliament are still printed on vellum. The heyday of parchment use was during the medieval period, but there has been a growing revival of its use among artists since the late 20th century. Although parchment never stopped being used (primarily for governmental documents and diplomas), it had ceased to be a primary choice for artists' supports by the end of the 15th-century Renaissance. This was partly due to its expense and partly due to its unusual working properties. Parchment consists mostly of collagen. When the water in paint media touches the parchment's surface, the collagen melts slightly, forming a raised bed for the paint, a quality highly prized by some artists. Parchment is also highly sensitive to its environment and to changes in humidity, which can cause buckling. Books with parchment pages were bound with strong wooden boards and clamped tightly shut by metal (often brass) clasps or leather straps; this acted to keep the pages pressed flat despite humidity changes. Such metal fittings continued to be found on books as decorative features even after the use of paper made them unnecessary. Some contemporary artists prize the changeability of parchment, noting that the material seems alive and like an active participant in making artwork. To support this revival of use by artists, a revival in the art of preparing individual skins is also underway. 
Hand-prepared skins are usually preferred by artists because they are more uniform in surface and have fewer oily spots – which can cause long-term cracking of paint – than mass-produced parchment, which is usually made for lamp shades, furniture, or other interior design purposes. Manufacture Parchment is prepared from pelt – i.e. wet, unhaired, and limed skin – by drying at ordinary temperatures under tension, most commonly on a wooden frame known as a stretching frame. Skinning, soaking, and dehairing After a carcass is skinned, the hide is soaked in water for about a day. This removes blood and grime and prepares the skin for a dehairing liquor. The dehairing liquor was originally made of rotted, or fermented, vegetable matter, like beer or other liquors, but by the Middle Ages a dehairing bath included lime. Today, the lime solution is occasionally sharpened by the use of sodium sulfide. The liquor bath would have been in wooden or stone vats and the hides stirred with a long wooden pole to avoid human contact with the alkaline solution. Sometimes the skins would stay in the dehairing bath for eight or more days depending how concentrated and how warm the solution was kept – dehairing could take up to twice as long in winter. The vat was stirred two or three times a day to ensure the solution's deep and uniform penetration. Replacing the lime water bath also sped the process up. However, if the skins were soaked in the liquor too long, they would be weakened and not able to stand the stretching required for parchment. Stretching After soaking in water to make the skins workable, the skins were placed on a stretching frame. A simple frame with nails would work well in stretching the pelts. The skins could be attached by wrapping small, smooth rocks in the skins with rope or leather strips. Both sides would be left open to the air so they could be scraped with a sharp, semi-lunar knife to remove the last of the hair and get the skin to the right thickness. The skins, which were made almost entirely of collagen, would form a natural glue while drying and once taken off the frame they would keep their form. The stretching aligned the fibres to be more nearly parallel to the surface. Treatments To make the parchment more aesthetically pleasing or more suitable for the scribes, special treatments were used. According to Reed there were a variety of these treatments. Rubbing pumice powder into the flesh side of parchment while it was still wet on the frame was used to make it smooth and to modify the surface to enable inks to penetrate more deeply. Powders and pastes of calcium compounds were also used to help remove grease so the ink would not run. To make the parchment smooth and white, thin pastes (starchgrain or staunchgrain) of lime, flour, egg whites and milk were rubbed into the skins. Meliora di Curci in her paper, "The History and Technology of Parchment Making", notes that parchment was not always white. "Cennini, a 15th-century craftsman provides recipes to tint parchment a variety of colours including purple, indigo, green, red and peach." The Early medieval Codex Argenteus and Codex Vercellensis, the Stockholm Codex Aureus and the Codex Brixianus give a range of luxuriously produced manuscripts all on purple vellum, in imitation of Byzantine examples, like the Rossano Gospels, Sinope Gospels and the Vienna Genesis, which at least at one time are believed to have been reserved for Imperial commissions. 
Many techniques for parchment repair exist, to restore creased, torn, or incomplete parchments. Reuse Between the seventh and the ninth centuries, many earlier parchment manuscripts were scrubbed and scoured to be ready for rewriting, and often the earlier writing can still be read. These recycled parchments are known as palimpsests. Jewish parchment The way in which parchment was processed (from hide to parchment) has undergone a tremendous evolution based on time and location. Parchment and vellum are not the sole methods of preparing animal skins for writing. In the Babylonian Talmud (Bava Batra 14B), Moses is described as having written the first Torah Scroll on the unsplit cow-hide called gevil. Parchment is still the only medium used by traditional religious Jews for Torah scrolls or tefilin and mezuzahs, and is produced by large companies in Israel. This usage is Sinaitic in origin, with special designations for different types of parchment such as gevil and klaf. For those uses, only hides of kosher animals are permitted. Since there are many requirements for it being fit for the religious use, the liming is usually processed under supervision of a qualified Rabbi. Additional uses of the term In some universities, the word parchment is still used to refer to the certificate (scroll) presented at graduation ceremonies, even though the modern document is printed on paper or thin card; although doctoral graduates may be given the option of having their scroll written by a calligrapher on vellum. Heriot-Watt University still uses goatskin parchment for their degrees. Plant-based parchment Vegetable (paper) parchment is made by passing a waterleaf (an unsized paper like blotters) made of pulp fibers into sulfuric acid. The sulfuric acid hydrolyses and solubilises the main natural organic polymer, cellulose, present in the pulp wood fibers. The paper web is then washed in water, which stops the hydrolysis of the cellulose and causes a kind of cellulose coating to form on the waterleaf. The final paper is dried. This coating is a natural non-porous cement, that gives to the vegetable parchment paper its resistance to grease and its semi-translucency. Other processes can be used to obtain grease-resistant paper, such as waxing the paper or using fluorine-based chemicals. Highly beating the fibers gives an even more translucent paper with the same grease resistance. Silicone and other coatings may also be applied to the parchment. A silicone-coating treatment produces a cross-linked material with high density, stability and heat resistance and low surface tension which imparts good anti-stick or release properties. Chromium salts can also be used to impart moderate anti-stick properties. Parchment craft Historians believe that parchment craft originated as an art form in Europe during the fifteenth or sixteenth centuries. Parchment craft at that time occurred principally in Catholic communities, where crafts persons created lace-like items such as devotional pictures and communion cards. The craft developed over time, with new techniques and refinements being added. Until the sixteenth century, parchment craft was a European art form. However, missionaries and other settlers relocated to South America, taking parchment craft with them. As before, the craft appeared largely among the Catholic communities. Often, young girls receiving their first communion received gifts of handmade parchment crafts. 
Although the invention of the printing press led to a reduced interest in hand made cards and items, by the eighteenth century, people were regaining interest in detailed handwork. Parchment cards became larger in size and crafters began adding wavy borders and perforations. In the nineteenth century, influenced by French romanticism, parchment crafters began adding floral themes and cherubs and hand embossing. Parchment craft today involves various techniques, including tracing a pattern with white or colored ink, embossing to create a raised effect, stippling, perforating, coloring and cutting. Parchment craft appears in hand made cards, as scrapbook embellishments, as bookmarks, lampshades, decorative small boxes, wall hangings and more. Technical analysis The radiocarbon dating techniques that are used on papyrus can be applied to parchment as well. They do not date the age of the writing but the preparation of the parchment itself. While it is feasibly possible also to radiocarbon date certain kinds of ink, it is extremely difficult to do due to the fact that they are generally present on the text only in trace amounts, and it is hard to get a carbon sample of them without the carbon in the parchment contaminating it. An article published in 2009 considered the possibilities of tracing the origin of medieval parchment manuscripts and codices through DNA analysis. The methodology would employ polymerase chain reaction to replicate a small DNA sample to a size sufficiently large for testing. The article discusses the use of DNA testing to estimate the age of the calf at the creation of the vellum parchment. A 2006 study revealed the genetic signature of several Greek manuscripts to have "goat-related sequences". Utilizing these techniques we may be able to determine whether related library materials were made from genetically related animals (perhaps from the same herd) and locate the vellum's origination. In 2020, it was reported that the species of several of the animals used to provide parchment for the Dead Sea Scrolls could be identified, and the relationship between skins obtained from the same animal inferred. The breakthrough was made possible by the use of whole genome sequencing.
Technology
Material and chemical
null
23337
https://en.wikipedia.org/wiki/Phobia
Phobia
A phobia is an anxiety disorder, defined by an irrational, unrealistic, persistent and excessive fear of an object or situation. Phobias typically result in a rapid onset of fear and are usually present for more than six months. Those affected go to great lengths to avoid the situation or object, to a degree greater than the actual danger posed. If the object or situation cannot be avoided, they experience significant distress. Other symptoms can include fainting, which may occur in blood or injury phobia, and panic attacks, often found in agoraphobia and emetophobia. Around 75% of those with phobias have multiple phobias. Phobias can be divided into specific phobias, social anxiety disorder, and agoraphobia. Specific phobias are further divided to include certain animals, natural environment, blood or injury, and particular situations. The most common are fear of spiders, fear of snakes, and fear of heights. Specific phobias may be caused by a negative experience with the object or situation in early childhood to early adulthood. Social phobia is when a person fears a situation due to worries about others judging them. Agoraphobia is a fear of a situation due to perceived difficulty or inability to escape. It is recommended that specific phobias be treated with exposure therapy, in which the person is introduced to the situation or object in question until the fear resolves. Medications are not helpful for specific phobias. Social phobia and agoraphobia may be treated with counseling, medications, or a combination of both. Medications used include antidepressants, benzodiazepines, or beta-blockers. Specific phobias affect about 6–8% of people in the Western world and 2–4% in Asia, Africa, and Latin America in a given year. Social phobia affects about 7% of people in the United States and 0.5–2.5% of people in the rest of the world. Agoraphobia affects about 1.7% of people. Women are affected by phobias about twice as often as men. The typical onset of a phobia is around 10–17, and rates are lower with increasing age. Those with phobias are more likely to attempt suicide. Classification Fear is an emotional response to a current perceived danger. This differs from anxiety which is a response in preparation of a future threat. Fear and anxiety often can overlap but this distinction can help identify subtle differences between disorders, as well as differentiate between a response that would be expected given a person's developmental stage and culture. ICD-11 The International Classification of Diseases (11th version: ICD-11) is a globally used diagnostic tool for epidemiology, health management and clinical purposes maintained by the World Health Organization (WHO). The ICD classifies phobic disorders under the category of mental, behavioural or neurodevelopmental disorders. The ICD-10 differentiates between Phobic anxiety disorders, such as Agoraphobia, and Other anxiety disorders, such as Generalized anxiety disorder. The ICD-11 merges both groups together as Anxiety or fear-related disorders. DSM-5 Most phobias are classified into 3 categories. According to the Diagnostic and Statistical Manual of Mental Disorders, Fifth Edition (DSM-5), such phobias are considered subtypes of anxiety disorder. The categories are: Specific phobias: Fear of particular objects or situations that results in anxiety and avoidance. May lead to panic attacks if exposed to feared stimulus or in anticipation of encounter. 
A specific phobia may be further subdivided into five categories: animal, natural environment, situational, blood-injection-injury, and other. Agoraphobia: a generalized fear of leaving home or a small familiar 'safe' area and of possible panic attacks that might follow. Various specific phobias may also cause it, such as fear of open spaces, social embarrassment (social agoraphobia), fear of contamination (fear of germs, possibly complicated by obsessive–compulsive disorder) or PTSD (post-traumatic stress disorder) related to a trauma that occurred outdoors. Social anxiety disorder (SAD), also known as social phobia, is when a situation is feared because of worry about being judged by others. Performance only is a subtype of social anxiety disorder. Phobias vary in severity among individuals. Some individuals can avoid the subject and experience relatively mild anxiety over that fear. Others experience full-fledged panic attacks with all the associated impairing symptoms. Most individuals understand that their fear is irrational but cannot override their panic response. These individuals often report dizziness, loss of bladder or bowel control, tachypnea, feelings of pain, and shortness of breath. Causes Phobias may develop for a variety of reasons. Childhood experiences, past traumatic experiences, brain chemistry, genetics, or learned behavior can all be reasons why phobias develop. There are even phobias that may run in families and be passed down from one generation to another. There are multiple theories about how phobias develop, and they likely occur due to a combination of environmental and genetic factors. The degree to which environmental or genetic influences have the more significant role varies by condition, with social anxiety disorder and agoraphobia having around a 50% heritability rate. Environmental Rachman proposed three pathways for the development of phobias: direct or classical conditioning (exposure to the phobic stimulus), vicarious acquisition (seeing others experience the phobic stimulus), and informational/instructional acquisition (learning about the phobic stimulus from others). Classical conditioning Much of the progress in understanding the acquisition of fear responses in phobias can be attributed to classical conditioning (the Pavlovian model). When an aversive stimulus and a neutral one are paired together, for instance, when an electric shock is given in a specific room, the subject can start to fear not only the shock but the room as well. In behavioral terms, the room is a conditioned stimulus (CS). When paired with an aversive unconditioned stimulus (UCS) (the shock), it creates a conditioned response (CR) (fear of the room) (CS+UCS=CR). For example, in the case of the fear of heights (acrophobia), the CS is heights, such as a balcony on the top floors of a high-rise building. The UCS can originate from an aversive or traumatizing event in the person's life, such as almost falling from a great height. The original fear of nearly falling is associated with being high, leading to a fear of heights. In other words, the CS (heights) associated with the aversive UCS (almost falling) leads to the CR (fear). It is possible, however, to extinguish the CR and reverse the effects of the conditioning. Repeatedly presenting the CS alone, without the UCS, can extinguish the CR. Though historically influential in the theory of fear acquisition, this direct conditioning model is not the only proposed way to acquire a phobia. 
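The acquisition and extinction of a conditioned fear response described above can be illustrated numerically. The sketch below uses the Rescorla–Wagner learning rule, a standard quantitative model of Pavlovian conditioning that is not named in this article; the learning-rate parameters and trial counts are arbitrary assumptions chosen only for illustration.

```python
# Illustrative sketch only: the Rescorla-Wagner rule is a textbook model of
# Pavlovian conditioning, not a method described in this article, and the
# parameter values below are arbitrary assumptions.

def rescorla_wagner(trials, v0=0.0, alpha=0.3, beta=1.0):
    """Update the associative (fear) strength V across trials.

    Each trial is a reinforcement value lambda:
    1.0 when the CS is paired with the UCS (e.g. heights + a near-fall),
    0.0 when the CS is presented alone (extinction).
    """
    v = v0
    history = []
    for lam in trials:
        v += alpha * beta * (lam - v)   # delta-V = alpha * beta * (lambda - V)
        history.append(v)
    return history

# 10 acquisition trials (CS paired with UCS), then 10 extinction trials (CS alone).
strengths = rescorla_wagner([1.0] * 10 + [0.0] * 10)

for trial, v in enumerate(strengths, start=1):
    phase = "acquisition" if trial <= 10 else "extinction"
    print(f"trial {trial:2d} ({phase}): fear strength V = {v:.3f}")
```

Running the sketch shows the fear strength rising toward its maximum during the paired trials and decaying back toward zero when the CS is repeatedly presented alone, mirroring the extinction process described in the text.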
This theory in fact has limitations, as not everyone who has experienced a traumatic event develops a phobia, and vice versa. Vicarious conditioning Vicarious fear acquisition is learning to fear something, not by a subject's own experience of fear, but by watching others, oftentimes a parent (observational learning). For instance, when a child sees a parent reacting fearfully to an animal, the child can also become afraid of the animal. Through observational learning, humans can learn to fear potentially dangerous objects—a reaction observed in other primates. A study on non-human primates showed that the primates learned to fear snakes at a fast rate after watching parents' fearful reactions. An increase in fearful behaviours was observed as the non-human primates observed their parents' fearful reactions. Although observational learning has proven effective in creating reactions of fear and phobias, it has also been shown that physically experiencing an event increases the chance of fearful and phobic behaviours. In some cases, physically experiencing an event may increase the fear and phobia more than observing a fearful reaction of another human or non-human primate. Informational/Instructional acquisition Informational/instructional fear acquisition is learning to fear something by getting information. For instance, fearing electrical wire after hearing that touching it causes an electric shock. A conditioned fear response to an object or situation is not always a phobia. There must also be symptoms of impairment and avoidance. Impairment is defined as an inability to complete routine tasks, whether occupational, academic, or social. For example, an occupational impairment can result from acrophobia, from not taking a job solely because of its location on the top floor of a building, or socially not participating in an event at a theme park. The avoidance aspect is defined as behaviour that results in the omission of an aversive event that would otherwise occur, intending to prevent anxiety. Genetic With the completion of the Human Genome Project in 2003, much research has been completed looking at specific genes that may cause or contribute to medical conditions. Candidate genes were the focus of most of these studies until the past decade, when the cost and ability to perform genome-wide analyses became more available. The GLRB gene was identified as a possible target for agoraphobia. An area still in development is reviewing epigenetic components, or the interaction of the environment with genes through methylation. A number of genes are being examined through this epigenetic lens which may be linked with social anxiety disorder, including MAOA, CRHR1, and OXTR. Each phobia-related disorder has some degree of genetic susceptibility. Those with specific phobias are more likely to have first-degree relatives with the same specific phobia. Similarly, social anxiety disorder is found two to six times more frequently in those with first-degree relatives that have it versus those that do not. Agoraphobia is believed to have the strongest genetic association. Mechanism Limbic system Beneath the lateral fissure in the cerebral cortex, the insula, or insular cortex, of the brain has been identified as part of the limbic system, along with the cingulate gyrus, hippocampus, corpus callosum, and other nearby cortices. This system has been found to play a role in emotion processing, and the insula, in particular, may contribute to maintaining autonomic functions. Studies by Critchley et al. 
indicate the insula as being involved in the experience of emotion by detecting and interpreting threatening stimuli. Similar studies monitoring insula activity have shown a correlation between increased insular activation and anxiety. In the frontal lobes, other cortices involved with phobia and fear are the anterior cingulate cortex and the medial prefrontal cortex. In the processing of emotional stimuli, studies on phobic reactions to facial expressions have indicated that these areas are involved in the processing and responding to negative stimuli. The ventromedial prefrontal cortex has been said to influence the amygdala by monitoring its reaction to emotional stimuli or even fearful memories. Most specifically, the medial prefrontal cortex is active during the extinction of fear and is responsible for long-term extinction. Stimulation of this area decreases conditioned fear responses, so its role may be in inhibiting the amygdala and its reaction to fearful stimuli. The hippocampus is a horseshoe-shaped structure that plays an essential part in the brain's limbic system. This is because it forms memories and connects them with emotions and the senses. When dealing with fear, the hippocampus receives impulses from the amygdala that allow it to connect the fear with a certain sense, such as a smell or sound. Amygdala The amygdala is an almond-shaped mass of nuclei located deep in the brain's medial temporal lobe. It processes the events associated with fear and is linked to social phobia and other anxiety disorders. The amygdala's ability to respond to fearful stimuli occurs through fear conditioning. Like classical conditioning, the amygdala learns to associate a conditioned stimulus with a negative or avoidant stimulus, creating a conditioned fear response often seen in phobic individuals. The amygdala is responsible for recognizing certain stimuli or cues as dangerous and plays a role in the storage of threatening stimuli to memory. The basolateral nuclei (or basolateral amygdala) and the hippocampus interact with the amygdala in-memory storage. This connection suggests why memories are often remembered more vividly if they have emotional significance. In addition to memory, the amygdala also triggers the secretion of hormones that affect fear and aggression. When the fear or aggression response is initiated, the amygdala releases hormones into the body to put the human body into an "alert" state, which prepares the individual to move, run, fight, etc. This defensive "alert" state and response are known as the fight-or-flight response. However, inside the brain, this stress response can be observed in the hypothalamic-pituitary-adrenal axis (HPA). This circuit incorporates the process of receiving stimuli, interpreting them, and releasing certain hormones into the bloodstream. The parvocellular neurosecretory neurons of the hypothalamus release corticotropin-releasing hormone (CRH), which is sent to the anterior pituitary. Here the pituitary releases adrenocorticotropic hormone (ACTH), which ultimately stimulates the release of cortisol. In relation to anxiety, the amygdala activates this circuit, while the hippocampus is responsible for suppressing it. Glucocorticoid receptors in the hippocampus monitor the amount of cortisol in the system and through negative feedback can tell the hypothalamus to stop releasing CRH. 
Studies on mice engineered to have high concentrations of CRH showed higher levels of anxiety, while those engineered to have no or low amounts of CRH receptors were less anxious. In people with phobias, therefore, high amounts of cortisol may be present, or there may be low levels of glucocorticoid receptors or even serotonin (5-HT). Disruption by damage For the areas of the brain involved in emotion, most specifically fear, the processing of and response to emotional stimuli can be altered when there is damage to any of these regions. Damage to the cortical areas involved in the limbic system, such as the cingulate cortex or frontal lobes, has resulted in extreme emotion changes. Other types of damage include Klüver–Bucy syndrome and Urbach–Wiethe disease. In Klüver–Bucy syndrome, a temporal lobectomy, or removal of the temporal lobes, results in changes involving fear and aggression. Specifically, the removal of these lobes results in decreased fear, confirming their role in fear recognition and response. Damage to both sides (bilateral damage) of the medial temporal lobes is known as Urbach–Wiethe disease. It presents with similar symptoms of decreased fear and aggression, but with the addition of the inability to recognize emotional expressions, especially angry or fearful faces. The amygdala's role in learned fear includes interactions with other brain regions in the neural circuit of fear. While damage in the amygdala can inhibit its ability to recognize fearful stimuli, other areas such as the ventromedial prefrontal cortex and the basolateral nuclei of the amygdala can affect the region's ability not only to become conditioned to fearful stimuli but also to extinguish them eventually. Through receiving stimulus information, the basolateral nuclei undergo synaptic changes that allow the amygdala to develop a conditioned response to fearful stimuli. Damage to this area, therefore, has been shown to disrupt the acquisition of learned responses to fear. Likewise, damage in the ventromedial prefrontal cortex (the area responsible for monitoring the amygdala) has been shown to slow the extinction of a learned fear response and to reduce how effective the extinction is. This suggests there is a pathway or circuit between the amygdala and nearby cortical areas that process emotional stimuli and influence emotional expression, all of which can be disrupted when damage occurs. Diagnosis It is recommended that the terms distress and impairment take into account the context of the person's environment during diagnosis. The DSM-IV-TR states that if a feared stimulus, whether it be an object or a situation, is absent entirely in an environment, a diagnosis cannot be made. An example of this situation would be an individual who has a fear of mice but lives in an area without mice. Even though the concept of mice causes marked distress and impairment within the individual, because the individual does not usually encounter mice, no actual distress or impairment is ever experienced. It is recommended that proximity to, and ability to escape from, the stimulus also be considered. As the phobic person approaches a feared stimulus, anxiety levels increase, and the degree to which the person perceives they might escape from the stimulus affects the intensity of fear in instances such as riding an elevator (e.g. anxiety increases at the midway point between floors and decreases when the floor is reached and the doors open). 
The DSM-5 has been updated to reflect that an individual may have changed their daily activities around the feared stimulus in such a way that they may avoid it altogether. The person may still meet criteria for the diagnosis if they continue to avoid or refuse to participate in activities that would involve possible exposure to the phobic stimulus. Specific phobias A specific phobia is a marked and persistent fear of an object or situation. Specific phobias may also include fear of losing control, panicking, and fainting from an encounter with the phobia. Specific phobias are defined concerning objects or situations, whereas social phobias emphasize social fear and the evaluations that might accompany them. The DSM breaks specific phobias into five subtypes: animal, natural environment, blood-injection-injury, situational, and other. In children, blood-injection-injury phobia, animal phobias, and natural environment phobias usually develop between the ages of 7 and 9, reflecting normal development. Additionally, specific phobias are most prevalent in children between the ages of 10 and 13. Situational phobias are typically found in older children and adults. Treatments There are various methods used to treat phobias. These methods include systematic desensitization, progressive relaxation, virtual reality, modeling, medication, and hypnotherapy. Over the past several decades, psychologists and other researchers have developed effective behavioral, pharmacological, and technological interventions for the treatment of phobia. Virtual reality treatments produce effects similar to in vivo exposure, another efficacious therapy for treating phobias. Virtual reality does not work for every phobia, however, and depending on the phobia, in vivo exposure may be the preferable treatment. In vivo exposure reduces fear over time and is often preferred when treating anxiety- and fear-related problems. Therapy Cognitive behavioral therapy (CBT) is an evidence-based treatment that can help with phobias. It is a talk therapy that can be used alone or along with other therapies. CBT helps the person manage stressful situations and respond to them better. This therapy requires the person to be honest with themselves and confront their feelings and phobias. Cognitive behavioral therapy can be beneficial by allowing the person to challenge dysfunctional thoughts or beliefs and, by being mindful of their feelings, to recognize that their fear is irrational. CBT may occur in a group setting. Gradual desensitization treatment and CBT are often successful, provided the person is willing to endure some discomfort. In one clinical trial, 90% of people no longer had a phobic reaction after successful CBT treatment. Research in the UK has suggested that for childhood phobias a single session of CBT can be effective. Evidence supports that eye movement desensitization and reprocessing (EMDR) is effective in treating some phobias. Its effectiveness in treating complex or trauma-related phobias has not been empirically established. Primarily used to treat post-traumatic stress disorder, EMDR has been demonstrated to ease phobia symptoms following a specific trauma, such as a fear of dogs following a dog bite. Systematic desensitization Systematic desensitization is a process in which people seeking help slowly become accustomed to their phobia, and ultimately overcome it. 
Traditional systematic desensitization involves a person being exposed to the object they are afraid of over time so that the fear and discomfort do not become overwhelming. This controlled exposure to the anxiety-provoking stimulus is key to the effectiveness of exposure therapy in the treatment of specific phobias. It has been shown that humor is an excellent alternative when traditional systematic desensitization is ineffective. Humor systematic desensitization involves a series of treatment activities that elicit humor with the feared object. Previously learned progressive muscle relaxation procedures can be used as the activities become more difficult. Progressive muscle relaxation helps people relax before and during exposure to the feared stimulus. Virtual reality therapy is another technique that helps phobic people confront a feared object. It uses virtual reality to generate scenes that may not have been possible or ethical in the physical world. It is equally as effective as traditional exposure therapy and offers additional advantages. These include controlling the scenes and having the phobic person endure more exposure than they might handle in reality. Medications Medications are a treatment option often utilized in combination with CBT or if CBT was not tolerated or effective. Medications can help regulate apprehension and fear of a particular fearful object or situation. There are various medication options available for both social anxiety disorder and agoraphobia. The use of medications for specific phobias, besides the limited role of benzodiazepines, do not currently have established guidelines due to minimal supporting evidence. Antidepressants Antidepressant medications such as selective serotonin reuptake inhibitors (SSRIs), serotonin-norepinephrine reuptake inhibitors (SNRIs), or monoamine oxidase inhibitors (MAOIs) may be helpful in some cases. SSRIs / SNRIs act on serotonin, a neurotransmitter in the brain. Because of serotonin's positive impacts on mood, an antidepressant may be offered and prescribed as a treatment option. For social anxiety, the SSRIs sertraline, paroxetine, fluvoxamine, and the SNRI venlafaxine have FDA approval. Similar medications may be offered for agoraphobia. Benzodiazepines Sedatives such as benzodiazepines (clonazepam, alprazolam) are another therapeutic option, which can help people relax by reducing the amount of anxiety they feel. Benzodiazepines may be useful in the acute treatment of severe symptoms, but the risk-benefit ratio usually goes against their long-term use in phobic disorders. This class of medication has recently been shown as effective if used with negative behaviours such as excessive alcohol use. Despite this positive finding, benzodiazepines are used with caution due to side effects and risk of developing dependence or withdrawal symptoms. In specific phobia for example if the phobic stimulus is one that is not regularly encountered such as flying a short course may be provided. Beta-blockers Beta blockers (propranolol) are another therapeutic option, particularly for those with the performance only subtype of social anxiety disorder. They may stop the stimulating effects of adrenaline, such as sweating, increased heart rate, elevated blood pressure, tremors, and the feeling of a pounding heart. By taking beta-blockers before a phobic event, these symptoms are decreased, making the event less frightening. Beta-blockers are not effective for generalized social anxiety disorder. 
Hypnotherapy Hypnotherapy is another effective therapy that uses hypnosis to help manage anxiety and stress. This therapy can help people gain control over their phobias. Hypnotherapy can be used alone and in conjunction with systematic desensitization to treat phobias. Through hypnotherapy, the underlying cause of the phobia may be uncovered. The phobia may be caused by a past event that the person does not remember, a phenomenon known as repression. The mind represses traumatic memories from the conscious mind until the person is ready to deal with them. Hypnotherapy may also eliminate the conditioned responses that occur during different situations. People are first placed into a hypnotic trance, an extremely relaxed state in which the unconscious can be retrieved. This state makes people more open to suggestion, which helps bring about desired change. Consciously addressing old memories helps individuals understand the event and see it less threateningly. Prognosis Outcomes vary widely among the phobic anxiety disorders. There is a possibility that remission occurs without intervention but relapses are common. Response to treatment as well as remission and relapse rates are impacted by the severity of an individual's disorder as well as how long they have been experiencing symptoms. For example, in social anxiety disorder (social phobia) a majority of individuals will experience remission within the first couple of years of symptom onset without specific treatment. On the other hand, in Agoraphobia as few as 10% of individuals are seen to reach complete remission without treatment. A study looking at the 2 year remission rates for anxiety disorders found that those with multiple anxieties were less likely to experience remission. Specific phobia The majority of those that develop a specific phobia first experience symptoms in childhood. Often individuals will experience symptoms periodically with periods of remission before complete remission occurs. However, specific phobias that continue into adulthood are likely to experience a more chronic course. Specific phobias in older adults has been linked with a decrease in quality of life. Those with specific phobias are at an increased risk of suicide. Greater impairment is found in those that have multiple phobias. Response to treatment is relatively high but many do not seek treatment due to lack of access, ability to avoid phobia, or unwilling to face feared object for repeated CBT sessions. Comorbidities Many of those with a phobia often have more than one phobia. There are also a number of psychological and physiological disorders that tend to occur or coexist at higher rates among this population. As with all anxiety disorders the most common psychiatric condition to occur with a phobia is major depressive disorder. Additionally bipolar disorder, substance dependence disorder, obsessive-compulsive disorder, and post traumatic stress disorder have also been found to occur in those with phobias at higher rates. Epidemiology Phobias are a common form of anxiety disorder, and distributions are heterogeneous by age and gender. An American study by the National Institute of Mental Health (NIMH) found that between 8.7 percent and 18.1 percent of Americans have phobias, making it the most common mental illness among women in all age groups and the second most common illness among men older than 25. 
Between 4 percent and 10 percent of all children experience specific phobias during their lives, and social phobias occur in one percent to three percent of children. A Swedish study found that females have a higher number of cases per year than males (26.5 percent for females and 12.4 percent for males). Among adults, 21.2 percent of women and 10.9 percent of men have a single specific phobia, while multiple phobias occur in 5.4 percent of females and 1.5 percent of males. Women are nearly four times as likely as men to have a fear of animals (12.1 percent in women and 3.3 percent in men) — a higher dimorphic than with all specific or generalized phobias or social phobias. Social phobias are more common in girls than boys, while situational phobia occurs in 17.4 percent of women and 8.5 percent of men. History In the 9th century, Islamic polymath Abu Zayd al-Balkhi (850-934) was likely the first to identify phobias accurately. In his treatise Sustenance of the Body and Soul, Al Balkhi described phobia as a psychological disorder that may manifest with physical symptoms such as paleness of the skin and trembling of the hands. Remarkably, Al-Balkhi not only recognised phobias as psychological in nature but also proposed a treatment approach that included cognitive techniques and exposure therapy. He recommended that individuals gradually expose themselves to feared stimuli and train themselves to tolerate the experience until they reach habituation, an approach that mirrors modern therapeutic techniques for treating phobias. This is an exceptional accomplishment considering that the physical symptoms of phobias were mistakenly grouped under physical rubrics in Western medical textbooks and were not believed to be associated with phobias until the 19th century. The Western understanding of phobias as a physical condition was influenced by a combination of medical dogma and a limited understanding of psychology and mental health. This view persisted from antiquity through the Renaissance and into the 19th century, until more nuanced psychological frameworks were developed. In the early history of Western medicine, mental and emotional disturbances, including phobias, were often viewed through a physiological lens, with causes linked to physical imbalances. Hippocrates (460–370 BCE), the father of medicine, proposed that mental health issues were caused by imbalances in the four humors (blood, phlegm, yellow bile, and black bile), and emotional conditions like fear were incorrectly seen as physical symptoms of these imbalances. Galen, a Roman physician, expanded this idea, attributing mental disturbances to bodily humors and brain function. In the Middle Ages, medical explanations shifted to spiritual causes, with mental disorders seen as linked to demonic possession or divine punishment. Physical symptoms, like trembling or paleness, were often misattributed to fever or other bodily ailments rather than psychological distress. By the Early Modern period (16th–17th centuries), interest in neurology grew, but mental illnesses, including phobias, were still primarily seen as physical conditions. Treatments like bloodletting or purging were common, reflecting the belief that emotional symptoms stemmed from bodily imbalances rather than psychological processes. The 19th century marked a shift as psychological models began to re-emerge. Jean-Martin Charcot and Sigmund Freud explored the mental roots of phobias, though Freud still linked them to unconscious conflicts. 
The emergence of behavioural psychology, particularly John B. Watson's work on conditioned fear responses, began to highlight the psychological basis of phobias. However, theories by Charcot, Freud and Watson were still not as robust as Al-Balkhi's theory of phobias proposed almost a millennia earlier. By the 20th century, the understanding of phobias in the West evolved integrating emotional, cognitive, and biological components, largely aligning with Al-Balkhi's holistic view of phobias as a psychological disorder. Society and culture Terminology The word phobia comes from the (phóbos), meaning "fear" or "morbid fear". The regular system for naming specific phobias uses prefixes based on a Greek word for the object of the fear, plus the suffix -phobia. Benjamin Rush's 1786 satirical text, 'On the different Species of Phobia', established the term's dictionary sense of specific morbid fears. However, many phobias are irregularly named with Latin prefixes, such as apiphobia instead of melissaphobia (fear of bees) or aviphobia instead of ornithophobia (fear of birds). Creating these terms is something of a word game. Such fears are psychological rather than physiological in origin, and few of these terms are found in medical literature. In ancient Greek mythology Phobos was the twin brother of Deimos (terror). The word phobia may also refer to conditions other than true phobias. For example, the term hydrophobia is an old name for rabies, since an aversion to water is one of that disease's symptoms. A specific phobia to water is called aquaphobia instead. A hydrophobe is a chemical compound that repels water. Similarly, photophobia usually refers to a physical complaint (aversion to light due to inflamed eyes or excessively dilated pupils), rather than an irrational fear of light. Non-medical, deterrent and political use Several terms with the suffix -phobia are used non-clinically to imply irrational fear or hatred. Examples include: Chemophobia – Irrational fear or hatred of chemistry and synthetic chemicals Technophobia – Irrational fear of or discomfort with electronics Xenophobia – Irrational fear or hatred of foreigners, strangers or the unknown, sometimes used to describe anti-immigration nationalistic political beliefs and movements Oikophobia - Irrational fear or hatred of one's home, home country, or home culture Homophobia – Irrational fear or hatred of homosexuality or people who identify or perceived as being lesbian, gay, bisexual or transgender (LGBT) Islamophobia – Irrational fear or hatred of Islam Hinduphobia – Irrational fear or hatred for Hindus or Hinduism. Indophobia – Irrational fear or hatred of Indian people Biphobia – Irrational fear or hatred of bisexual people Transphobia – Irrational fear or hatred of transgender people Christophobia – Irrational fear or hatred of Christianity or Jesus Christ Judeophobia - Irrational fear or hatred of Jews Europhobia - Irrational fear or hatred of Europe, the culture, or peoples of Europe Romanophobia - Irrational fear or hatred of Romanians Usually, these kinds of "phobias" are described as fear, dislike, disapproval, prejudice, hatred, discrimination, or hostility towards the object of the "phobia". It is a form of hyperbole. Popular culture A number of films and TV shows have portrayed individuals with a variety of phobic disorders. Movies Benchwarmers – Howie Goodman (Nick Swardson) is portrayed as being agoraphobic and heliophobic. 
Television shows Game On - Matthew Malone (portrayed by Ben Chaplin, then Neil Stuke) is an agoraphobe, sharing a flat with two childhood friends. Monk – Adrian Monk (Tony Shalhoub) is a former homicide detective and a consultant for the San Francisco Police Department. He has an extreme case of OCD, and is well known for his various fears and phobias, including (but certainly not limited to) heights, snakes, elephants, crowds, glaciers, rodeos, wind, and milk. Shameless (American TV series) – Sheila Jackson (Joan Cusack) has agoraphobia and mysophobia (fear of germs). Research directions Before the development of pharmacotherapy, the treatment of phobias and mental health disorders relied solely on therapy such as CBT. Although therapy can be incredibly effective for many, it does not always achieve the desired effect. Interventional psychiatry is an additional branch in medicine that has expanded treatment options, and further research continues to explore effectiveness and applications. Electroconvulsive therapy (ECT) and transcranial magnetic stimulation (TMS) are two examples of device-based interventions widely utilized. In terms of use in treating phobias and anxiety disorders as a whole, TMS is being explored as an augmentation option for those who do not have the desired response to other therapeutic options or side effects from medications. A majority of research has been conducted exploring the use of TMS in PTSD and generalized anxiety disorder. A meta‐analysis conducted in 2019 found only two clinical trials for the use of TMS in specific phobias, one of which explored anxiety and avoidance rates in individuals with acrophobia. Although the study found decreased rates in both anxiety and avoidance after two TMS sessions because of the limited number of studies and small sample size, few conclusions can be made. D-cycloserine (DCS), a partial N-methyl-D-aspartate agonist, is an additional investigational approach to augmentation specific phobias that a meta-analysis suggested had better outcomes and less symptom severity when utilized before initiating CBT.
Biology and health sciences
Mental disorder
null
23470
https://en.wikipedia.org/wiki/Polyhedron
Polyhedron
In geometry, a polyhedron (: polyhedra or polyhedrons; ) is a three-dimensional figure with flat polygonal faces, straight edges and sharp corners or vertices. A convex polyhedron is a polyhedron that bounds a convex set. Every convex polyhedron can be constructed as the convex hull of its vertices, and for every finite set of points, not all on the same plane, the convex hull is a convex polyhedron. Cubes and pyramids are examples of convex polyhedra. A polyhedron is a generalization of a 2-dimensional polygon and a 3-dimensional specialization of a polytope, a more general concept in any number of dimensions. Definition Convex polyhedra are well-defined, with several equivalent standard definitions. However, the formal mathematical definition of polyhedra that are not required to be convex has been problematic. Many definitions of "polyhedron" have been given within particular contexts, some more rigorous than others, and there is no universal agreement over which of these to choose. Some of these definitions exclude shapes that have often been counted as polyhedra (such as the self-crossing polyhedra) or include shapes that are often not considered as valid polyhedra (such as solids whose boundaries are not manifolds). As Branko Grünbaum observed, Nevertheless, there is general agreement that a polyhedron is a solid or surface that can be described by its vertices (corner points), edges (line segments connecting certain pairs of vertices), faces (two-dimensional polygons), and that it sometimes can be said to have a particular three-dimensional interior volume. One can distinguish among these different definitions according to whether they describe the polyhedron as a solid, whether they describe it as a surface, or whether they describe it more abstractly based on its incidence geometry. A common and somewhat naive definition of a polyhedron is that it is a solid whose boundary can be covered by finitely many planes or that it is a solid formed as the union of finitely many convex polyhedra. Natural refinements of this definition require the solid to be bounded, to have a connected interior, and possibly also to have a connected boundary. The faces of such a polyhedron can be defined as the connected components of the parts of the boundary within each of the planes that cover it, and the edges and vertices as the line segments and points where the faces meet. However, the polyhedra defined in this way do not include the self-crossing star polyhedra, whose faces may not form simple polygons, and some of whose edges may belong to more than two faces. Definitions based on the idea of a bounding surface rather than a solid are also common. For instance, defines a polyhedron as a union of convex polygons (its faces), arranged in space so that the intersection of any two polygons is a shared vertex or edge or the empty set and so that their union is a manifold. If a planar part of such a surface is not itself a convex polygon, O'Rourke requires it to be subdivided into smaller convex polygons, with flat dihedral angles between them. Somewhat more generally, Grünbaum defines an acoptic polyhedron to be a collection of simple polygons that form an embedded manifold, with each vertex incident to at least three edges and each two faces intersecting only in shared vertices and edges of each. Cromwell's Polyhedra gives a similar definition but without the restriction of at least three edges per vertex. Again, this type of definition does not encompass the self-crossing polyhedra. 
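The incidence conditions that appear in the surface-based definitions above (for example, each vertex incident to at least three edges, and faces meeting only in shared vertices and edges) can be checked mechanically once a polyhedron is recorded as a list of faces. Below is a minimal Python sketch using a tetrahedron; the representation (faces as sets of vertex labels) is an illustrative encoding, not a standard library interface.

```python
# Illustrative sketch: record a tetrahedron as a list of faces (each a set of
# vertex labels), derive its edges, and check two incidence conditions of the
# kind quoted above: every edge lies on exactly two faces, and every vertex is
# incident to at least three edges.
from itertools import combinations

faces = [frozenset(c) for c in combinations(range(4), 3)]          # 4 triangular faces
edges = {frozenset(e) for f in faces for e in combinations(sorted(f), 2)}
vertices = {v for f in faces for v in f}

for e in edges:
    assert sum(e <= f for f in faces) == 2      # each edge shared by exactly two faces

for v in vertices:
    assert sum(v in e for e in edges) >= 3      # each vertex incident to at least three edges

print(len(vertices), "vertices,", len(edges), "edges,", len(faces), "faces")  # 4 6 4
```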
Similar notions form the basis of topological definitions of polyhedra, as subdivisions of a topological manifold into topological disks (the faces) whose pairwise intersections are required to be points (vertices), topological arcs (edges), or the empty set. However, there exist topological polyhedra (even with all faces triangles) that cannot be realized as acoptic polyhedra. One modern approach is based on the theory of abstract polyhedra. These can be defined as partially ordered sets whose elements are the vertices, edges, and faces of a polyhedron. A vertex or edge element is less than an edge or face element (in this partial order) when the vertex or edge is part of the edge or face. Additionally, one may include a special bottom element of this partial order (representing the empty set) and a top element representing the whole polyhedron. If the sections of the partial order between elements three levels apart (that is, between each face and the bottom element, and between the top element and each vertex) have the same structure as the abstract representation of a polygon, then these partially ordered sets carry exactly the same information as a topological polyhedron. However, these requirements are often relaxed, to instead require only that sections between elements two levels apart have the same structure as the abstract representation of a line segment. (This means that each edge contains two vertices and belongs to two faces, and that each vertex on a face belongs to two edges of that face.) Geometric polyhedra, defined in other ways, can be described abstractly in this way, but it is also possible to use abstract polyhedra as the basis of a definition of geometric polyhedra. A realization of an abstract polyhedron is generally taken to be a mapping from the vertices of the abstract polyhedron to geometric points, such that the points of each face are coplanar. A geometric polyhedron can then be defined as a realization of an abstract polyhedron. Realizations that omit the requirement of face planarity, that impose additional requirements of symmetry, or that map the vertices to higher dimensional spaces have also been considered. Unlike the solid-based and surface-based definitions, this works perfectly well for star polyhedra. However, without additional restrictions, this definition allows degenerate or unfaithful polyhedra (for instance, by mapping all vertices to a single point) and the question of how to constrain realizations to avoid these degeneracies has not been settled. In all of these definitions, a polyhedron is typically understood as a three-dimensional example of the more general polytope in any number of dimensions. For example, a polygon has a two-dimensional body and no faces, while a 4-polytope has a four-dimensional body and an additional set of three-dimensional "cells". However, some of the literature on higher-dimensional geometry uses the term "polyhedron" to mean something else: not a three-dimensional polytope, but a shape that is different from a polytope in some way. For instance, some sources define a convex polyhedron to be the intersection of finitely many half-spaces, and a polytope to be a bounded polyhedron. The remainder of this article considers only three-dimensional polyhedra. Convex polyhedra A convex polyhedron is a polyhedron that forms a convex set as a solid. 
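As noted earlier, the convex hull of finitely many points that do not all lie in one plane is a convex polyhedron. A minimal sketch of this construction follows, assuming NumPy and SciPy are available; note that SciPy's Qhull-based ConvexHull reports triangulated facets, so the cube's six square faces appear as twelve triangles.

```python
# Sketch, assuming SciPy is installed: the convex hull of the 8 corners of a
# unit cube is a convex polyhedron; Qhull triangulates the square faces, so
# 12 triangular facets are reported.
from itertools import product

import numpy as np
from scipy.spatial import ConvexHull

points = np.array(list(product([0.0, 1.0], repeat=3)))  # the 8 cube corners
hull = ConvexHull(points)

print("hull vertices:", len(hull.vertices))        # 8
print("triangular facets:", len(hull.simplices))   # 12 (6 square faces, each split in two)
print("volume:", hull.volume)                      # 1.0
print("surface area:", hull.area)                  # 6.0
```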
That is, it is a three-dimensional solid in which every line segment connecting two of its points lies in its interior or on its boundary; none of its faces are coplanar (they do not share the same plane) and none of its edges are collinear (they are not segments of the same line). A convex polyhedron can also be defined as a bounded intersection of finitely many half-spaces, or as the convex hull of finitely many points, in either case restricted to intersections or hulls that have nonzero volume. Important classes of convex polyhedra include the family of prismatoids, the Platonic solids, the Archimedean solids and their duals the Catalan solids, and the polyhedra with regular polygonal faces. The prismatoids are the polyhedra whose vertices lie in two parallel planes and whose faces are typically trapezoids and triangles. Examples of prismatoids are pyramids, wedges, parallelepipeds, prisms, antiprisms, cupolas, and frustums. The Platonic solids are the five polyhedra known since antiquity—tetrahedron, octahedron, icosahedron, cube, and dodecahedron—classified by Plato in his Timaeus, who connected them with the four classical elements of nature. The Archimedean solids are the class of thirteen polyhedra whose faces are all regular polygons and whose vertices are symmetric to each other; their dual polyhedra are the Catalan solids. The polyhedra with regular polygonal faces include the deltahedra (whose faces are all equilateral triangles) and the Johnson solids (whose faces are arbitrary regular polygons). A convex polyhedron can be categorized as either an elementary polyhedron or a composite polyhedron. An elementary polyhedron is a convex regular-faced polyhedron that cannot be cut into two or more such polyhedra by slicing it with a plane. A composite polyhedron, by contrast, is one that can be constructed by attaching two or more elementary polyhedra. For example, the triaugmented triangular prism is a composite polyhedron since it can be constructed by attaching three equilateral square pyramids onto the square faces of a triangular prism; the square pyramids and the triangular prism are elementary. A midsphere of a convex polyhedron is a sphere tangent to every edge of the polyhedron, an intermediate sphere in radius between the insphere and circumsphere, for polyhedra for which all three of these spheres exist. Every convex polyhedron is combinatorially equivalent to a canonical polyhedron, a polyhedron that has a midsphere whose center coincides with the centroid of the polyhedron. The shape of the canonical polyhedron (but not its scale or position) is uniquely determined by the combinatorial structure of the given polyhedron. Some polyhedra do not have the property of convexity; they are called non-convex polyhedra. Examples are the star polyhedra and Kepler–Poinsot polyhedra, which are constructed by either stellation (the process of extending the faces—within their planes—so that they meet) or faceting (the process of removing parts of a polyhedron to create new faces, or facets, without creating any new vertices). A facet of a polyhedron is any polygon whose corners are vertices of the polyhedron but which is not a face. Stellation and faceting are inverse or reciprocal processes: the dual of some stellation is a faceting of the dual to the original polyhedron. Characteristics Number of faces Polyhedra may be classified and are often named according to the number of faces. 
The naming system is based on Classical Greek, and combines a prefix counting the faces with the suffix "hedron", meaning "base" or "seat" and referring to the faces. For example, a tetrahedron is a polyhedron with four faces, a pentahedron is a polyhedron with five faces, a hexahedron is a polyhedron with six faces, etc. For a complete list of the Greek numeral prefixes, see the table of numeral prefixes, in the column for Greek cardinal numbers. The names of tetrahedra, hexahedra, octahedra (8-sided polyhedra), dodecahedra (12-sided polyhedra), and icosahedra (20-sided polyhedra) are sometimes used without additional qualification to refer to the Platonic solids, and sometimes used to refer more generally to polyhedra with the given number of sides without any assumption of symmetry. Topological classification Some polyhedra have two distinct sides to their surface. For example, the inside and outside of a convex polyhedron paper model can each be given a different colour (although the inside colour will be hidden from view). These polyhedra are orientable. The same is true for non-convex polyhedra without self-crossings. Some non-convex self-crossing polyhedra can be coloured in the same way but have regions turned "inside out" so that both colours appear on the outside in different places; these are still considered to be orientable. However, for some other self-crossing polyhedra with simple-polygon faces, such as the tetrahemihexahedron, it is not possible to colour the two sides of each face with two different colours so that adjacent faces have consistent colours. In this case the polyhedron is said to be non-orientable. For polyhedra with self-crossing faces, it may not be clear what it means for adjacent faces to be consistently coloured, but for these polyhedra it is still possible to determine whether the polyhedron is orientable or non-orientable by considering a topological cell complex with the same incidences between its vertices, edges, and faces. A more subtle distinction between polyhedron surfaces is given by their Euler characteristic, which combines the numbers of vertices V, edges E, and faces F of a polyhedron into a single number χ defined by the formula χ = V − E + F. The same formula is also used for the Euler characteristic of other kinds of topological surfaces. It is an invariant of the surface, meaning that when a single surface is subdivided into vertices, edges, and faces in more than one way, the Euler characteristic will be the same for these subdivisions. For a convex polyhedron, or more generally any simply connected polyhedron with surface a topological sphere, it always equals 2. For more complicated shapes, the Euler characteristic relates to the number of toroidal holes, handles or cross-caps in the surface and will be less than 2. All polyhedra with odd-numbered Euler characteristic are non-orientable. A given figure with even Euler characteristic may or may not be orientable. For example, the one-holed toroid and the Klein bottle both have χ = 0, with the first being orientable and the other not. For many (but not all) ways of defining polyhedra, the surface of the polyhedron is required to be a manifold. This means that every edge is part of the boundary of exactly two faces (disallowing shapes like the union of two cubes that meet only along a shared edge) and that every vertex is incident to a single alternating cycle of edges and faces (disallowing shapes like the union of two cubes sharing only a single vertex). 
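The formula χ = V − E + F above can be checked directly from vertex, edge, and face counts. The sketch below is a minimal illustration in Python; the counts used (8, 12, 6 for the cube, and 7, 21, 14 for the toroidal Császár polyhedron mentioned later in this article) are standard values, and the function is simply the formula itself.

```python
# Minimal check of the Euler characteristic chi = V - E + F.
# The counts below are standard values for a cube (a sphere-like surface)
# and for the Csaszar polyhedron (a toroidal surface with no diagonals).

def euler_characteristic(v: int, e: int, f: int) -> int:
    """Return chi = V - E + F for a polyhedral surface."""
    return v - e + f

examples = {
    "cube": (8, 12, 6),                 # convex; expected chi = 2
    "Csaszar polyhedron": (7, 21, 14),  # toroidal; expected chi = 0
}

for name, (v, e, f) in examples.items():
    print(f"{name}: chi = {v} - {e} + {f} = {euler_characteristic(v, e, f)}")
```

A value of 2 is consistent with a sphere-like surface, while 0 signals a torus-like one, as the classification described next makes precise.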
For polyhedra defined in these ways, the classification of manifolds implies that the topological type of the surface is completely determined by the combination of its Euler characteristic and orientability. For example, every polyhedron whose surface is an orientable manifold and whose Euler characteristic is 2 must be a topological sphere. A toroidal polyhedron is a polyhedron whose Euler characteristic is less than or equal to 0, or equivalently whose genus is 1 or greater. Topologically, the surfaces of such polyhedra are torus surfaces having one or more holes through the middle. Duality For every convex polyhedron, there exists a dual polyhedron having faces in place of the original's vertices and vice versa, and the same number of edges. The dual of a convex polyhedron can be obtained by the process of polar reciprocation. Dual polyhedra exist in pairs, and the dual of a dual is just the original polyhedron again. Some polyhedra are self-dual, meaning that the dual of the polyhedron is congruent to the original polyhedron. Abstract polyhedra also have duals, obtained by reversing the partial order defining the polyhedron to obtain its dual or opposite order. These have the same Euler characteristic and orientability as the initial polyhedron. However, this form of duality does not describe the shape of a dual polyhedron, but only its combinatorial structure. For some definitions of non-convex geometric polyhedra, there exist polyhedra whose abstract duals cannot be realized as geometric polyhedra under the same definition. Vertex figures For every vertex one can define a vertex figure, which describes the local structure of the polyhedron around the vertex. Precise definitions vary, but a vertex figure can be thought of as the polygon exposed where a slice through the polyhedron cuts off a vertex. For the Platonic solids and other highly-symmetric polyhedra, this slice may be chosen to pass through the midpoints of each edge incident to the vertex, but other polyhedra may not have a plane through these points. For convex polyhedra, and more generally for polyhedra whose vertices are in convex position, this slice can be chosen as any plane separating the vertex from the other vertices. When the polyhedron has a center of symmetry, it is standard to choose this plane to be perpendicular to the line through the given vertex and the center; with this choice, the shape of the vertex figure is determined up to scaling. When the vertices of a polyhedron are not in convex position, there will not always be a plane separating each vertex from the rest. In this case, it is common instead to slice the polyhedron by a small sphere centered at the vertex. Again, this produces a shape for the vertex figure that is invariant up to scaling. All of these choices lead to vertex figures with the same combinatorial structure, for the polyhedra to which they can be applied, but they may give them different geometric shapes. Surface area and distances The surface area of a polyhedron is the sum of areas of its faces, for definitions of polyhedra for which the area of a face is well-defined. The geodesic distance between any two points on the surface of a polyhedron measures the length of the shortest curve that connects the two points, remaining within the surface. By Alexandrov's uniqueness theorem, every convex polyhedron is uniquely determined by the metric space of geodesic distances on its surface. 
However, non-convex polyhedra can have the same surface distances as each other, or the same as certain convex polyhedra. Volume Polyhedral solids have an associated quantity called volume that measures how much space they occupy. Simple families of solids may have simple formulas for their volumes; for example, the volumes of pyramids, prisms, and parallelepipeds can easily be expressed in terms of their edge lengths or other coordinates. (See Volume § Volume formulas for a list that includes many of these formulas.) Volumes of more complicated polyhedra may not have simple formulas. Volumes of such polyhedra may be computed by subdividing the polyhedron into smaller pieces (for example, by triangulation). For example, the volume of a regular polyhedron can be computed by dividing it into congruent pyramids, with each pyramid having a face of the polyhedron as its base and the centre of the polyhedron as its apex. In general, it can be derived from the divergence theorem that the volume of a polyhedral solid is given by (1/3) |Σ_F (Q_F · N_F) area(F)|, where the sum is over the faces F of the polyhedron, Q_F is an arbitrary point on face F, N_F is the unit vector perpendicular to F pointing outside the solid, and the multiplication dot is the dot product. In higher dimensions, volume computation may be challenging, in part because of the difficulty of listing the faces of a convex polyhedron specified only by its vertices, and there exist specialized algorithms to determine the volume in these cases. Dehn invariant In two dimensions, the Bolyai–Gerwien theorem asserts that any polygon may be transformed into any other polygon of the same area by cutting it up into finitely many polygonal pieces and rearranging them. The analogous question for polyhedra was the subject of Hilbert's third problem. Max Dehn solved this problem by showing that, unlike in the 2-D case, there exist polyhedra of the same volume that cannot be cut into smaller polyhedra and reassembled into each other. To prove this Dehn discovered another value associated with a polyhedron, the Dehn invariant, such that two polyhedra can only be dissected into each other when they have the same volume and the same Dehn invariant. It was later proven by Sydler that this is the only obstacle to dissection: every two Euclidean polyhedra with the same volumes and Dehn invariants can be cut up and reassembled into each other. The Dehn invariant is not a number, but a vector in an infinite-dimensional vector space, determined from the lengths and dihedral angles of a polyhedron's edges. Another of Hilbert's problems, Hilbert's 18th problem, concerns (among other things) polyhedra that tile space. Every such polyhedron must have Dehn invariant zero. The Dehn invariant has also been connected to flexible polyhedra by the strong bellows theorem, which states that the Dehn invariant of any flexible polyhedron remains invariant as it flexes. Symmetries Many of the most studied polyhedra are highly symmetrical, that is, their appearance is unchanged by some reflection or rotation of space. Each such symmetry may change the location of a given vertex, face, or edge, but the set of all vertices (likewise faces, edges) is unchanged. The collection of symmetries of a polyhedron is called its symmetry group. All the elements that can be superimposed on each other by symmetries are said to form a symmetry orbit. For example, all the faces of a cube lie in one orbit, while all the edges lie in another. 
If all the elements of a given dimension, say all the faces, lie in the same orbit, the figure is said to be transitive on that orbit. For example, a cube is face-transitive, while a truncated cube has two symmetry orbits of faces. The same abstract structure may support more or less symmetric geometric polyhedra. But where a polyhedral name is given, such as icosidodecahedron, the most symmetrical geometry is often implied. There are several types of highly symmetric polyhedron, classified by which kind of element – faces, edges, or vertices – belong to a single symmetry orbit:
Regular: vertex-transitive, edge-transitive and face-transitive. (This implies that every face is the same regular polygon; it also implies that every vertex is regular.)
Quasi-regular: vertex-transitive and edge-transitive (and hence has regular faces) but not face-transitive. A quasi-regular dual is face-transitive and edge-transitive (and hence every vertex is regular) but not vertex-transitive.
Semi-regular: vertex-transitive but not edge-transitive, and every face is a regular polygon. (This is one of several definitions of the term, depending on author. Some definitions overlap with the quasi-regular class.) These polyhedra include the semiregular prisms and antiprisms. A semi-regular dual is face-transitive but not vertex-transitive, and every vertex is regular.
Uniform: vertex-transitive and every face is a regular polygon, i.e., it is regular, quasi-regular or semi-regular. A uniform dual is face-transitive and has regular vertices, but is not necessarily vertex-transitive.
Isogonal: vertex-transitive.
Isotoxal: edge-transitive.
Isohedral: face-transitive.
Noble: face-transitive and vertex-transitive (but not necessarily edge-transitive). The regular polyhedra are also noble; they are the only noble uniform polyhedra. The duals of noble polyhedra are themselves noble.
Some classes of polyhedra have only a single main axis of symmetry. These include the pyramids, bipyramids, trapezohedra, cupolae, as well as the semiregular prisms and antiprisms. Regular polyhedra Regular polyhedra are the most highly symmetrical. Altogether there are nine regular polyhedra: five convex and four star polyhedra. The five convex examples have been known since antiquity and are called the Platonic solids. These are the triangular pyramid or tetrahedron, cube, octahedron, dodecahedron and icosahedron. There are also four regular star polyhedra, known as the Kepler–Poinsot polyhedra after their discoverers. The dual of a regular polyhedron is also regular. Uniform polyhedra and their duals Uniform polyhedra are vertex-transitive and every face is a regular polygon. They may be subdivided into the regular, quasi-regular, or semi-regular, and may be convex or starry. The duals of the uniform polyhedra have irregular faces but are face-transitive, and every vertex figure is a regular polygon. A uniform polyhedron has the same symmetry orbits as its dual, with the faces and vertices simply swapped over. The duals of the convex Archimedean polyhedra are sometimes called the Catalan solids. The uniform polyhedra and their duals are traditionally classified according to their degree of symmetry, and whether they are convex or not. Isohedra An isohedron is a polyhedron with symmetries acting transitively on its faces. Their topology can be represented by a face configuration. All 5 Platonic solids and 13 Catalan solids are isohedra, as well as the infinite families of trapezohedra and bipyramids. 
Some definitions of isohedra allow geometric variations including concave and self-intersecting forms. Symmetry groups Many of the symmetries or point groups in three dimensions are named after polyhedra having the associated symmetry. These include:
T – chiral tetrahedral symmetry; the rotation group for a regular tetrahedron; order 12.
Td – full tetrahedral symmetry; the symmetry group for a regular tetrahedron; order 24.
Th – pyritohedral symmetry; the symmetry of a pyritohedron; order 24.
O – chiral octahedral symmetry; the rotation group of the cube and octahedron; order 24.
Oh – full octahedral symmetry; the symmetry group of the cube and octahedron; order 48.
I – chiral icosahedral symmetry; the rotation group of the icosahedron and the dodecahedron; order 60.
Ih – full icosahedral symmetry; the symmetry group of the icosahedron and the dodecahedron; order 120.
Cnv – n-fold pyramidal symmetry.
Dnh – n-fold prismatic symmetry.
Dnd – n-fold antiprismatic symmetry.
Those with chiral symmetry do not have reflection symmetry and hence have two enantiomorphous forms which are reflections of each other. Examples include the snub cuboctahedron and snub icosidodecahedron. Other important families of polyhedra Zonohedra A zonohedron is a convex polyhedron in which every face is a polygon that is symmetric under rotations through 180°. Zonohedra can also be characterized as the Minkowski sums of line segments, and include several important space-filling polyhedra. Space-filling polyhedra A space-filling polyhedron packs with copies of itself to fill space. Such a close-packing or space-filling is often called a tessellation of space or a honeycomb. Space-filling polyhedra must have a Dehn invariant equal to zero. Some honeycombs involve more than one kind of polyhedron. Lattice polyhedra A convex polyhedron in which all vertices have integer coordinates is called a lattice polyhedron or integral polyhedron. The Ehrhart polynomial of a lattice polyhedron counts how many points with integer coordinates lie within a scaled copy of the polyhedron, as a function of the scale factor. The study of these polynomials lies at the intersection of combinatorics and commutative algebra. There is a far-reaching equivalence between lattice polyhedra and certain algebraic varieties called toric varieties. This was used by Stanley to prove the Dehn–Sommerville equations for simplicial polytopes. Flexible polyhedra It is possible for some polyhedra to change their overall shape, while keeping the shapes of their faces the same, by varying the angles of their edges. A polyhedron that can do this is called a flexible polyhedron. By Cauchy's rigidity theorem, flexible polyhedra must be non-convex. The volume of a flexible polyhedron must remain constant as it flexes; this result is known as the bellows theorem. Compounds A polyhedral compound is made of two or more polyhedra sharing a common centre. Symmetrical compounds often share the same vertices as other well-known polyhedra and may often also be formed by stellation. Some are listed in the list of Wenninger polyhedron models. Orthogonal polyhedra An orthogonal polyhedron is one all of whose edges are parallel to axes of a Cartesian coordinate system. This implies that all faces meet at right angles, but this condition is weaker: Jessen's icosahedron has faces meeting at right angles, but does not have axis-parallel edges. Aside from the rectangular cuboids, orthogonal polyhedra are nonconvex. 
They are the 3D analogs of 2D orthogonal polygons, also known as rectilinear polygons. Orthogonal polyhedra are used in computational geometry, where their constrained structure has enabled advances on problems unsolved for arbitrary polyhedra, for example, unfolding the surface of a polyhedron to a polygonal net. Polycubes are a special case of orthogonal polyhedra that can be decomposed into identical cubes, and are three-dimensional analogues of planar polyominoes. Embedded regular maps with planar faces Regular maps are flag transitive abstract 2-manifolds and they have been studied already in the nineteenth century. In some cases they have geometric realizations. An example is the Szilassi polyhedron, a toroidal polyhedron that realizes the Heawood map. In this case, the polyhedron is much less symmetric than the underlying map, but in some cases it is possible for self-crossing polyhedra to realize some or all of the symmetries of a regular map. Generalisations The name 'polyhedron' has come to be used for a variety of objects having similar structural properties to traditional polyhedra. Apeirohedra A classical polyhedral surface has a finite number of faces, joined in pairs along edges. The apeirohedra form a related class of objects with infinitely many faces. Examples of apeirohedra include: tilings or tessellations of the plane, and sponge-like structures called infinite skew polyhedra. Complex polyhedra There are objects called complex polyhedra, for which the underlying space is a complex Hilbert space rather than real Euclidean space. Precise definitions exist only for the regular complex polyhedra, whose symmetry groups are complex reflection groups. The complex polyhedra are mathematically more closely related to configurations than to real polyhedra. Curved polyhedra Some fields of study allow polyhedra to have curved faces and edges. Curved faces can allow digonal faces to exist with a positive area. Spherical polyhedra When the surface of a sphere is divided by finitely many great arcs (equivalently, by planes passing through the center of the sphere), the result is called a spherical polyhedron. Many convex polytopes having some degree of symmetry (for example, all the Platonic solids) can be projected onto the surface of a concentric sphere to produce a spherical polyhedron. However, the reverse process is not always possible; some spherical polyhedra (such as the hosohedra) have no flat-faced analogue. Curved spacefilling polyhedra If faces are allowed to be concave as well as convex, adjacent faces may be made to meet together with no gap. Some of these curved polyhedra can pack together to fill space. Two important types are: Bubbles in froths and foams, such as Weaire-Phelan bubbles. Forms used in architecture. Ideal polyhedra Convex polyhedra can be defined in three-dimensional hyperbolic space in the same way as in Euclidean space, as the convex hulls of finite sets of points. However, in hyperbolic space, it is also possible to consider ideal points as well as the points that lie within the space. An ideal polyhedron is the convex hull of a finite set of ideal points. Its faces are ideal polygons, but its edges are defined by entire hyperbolic lines rather than line segments, and its vertices (the ideal points of which it is the convex hull) do not lie within the hyperbolic space. Skeletons and polyhedra as graphs By forgetting the face structure, any polyhedron gives rise to a graph, called its skeleton, with corresponding vertices and edges. 
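As a small illustration of the skeleton idea, the sketch below builds the cube's skeleton as an adjacency structure. The 0–7 vertex labelling (each label read as the binary coordinates of a corner) is an assumed convention chosen for this example, not part of any standard definition.

```python
# Sketch: the skeleton (vertex-edge graph) of a cube.  Vertex i stands for
# the corner whose coordinates are the binary digits of i, so two corners
# are adjacent exactly when their labels differ in a single bit.

from itertools import combinations

def cube_skeleton() -> dict:
    """Return the cube graph as an adjacency dict {vertex: set of neighbours}."""
    adjacency = {v: set() for v in range(8)}
    for a, b in combinations(range(8), 2):
        if bin(a ^ b).count("1") == 1:  # labels differ in exactly one bit
            adjacency[a].add(b)
            adjacency[b].add(a)
    return adjacency

graph = cube_skeleton()
degrees = sorted(len(nbrs) for nbrs in graph.values())
print("vertex degrees:", degrees)        # every cube vertex has degree 3
print("edge count:", sum(degrees) // 2)  # 12, matching the geometric cube
```

The counts match the geometric cube's 8 vertices and 12 edges; Steinitz's theorem, discussed next, characterizes exactly which graphs arise this way from convex polyhedra.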
Such figures have a long history: Leonardo da Vinci devised frame models of the regular solids, which he drew for Pacioli's book Divina Proportione, and similar wire-frame polyhedra appear in M.C. Escher's print Stars. One highlight of this approach is Steinitz's theorem, which gives a purely graph-theoretic characterization of the skeletons of convex polyhedra: it states that the skeleton of every convex polyhedron is a 3-connected planar graph, and every 3-connected planar graph is the skeleton of some convex polyhedron. An early idea of abstract polyhedra was developed in Branko Grünbaum's study of "hollow-faced polyhedra." Grünbaum defined faces to be cyclically ordered sets of vertices, and allowed them to be skew as well as planar. The graph perspective allows one to apply graph terminology and properties to polyhedra. For example, the tetrahedron and Császár polyhedron are the only known polyhedra whose skeletons are complete graphs (K4 and K7, respectively), and various symmetry restrictions on polyhedra give rise to skeletons that are symmetric graphs. Alternative usages From the latter half of the twentieth century, various mathematical constructs have been found to have properties also present in traditional polyhedra. Rather than confining the term "polyhedron" to describe a three-dimensional polytope, it has been adopted to describe various related but distinct kinds of structure. Higher-dimensional polyhedra A polyhedron has been defined as a set of points in real affine (or Euclidean) space of any dimension n that has flat sides. It may alternatively be defined as the intersection of finitely many half-spaces. Unlike a conventional polyhedron, it may be bounded or unbounded. In this meaning, a polytope is a bounded polyhedron. Analytically, such a convex polyhedron is expressed as the solution set for a system of linear inequalities. Defining polyhedra in this way provides a geometric perspective for problems in linear programming. Topological polyhedra A topological polytope is a topological space given along with a specific decomposition into shapes that are topologically equivalent to convex polytopes and that are attached to each other in a regular way. Such a figure is called simplicial if each of its regions is a simplex, i.e. in an n-dimensional space each region has n+1 vertices. The dual of a simplicial polytope is called simple. Similarly, a widely studied class of polytopes (polyhedra) is that of cubical polyhedra, when the basic building block is an n-dimensional cube. Abstract polyhedra An abstract polytope is a partially ordered set (poset) of elements whose partial ordering obeys certain rules of incidence (connectivity) and ranking. The elements of the set correspond to the vertices, edges, faces and so on of the polytope: vertices have rank 0, edges rank 1, etc., with the partially ordered ranking corresponding to the dimensionality of the geometric elements. The empty set, required by set theory, has a rank of −1 and is sometimes said to correspond to the null polytope. An abstract polyhedron is an abstract polytope having the following ranking:
rank 3: The maximal element, sometimes identified with the body.
rank 2: The polygonal faces.
rank 1: The edges.
rank 0: The vertices.
rank −1: The empty set, sometimes identified with the null polytope.
Any geometric polyhedron is then said to be a "realization" in real space of the abstract poset as described above. 
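One concrete, if simplified, way to hold such a ranked poset in code is to encode each element by the set of vertices beneath it, with incidence given by set containment. The sketch below does this for the abstract tetrahedron; the encoding and the vertex labels A–D are illustrative assumptions rather than a standard data structure.

```python
# Sketch of an abstract polyhedron (the tetrahedron) as a ranked poset.
# Each element is encoded by the set of vertices it contains, so the
# partial order "is part of" becomes proper subset inclusion.

from itertools import combinations

vertices = [frozenset(v) for v in "ABCD"]                   # rank 0
edges    = [frozenset(e) for e in combinations("ABCD", 2)]  # rank 1 (6 edges)
faces    = [frozenset(f) for f in combinations("ABCD", 3)]  # rank 2 (4 faces)
body     = frozenset("ABCD")                                # rank 3 (maximal element)
empty    = frozenset()                                      # rank -1 (least element)

def is_part_of(lower, upper):
    """Strictly lower in the partial order = proper subset of vertex sets."""
    return lower < upper

# Incidence checks matching the rules described in the text:
assert all(sum(is_part_of(v, e) for v in vertices) == 2 for e in edges)  # 2 vertices per edge
assert all(sum(is_part_of(e, f) for f in faces) == 2 for e in edges)     # each edge lies in 2 faces
assert all(is_part_of(empty, v) for v in vertices) and is_part_of(faces[0], body)

print("elements per rank (-1..3):", 1, len(vertices), len(edges), len(faces), 1)
print("V - E + F =", len(vertices) - len(edges) + len(faces))  # 4 - 6 + 4 = 2
```

Reversing the containment relation gives the dual order mentioned earlier under duality; more general abstract polytopes, whose faces are not determined by their vertex sets, need an explicit order relation instead.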
History Before the Greeks Polyhedra appeared in early architectural forms such as cubes and cuboids, with the earliest four-sided Egyptian pyramids dating from the 27th century BC. The Moscow Mathematical Papyrus from approximately 1800–1650 BC includes an early written study of polyhedra and their volumes (specifically, the volume of a frustum). The mathematics of the Old Babylonian Empire, from roughly the same time period as the Moscow Papyrus, also included calculations of the volumes of cuboids (and of non-polyhedral cylinders), and calculations of the height of such a shape needed to attain a given volume. The Etruscans preceded the Greeks in their awareness of at least some of the regular polyhedra, as evidenced by the discovery of an Etruscan dodecahedron made of soapstone on Monte Loffa. Its faces were marked with different designs, suggesting to some scholars that it may have been used as a gaming die. Ancient Greece Ancient Greek mathematicians discovered and studied the convex regular polyhedra, which came to be known as the Platonic solids. Their first written description is in the Timaeus of Plato (circa 360 BC), which associates four of them with the four elements and the fifth to the overall shape of the universe. A more mathematical treatment of these five polyhedra was written soon after in the Elements of Euclid. An early commentator on Euclid (possibly Geminus) writes that the attribution of these shapes to Plato is incorrect: Pythagoras knew the tetrahedron, cube, and dodecahedron, and Theaetetus (circa 417 BC) discovered the other two, the octahedron and icosahedron. Later, Archimedes expanded his study to the convex uniform polyhedra which now bear his name. His original work is lost and his solids come down to us through Pappus. Ancient China Both cubical dice and 14-sided dice in the shape of a truncated octahedron in China have been dated back as early as the Warring States period. By 236 AD, Liu Hui was describing the dissection of the cube into its characteristic tetrahedron (orthoscheme) and related solids, using assemblages of these solids as the basis for calculating volumes of earth to be moved during engineering excavations. Medieval Islam After the end of the Classical era, scholars in the Islamic civilisation continued to take the Greek knowledge forward (see Mathematics in medieval Islam). The 9th century scholar Thabit ibn Qurra included the calculation of volumes in his studies, and wrote a work on the cuboctahedron. Then in the 10th century Abu'l Wafa described the convex regular and quasiregular spherical polyhedra. Renaissance As with other areas of Greek thought maintained and enhanced by Islamic scholars, Western interest in polyhedra revived during the Italian Renaissance. Artists constructed skeletal polyhedra, depicting them from life as a part of their investigations into perspective. Toroidal polyhedra, made of wood and used to support headgear, became a common exercise in perspective drawing, and were depicted in marquetry panels of the period as a symbol of geometry. Piero della Francesca wrote about constructing perspective views of polyhedra, and rediscovered many of the Archimedean solids. Leonardo da Vinci illustrated skeletal models of several polyhedra for a book by Luca Pacioli, with text largely plagiarized from della Francesca. Polyhedral nets make an appearance in the work of Albrecht Dürer. Several works from this time investigate star polyhedra, and other elaborations of the basic Platonic forms. 
A marble tarsia in the floor of St. Mark's Basilica, Venice, designed by Paolo Uccello, depicts a stellated dodecahedron. As the Renaissance spread beyond Italy, later artists such as Wenzel Jamnitzer, Dürer and others also depicted polyhedra of increasing complexity, many of them novel, in imaginative etchings. Johannes Kepler (1571–1630) used star polygons, typically pentagrams, to build star polyhedra. Some of these figures may have been discovered before Kepler's time, but he was the first to recognize that they could be considered "regular" if one removed the restriction that regular polyhedra must be convex. In the same period, Euler's polyhedral formula, a linear equation relating the numbers of vertices, edges, and faces of a polyhedron, was stated for the Platonic solids in 1537 in an unpublished manuscript by Francesco Maurolico. 17th–19th centuries René Descartes, in around 1630, wrote his book De solidorum elementis studying convex polyhedra as a general concept, not limited to the Platonic solids and their elaborations. The work was lost, and not rediscovered until the 19th century. One of its contributions was Descartes' theorem on total angular defect, which is closely related to Euler's polyhedral formula. Leonhard Euler, for whom the formula is named, introduced it in 1758 for convex polyhedra more generally, albeit with an incorrect proof. Euler's work (together with his earlier solution to the puzzle of the Seven Bridges of Königsberg) became the foundation of the new field of topology. The core concepts of this field, including generalizations of the polyhedral formula, were developed in the late nineteenth century by Henri Poincaré, Enrico Betti, Bernhard Riemann, and others. In the early 19th century, Louis Poinsot extended Kepler's work, and discovered the remaining two regular star polyhedra. Soon after, Augustin-Louis Cauchy proved Poinsot's list complete, subject to an unstated assumption that the sequence of vertices and edges of each polygonal side cannot admit repetitions (an assumption that had been considered but rejected in the earlier work of A. F. L. Meister). They became known as the Kepler–Poinsot polyhedra, and their usual names were given by Arthur Cayley. Meanwhile, the discovery of higher dimensions in the early 19th century led Ludwig Schläfli by 1853 to the idea of higher-dimensional polytopes. Additionally, in the late 19th century, Russian crystallographer Evgraf Fedorov completed the classification of parallelohedra, convex polyhedra that tile space by translations. 20th–21st centuries Mathematics in the 20th century dawned with Hilbert's problems, one of which, Hilbert's third problem, concerned polyhedra and their dissections. It was quickly solved by Hilbert's student Max Dehn, introducing the Dehn invariant of polyhedra. Steinitz's theorem, published by Ernst Steinitz in 1922, characterized the graphs of convex polyhedra, bringing modern ideas from graph theory and combinatorics into the study of polyhedra. The Kepler–Poinsot polyhedra may be constructed from the Platonic solids by a process called stellation. Most stellations are not regular. The study of stellations of the Platonic solids was given a big push by H.S.M. Coxeter and others in 1938, with the now famous paper The 59 icosahedra. Coxeter's analysis signalled a rebirth of interest in geometry. 
Coxeter himself went on to enumerate the star uniform polyhedra for the first time, to treat tilings of the plane as polyhedra, to discover the regular skew polyhedra and to develop the theory of complex polyhedra first discovered by Shephard in 1952, as well as making fundamental contributions to many other areas of geometry. In the second part of the twentieth century, both Branko Grünbaum and Imre Lakatos pointed out the tendency among mathematicians to define a "polyhedron" in different and sometimes incompatible ways to suit the needs of the moment. In a series of papers, Grünbaum broadened the accepted definition of a polyhedron, discovering many new regular polyhedra. At the close of the 20th century these latter ideas merged with other work on incidence complexes to create the modern idea of an abstract polyhedron (as an abstract 3-polytope), notably presented by McMullen and Schulte. Polyhedra make a frequent appearance in modern computational geometry, computer graphics, and geometric design with topics including the reconstruction of polyhedral surfaces or surface meshes from scattered data points, geodesics on polyhedral surfaces, visibility and illumination in polyhedral scenes, polycubes and other non-convex polyhedra with axis-parallel sides, algorithmic forms of Steinitz's theorem, and the still-unsolved problem of the existence of polyhedral nets for convex polyhedra. In nature For natural occurrences of regular polyhedra, see . Irregular polyhedra appear in nature as crystals.
Pregnancy (mammals)
In mammals, pregnancy is the period of reproduction during which a female carries one or more live offspring from implantation in the uterus through gestation. It begins when a fertilized zygote implants in the female's uterus, and ends once it leaves the uterus. Fertilization and implantation During copulation, the male inseminates the female. The spermatozoon fertilizes an ovum or various ova in the uterus or oviducts, and this results in one or multiple zygotes. Sometimes, a zygote can be created by humans outside of the animal's body in the artificial process of in-vitro fertilization. After fertilization, the newly formed zygote then begins to divide through mitosis, forming an embryo, which implants in the female's endometrium. At this time, the embryo usually consists of 50 cells. Development After implantation A blastocele is a small cavity in the center of the embryo, and the developing embryonic cells grow around it. A flat layer of cells then forms on the exterior of this cavity, while the zona pellucida, the blastocyst's barrier, remains the same size as before; the cells grow increasingly smaller to fit within it. This new structure with a cavity in the center and the developing cells around it is known as a blastocyst. The presence of the blastocyst means that two types of cells are forming, an inner-cell mass growing on the interior of the blastocele and cells growing on the exterior of it. In 24 to 48 hours, the zona pellucida is breached. The cells on the exterior of the blastocyst begin secreting an enzyme which erodes the epithelial uterine lining and creates a site for implantation. Placental circulation system The cells surrounding the blastocyst now destroy cells in the uterine lining, forming small pools of blood, which in turn stimulate the production of capillaries. This is the first stage in the growth of the placenta. The inner cell mass of the blastocyst divides rapidly, forming two layers. The top layer becomes the embryo, and cells from there occupy the amniotic cavity. At the same time, the bottom layer forms a small sac (if the cells begin developing in an abnormal position, an ectopic gestation may also occur at this point). Several days later, chorionic villi in the forming placenta anchor the implantation site to the uterus. A system of blood and blood vessels now develops at the point of the newly forming placenta, growing near the implantation site. The small sac inside the blastocyst begins producing red blood cells. For the next 24 hours, connective tissue develops between the developing placenta and the growing embryo. This later develops into the umbilical cord. Cellular differentiation Following this, a narrow line of cells appears on the surface of the embryo. Its growth makes the embryo undergo gastrulation, in which the three primary tissue layers of the fetus, the ectoderm, mesoderm, and endoderm, develop. The narrow line of cells begins to form the endoderm and mesoderm. The ectoderm begins to grow rapidly as a result of chemicals being produced by the mesoderm. These three layers give rise to all the various types of tissue in the body. The endoderm later forms the lining of the tongue, digestive tract, lungs, bladder and several glands. The mesoderm forms muscle, bone, and lymph tissue, as well as the interior of the lungs, heart, and reproductive and excretory systems. It also gives rise to the spleen, and produces blood cells. 
The ectoderm forms the skin, nails, hair, cornea, lining of the internal and external ear, nose, sinuses, mouth, anus, teeth, pituitary gland, mammary glands, eyes, and all parts of the nervous system. Approximately 18 days after fertilization, the embryo has divided to form much of the tissue it will need. It is shaped like a pear, where the head region is larger than the tail. The embryo's nervous system is one of the first organic systems to grow. It begins growing in a concave area known as the neural groove. The blood system continues to grow networks which allow the blood to flow around the embryo. Blood cells are already being produced and are flowing through these developing networks. Secondary blood vessels also begin to develop around the placenta, to supply it with more nutrients. Blood cells begin to form on the sac in the center of the embryo, as well as cells which begin to differentiate into blood vessels. Endocardial cells begin to form the myocardium. At about 24 days past fertilization, there is a primitive S-shaped tubule heart which begins beating. The flow of fluids throughout the embryo begins at this stage. Gestation periods For mammals, the gestation period is the time in which a fetus develops, beginning with fertilization and ending at birth. The duration of this period varies between species. For most species, the amount a fetus grows before birth determines the length of the gestation period. Smaller species normally have a shorter gestation period than larger animals. For example, a cat's gestation normally takes 58–65 days while an elephant's takes nearly 2 years (21 months). However, growth does not necessarily determine the length of gestation for all species, especially for those with a breeding season. Species that use a breeding season usually give birth during a specific time of year when food is available. Various other factors can come into play in determining the duration of gestation. For humans, male fetuses normally gestate several days longer than females and multiple pregnancies gestate for a shorter period. Ethnicity in humans is also a factor that may lengthen or shorten gestation. In dogs, there is a positive correlation between a longer gestation time and fewer members of the litter. The duration of gestation is usually longer in placental mammals than in marsupials.
Potato
The potato () is a starchy tuberous vegetable native to the Americas that is consumed as a staple food in many parts of the world. Potatoes are underground tubers of the plant Solanum tuberosum, a perennial in the nightshade family Solanaceae. Wild potato species can be found from the southern United States to southern Chile. Genetic studies show that the cultivated potato has a single origin, in the area of present-day southern Peru and extreme northwestern Bolivia. Potatoes were domesticated there about 7,000–10,000 years ago from a species in the S. brevicaule complex. Many varieties of the potato are cultivated in the Andes region of South America, where the species is indigenous. The Spanish introduced potatoes to Europe in the second half of the 16th century from the Americas. They are a staple food in many parts of the world and an integral part of much of the world's food supply. Following millennia of selective breeding, there are now over 5,000 different varieties of potatoes. The potato remains an essential crop in Europe, especially Northern and Eastern Europe, where per capita production is still the highest in the world, while the most rapid expansion in production during the 21st century was in southern and eastern Asia, with China and India leading the world production as of 2021. Like the tomato and the nightshades, the potato is in the genus Solanum; the aerial parts of the potato contain the toxin solanine. Normal potato tubers that have been grown and stored properly produce glycoalkaloids in negligible amounts, but, if sprouts and potato skins are exposed to light, tubers can become toxic. Etymology The English word "potato" comes from Spanish , in turn from Taíno , which means "sweet potato", not the plant now known as simply "potato". The name "spud" for a potato is from the 15th century spudde, a short knife or dagger, probably related to Danish spyd, "spear". From around 1840, the name transferred to the tuber itself. At least seven languages—Afrikaans, Dutch, Low Saxon, French, (West) Frisian, Hebrew, Persian and some variants of German—use a term for "potato" that means "earth apple" or "ground apple". Description Potato plants are herbaceous perennials that grow up to high. The stems are hairy. The leaves have roughly four pairs of leaflets. The flowers range from white or pink to blue or purple; they are yellow at the centre, and are insect-pollinated. The plant develops tubers to store nutrients. These are not roots but stems that form from thickened rhizomes at the tips of long thin stolons. On the surface of the tubers there are "eyes," which act as sinks to protect the vegetative buds from which the stems originate. The "eyes" are arranged in helical form. In addition, the tubers have small holes that allow breathing, called lenticels. The lenticels are circular and their number varies depending on the size of the tuber and environmental conditions. Tubers form in response to decreasing day length, although this tendency has been minimized in commercial varieties. After flowering, potato plants produce small green fruits that resemble green cherry tomatoes, each containing about 300 very small seeds. Phylogeny Like the tomato, potatoes belong to the genus Solanum, which is a member of the nightshade family, the Solanaceae. That is a diverse family of flowering plants, often poisonous, that includes the mandrake (Mandragora), deadly nightshade (Atropa), and tobacco (Nicotiana), as shown in the outline phylogenetic tree (many branches omitted). 
The most commonly cultivated potato is S. tuberosum; there are several other species. The major species grown worldwide is S. tuberosum (a tetraploid with 48 chromosomes), and modern varieties of this species are the most widely cultivated. There are also four diploid species (with 24 chromosomes): S. stenotomum, S. phureja, S. goniocalyx, and S. ajanhuiri. There are two triploid species (with 36 chromosomes): S. chaucha and S. juzepczukii. There is one pentaploid cultivated species (with 60 chromosomes): S. curtilobum. There are two major subspecies of S. tuberosum. The Andean potato, S. tuberosum andigena, is adapted to the short-day conditions prevalent in the mountainous equatorial and tropical regions where it originated. The Chilean potato S. tuberosum tuberosum, native to the Chiloé Archipelago, is in contrast adapted to the long-day conditions prevalent in the higher latitude region of southern Chile. History Domestication Wild potato species occur from the southern United States to southern Chile. The potato was first domesticated in southern Peru and northwestern Bolivia by pre-Columbian farmers, around Lake Titicaca. Potatoes were domesticated there about 7,000–10,000 years ago from a species in the S. brevicaule complex. The earliest archaeologically verified potato tuber remains have been found at the coastal site of Ancon (central Peru), dating to 2500 BC. The most widely cultivated variety, Solanum tuberosum tuberosum, is indigenous to the Chiloé Archipelago, and has been cultivated by the local indigenous people since before the Spanish conquest. Spread Following the Spanish conquest of the Inca Empire, the Spanish introduced the potato to Europe in the second half of the 16th century as part of the Columbian exchange. The staple was subsequently conveyed by European mariners (possibly including the Russian-American Company) to territories and ports throughout the world, especially their colonies. European and colonial farmers were slow to adopt farming potatoes. However, after 1750, they became an important food staple and field crop and played a major role in the European 19th century population boom. According to conservative estimates, the introduction of the potato was responsible for a quarter of the growth in Old World population and urbanization between 1700 and 1900. However, lack of genetic diversity, due to the very limited number of varieties initially introduced, left the crop vulnerable to disease. In 1845, a plant disease known as late blight, caused by the fungus-like oomycete Phytophthora infestans, spread rapidly through the poorer communities of western Ireland as well as parts of the Scottish Highlands, resulting in the crop failures that led to the Great Irish Famine. The International Potato Center, based in Lima, Peru, holds 4,870 types of potato germplasm, most of which are traditional landrace cultivars. In 2009, a draft sequence of the potato genome was made, containing 12 chromosomes and 860 million base pairs, making it a medium-sized plant genome. It had been thought that most potato cultivars derived from a single origin in southern Peru and extreme Northwestern Bolivia, from a species in the S. brevicaule complex. DNA analysis however shows that more than 99% of all current varieties of potatoes are direct descendants of a subspecies that once grew in the lowlands of south-central Chile. Most modern potatoes grown in North America arrived through European settlement and not independently from the South American sources. 
At least one wild potato species, S. fendleri, occurs in North America; it is used in breeding for resistance to a nematode species that attacks cultivated potatoes. A secondary center of genetic variability of the potato is Mexico, where important wild species that have been used extensively in modern breeding are found, such as the hexaploid S. demissum, used as a source of resistance to the devastating late blight disease (Phytophthora infestans). Another relative native to this region, Solanum bulbocastanum, has been used to genetically engineer the potato to resist potato blight. Many such wild relatives are useful for breeding resistance to P. infestans. Little of the diversity found in Solanum ancestral and wild relatives is found outside the original South American range. This makes these South American species highly valuable in breeding. The importance of the potato to humanity is recognised in the United Nations International Day of Potato, to be celebrated on 30 May each year, starting in 2024. Breeding Potatoes, both S. tuberosum and most of its wild relatives, are self-incompatible: they bear no useful fruit when self-pollinated. This trait is problematic for crop breeding, as all sexually-produced plants must be hybrids. The gene responsible for self-incompatibility, as well as mutations to disable it, are now known. Self-compatibility has successfully been introduced to diploid potatoes (including a special line of S. tuberosum) by CRISPR-Cas9. Plants having a 'Sli' gene produce pollen which is compatible with its own parent and plants with similar S genes. This gene was cloned by Wageningen University and Solynta in 2021, which would allow for faster and more focused breeding. Diploid hybrid potato breeding is a recent area of potato genetics supported by the finding that simultaneous homozygosity and fixation of donor alleles is possible. Wild potato species useful for breeding blight resistance include Solanum demissum and S. stoloniferum, among others. Varieties There are some 5,000 potato varieties worldwide, 3,000 of them in the Andes alone — mainly in Peru, Bolivia, Ecuador, Chile, and Colombia. Over 100 cultivars might be found in a single valley, and a dozen or more might be maintained by a single agricultural household. The European Cultivated Potato Database is an online collaborative database of potato variety descriptions updated and maintained by the Scottish Agricultural Science Agency within the framework of the European Cooperative Programme for Crop Genetic Resources Networks—which is run by the International Plant Genetic Resources Institute. Around 80 varieties are commercially available in the UK. For culinary purposes, varieties are often differentiated by their waxiness: floury or mealy baking potatoes have more starch (20–22%) than waxy boiling potatoes (16–18%). The distinction may also arise from variation in the comparative ratio of two different potato starch compounds: amylose and amylopectin. Amylose, a long-chain molecule, diffuses from the starch granule when cooked in water, and lends itself to dishes where the potato is mashed. Varieties that contain a slightly higher amylopectin content, which is a highly branched molecule, help the potato retain its shape after being boiled in water. Potatoes that are good for making potato chips or potato crisps are sometimes called "chipping potatoes", which means they meet the basic requirements of similar varietal characteristics, being firm, fairly clean, and fairly well-shaped. 
Immature potatoes may be sold fresh from the field as "" or "" potatoes and are particularly valued for their taste. They are typically small in size and tender, with a loose skin, and flesh containing a lower level of starch than other potatoes. In the United States they are generally either a Yukon Gold potato or a red potato, called gold creamers or red creamers respectively. In the UK, the Jersey Royal is a famous type of new potato. Dozens of potato cultivars have been selectively bred specifically for their skin or flesh color, including gold, red, and blue varieties. These contain varying amounts of phytochemicals, including carotenoids for gold/yellow or polyphenols for red or blue cultivars. Carotenoid compounds include provitamin A alpha-carotene and beta-carotene, which are converted to the essential nutrient, vitamin A, during digestion. Anthocyanins mainly responsible for red or blue pigmentation in potato cultivars do not have nutritional significance, but are used for visual variety and consumer appeal. In 2010, potatoes were bioengineered specifically for these pigmentation traits. Genetic engineering Genetic research has produced several genetically modified varieties. 'New Leaf', owned by Monsanto Company, incorporates genes from Bacillus thuringiensis (source of most Bt toxins in transcrop use), which confers resistance to the Colorado potato beetle; 'New Leaf Plus' and 'New Leaf Y', approved by US regulatory agencies during the 1990s, also include resistance to viruses. McDonald's, Burger King, Frito-Lay, and Procter & Gamble announced they would not use genetically modified potatoes, and Monsanto published its intent to discontinue the line in March 2001. Potato starch contains two types of glucan, amylose and amylopectin, the latter of which is most industrially useful. Waxy potato varieties produce waxy potato starch, which is almost entirely amylopectin, with little or no amylose. BASF developed the 'Amflora' potato, which was modified to express antisense RNA to inactivate the gene for granule bound starch synthase, an enzyme which catalyzes the formation of amylose. 'Amflora' potatoes therefore produce starch consisting almost entirely of amylopectin, and are thus more useful for the starch industry. In 2010, the European Commission cleared the way for 'Amflora' to be grown in the European Union for industrial purposes only—not for food. Nevertheless, under EU rules, individual countries have the right to decide whether they will allow this potato to be grown on their territory. Commercial planting of 'Amflora' was expected in the Czech Republic and Germany in the spring of 2010, and Sweden and the Netherlands in subsequent years. The 'Fortuna' GM potato variety developed by BASF was made resistant to late blight by introgressing two resistance genes, and , from S. bulbocastanum, a wild potato native to Mexico. is a nucleotide-binding leucine-rich repeat (NB-LRR/NLR), an R-gene-produced immunoreceptor. In October 2011, BASF requested cultivation and marketing approval as a feed and food from the EFSA. In 2012, GMO development in Europe was stopped by BASF. In November 2014, the United States Department of Agriculture (USDA) approved a genetically modified potato developed by Simplot, which contains genetic modifications that prevent bruising and produce less acrylamide when fried than conventional potatoes; the modifications do not cause new proteins to be made, but rather prevent proteins from being made via RNA interference. 
Genetically modified varieties have met public resistance in the U.S. and in the European Union. Cultivation Seed potatoes Potatoes are generally grown from "seed potatoes", tubers specifically grown to be free from disease and to provide consistent and healthy plants. To be disease free, the areas where seed potatoes are grown are selected with care. In the US, this restricts production of seed potatoes to only 15 states out of all 50 states where potatoes are grown. These locations are selected for their cold, hard winters that kill pests and summers with long sunshine hours for optimum growth. In the UK, most seed potatoes originate in Scotland, in areas where westerly winds reduce aphid attacks and the spread of potato virus pathogens. Phases of growth Potato growth can be divided into five phases. During the first phase, sprouts emerge from the seed potatoes and root growth begins. During the second, photosynthesis begins as the plant develops leaves and branches above-ground and stolons develop from lower leaf axils on the below-ground stem. In the third phase the tips of the stolons swell, forming new tubers, and the shoots continue to grow, with flowers typically developing soon after. Tuber bulking occurs during the fourth phase, when the plant begins investing the majority of its resources in its newly formed tubers. At this phase, several factors are critical to a good yield: optimal soil moisture and temperature, soil nutrient availability and balance, and resistance to pest attacks. The fifth phase is the maturation of the tubers: the leaves and stems senesce and the tuber skins harden. New tubers may start growing at the surface of the soil. Since exposure to light leads to an undesirable greening of the skins and the development of solanine as a protection from the sun's rays, growers cover surface tubers. Commercial growers cover them by piling additional soil around the base of the plant as it grows (called "hilling" up, or in British English "earthing up"). An alternative method, used by home gardeners and smaller-scale growers, involves covering the growing area with mulches such as straw or plastic sheets. At farm scale, potatoes require a well-drained neutral or mildly acidic soil (pH 6 or 7) such as a sandy loam. The soil is prepared using deep tillage, for example with a chisel plow or ripper. In areas where irrigation is needed, the field is leveled using a landplane so that water can be supplied evenly. Manure can be added after initial irrigation; the soil is then broken up with a disc harrow. The potatoes are planted using a potato planter machine in rows apart. At garden scale, potatoes are planted in trenches or individual holes some deep in soil, preferably with additional organic matter such as garden compost or manure. Alternatively, they can be planted in containers or bags filled with a free-draining compost. Potatoes are sensitive to heavy frosts, which damage them in the ground or when stored. Pests and diseases The historically significant Phytophthora infestans, the cause of late blight, remains an ongoing problem in Europe and the United States. Other potato diseases include Rhizoctonia, Sclerotinia, Pectobacterium carotovorum (black leg), powdery mildew, powdery scab and leafroll virus. Insects that commonly transmit potato diseases or damage the plants include the Colorado potato beetle, the potato tuber moth, the green peach aphid (Myzus persicae), the potato aphid, Tuta absoluta, beet leafhoppers, thrips, and mites. 
The Colorado potato beetle is considered the most important insect defoliator of potatoes, devastating entire crops. The potato cyst nematode is a microscopic worm that feeds on the roots, thus causing the potato plants to wilt. Since its eggs can survive in the soil for several years, crop rotation is recommended. Harvest On a small scale, potatoes can be harvested using a hoe or spade, or simply by hand. Commercial harvesting is done with large potato harvesters, which scoop up the plant and surrounding earth. This is transported up an apron chain consisting of steel links several feet wide, which separates some of the earth. The chain deposits into an area where further separation occurs. The most complex designs use vine choppers and shakers, along with a blower system to separate the potatoes from the plant. The result is then usually run past workers who continue to sort out plant material, stones, and rotten potatoes before the potatoes are continuously delivered to a wagon or truck. Further inspection and separation occurs when the potatoes are unloaded from the field vehicles and put into storage. Potatoes are usually cured after harvest to improve skin-set. Skin-set is the process by which the skin of the potato becomes resistant to skinning damage. Potato tubers may be susceptible to skinning at harvest and suffer skinning damage during harvest and handling operations. Curing allows the skin to fully set and any wounds to heal. Wound-healing prevents infection and water-loss from the tubers during storage. Curing is normally done at relatively warm temperatures () with high humidity and good gas-exchange if at all possible. Storage Storage facilities need to be carefully designed to keep the potatoes alive and slow the natural process of sprouting which involves the breakdown of starch. It is crucial that the storage area be dark, ventilated well, and, for long-term storage, maintained at temperatures near . For short-term storage, temperatures of about are preferred. Temperatures below convert the starch in potatoes into sugar, which alters their taste and cooking qualities and leads to higher acrylamide levels in the cooked product, especially in deep-fried dishes. The discovery of acrylamides in starchy foods in 2002 has caused concern, but it is unlikely that the acrylamides in food, even if the food is somewhat burnt, cause cancer in humans. Chemicals are used to suppress sprouting of tubers during storage. Chlorpropham is the main chemical used, but it has been banned in the EU over toxicity concerns. Alternatives include ethylene, spearmint and orange oils, and 1,4-dimethylnaphthalene. Under optimum conditions in commercial warehouses, potatoes can be stored for up to 10–12 months. The commercial storage and retrieval of potatoes involves several phases: first drying surface moisture; wound healing at 85% to 95% relative humidity and temperatures below ; a staged cooling phase; a holding phase; and a reconditioning phase, during which the tubers are slowly warmed. Mechanical ventilation is used at various points during the process to prevent condensation and the accumulation of carbon dioxide. Production In 2021, world production of potatoes was , led by China with 25% of the total. Other major producers were India and Ukraine (table). The world dedicated to potato cultivation in 2010; the world average yield was . The United States was the most productive country, with a nationwide average yield of . 
New Zealand farmers have demonstrated some of the best commercial yields in the world, ranging between 60 and 80 tonnes per hectare, with some reporting yields of 88 tonnes of potatoes per hectare. There is a large gap among countries between high and low yields, even with the same variety of potato. Average potato yields in developed economies range between . China and India accounted for over a third of the world's production in 2010, and had yields of respectively. The yield gap between farms in developing economies and developed economies represents an opportunity loss of over of potato, or an amount greater than 2010 world potato production. Potato crop yields are determined by factors such as the crop breed, seed age and quality, crop management practices and the plant environment. Improvements in one or more of these yield determinants, and a closure of the yield gap, could be a major boost to food supply and farmer incomes in the developing world. The food energy yield of potatoes—about —is higher than that of maize (), rice (), wheat (), or soybeans (). Impact of climate change on production Climate change is predicted to have significant effects on global potato production. Like many crops, potatoes are likely to be affected by changes in atmospheric carbon dioxide, temperature and precipitation, as well as interactions between these factors. As well as affecting potatoes directly, climate change will also affect the distributions and populations of many potato diseases and pests. While the potato is less important than maize, rice, wheat and soybeans, which are collectively responsible for around two-thirds of all calories consumed by humans (both directly and indirectly as animal feed), it is still one of the world's most important food crops. Altogether, one 2003 estimate suggests that future (2040–2069) worldwide potato yield would be 18–32% lower than it was at the time, driven by declines in hotter areas like Sub-Saharan Africa, unless farmers and potato cultivars can adapt to the new environment. Potato plants and crop yields are predicted to benefit from the CO2 fertilization effect, which would increase photosynthetic rates and therefore growth, reduce water consumption through lower transpiration from stomata and increase starch content in the edible tubers. However, potatoes are more sensitive to soil water deficits than some other staple crops like wheat. In the UK, the amount of arable land suitable for rainfed potato production is predicted to decrease by at least 75%. These changes are likely to lead to increased demand for irrigation water, particularly during the potato growing season. Potatoes grow best under temperate conditions. Temperatures above have negative effects on potato crops, from physiological damage such as brown spots on tubers, to slower growth, premature sprouting, and lower starch content. These effects reduce crop yield, affecting both the number and the weight of tubers. As a result, areas where current temperatures are near the limits of potatoes' temperature range (e.g. much of sub-Saharan Africa) will likely suffer large reductions in potato crop yields in the future. On the other hand, low temperatures reduce potato growth and present a risk of frost damage. Changes in pests and diseases Climate change is predicted to affect many potato pests and diseases. These include: Insect pests such as the potato tuber moth and Colorado potato beetle, which are predicted to spread into areas currently too cold for them. 
Aphids which act as vectors for many potato viruses and will spread under increased temperatures. Pathogens causing potato blackleg disease (e.g. Dickeya) grow and reproduce faster at higher temperatures. Bacterial infections such as Ralstonia solanacearum will benefit from higher temperatures and spread more easily through flash flooding. Late blight benefits from higher temperatures and wetter conditions. Late blight is predicted to become a greater threat in some areas (e.g. in Finland) and become a lesser threat in others (e.g. in the United Kingdom). Adaptation strategies Potato production is expected to decline in many areas due to hotter temperatures and decreased water availability. Conversely, production is predicted to become possible in high altitude and latitude areas where it has been limited by frost damage, such as in Canada and Russia. This will shift potato production to cooler areas, mitigating much of the projected decline in yield. However, this may trigger competition for land between potato crops and other land uses, mostly due to changes in water and temperature regimes. The other approach is through the development of varieties or cultivars which would be more adapted to altered conditions. This can be done through 'traditional' plant breeding techniques and genetic modification. These techniques allow for the selection of specific traits as a new cultivar is developed. Certain traits, such as heat stress tolerance, drought tolerance, fast growth/early maturation and disease resistance, may play an important role in creating new cultivars able to maintain yields under stressors induced by climate change. For instance, developing cultivars with greater heat stress tolerance would be critical for maintaining yields in countries with potato production areas near current cultivars' maximum temperature limits (e.g. Sub-Saharan Africa, India). Superior drought resistance can be achieved through improved water use efficiency (amount of food produced per amount of water used) or the ability to recover from short drought periods and still produce acceptable yields. Further, selecting for deeper root systems may reduce the need for irrigation. Nutrition In a reference amount of , a boiled potato with skin supplies 87 calories and is 77% water, 20% carbohydrates (including 2% dietary fiber in the skin and flesh), 2% protein, and contains negligible fat (table). The protein content is comparable to other starchy vegetable staples, as well as grains. Boiled potatoes are a rich source (20% or more of the Daily Value, DV) of vitamin B6 (23% DV), and contain a moderate amount of vitamin C (16% DV) and B vitamins, such as thiamine, niacin, and pantothenic acid (10% DV each). Boiled potatoes do not supply significant amounts of dietary minerals (table). The potato is rarely eaten raw because raw potato starch is poorly digested by humans. Depending on the cultivar and preparation method, potatoes can have a high glycemic index (GI) and so are often excluded from the diets of individuals trying to follow a low-GI diet. There is a lack of evidence on the effect of potato consumption on obesity and diabetes. In the UK, potatoes are not considered by the National Health Service as counting or contributing towards the recommended daily five portions of fruit and vegetables, the 5-A-Day program. Toxicity Raw potatoes contain toxic glycoalkaloids, of which the most prevalent are solanine and chaconine. 
Solanine is found in other plants in the same family, Solanaceae, which includes such plants as deadly nightshade (Atropa belladonna), henbane (Hyoscyamus niger) and tobacco (Nicotiana spp.), as well as food plants like tomato. These compounds, which protect the potato plant from its predators, are especially concentrated in the aerial parts of the plant. The tubers are low in these toxins, unless they are exposed to light, which makes them go green. Exposure to light, physical damage, and age increase glycoalkaloid content within the tuber. Different potato varieties contain different levels of glycoalkaloids. The 'Lenape' variety, released in 1967, was withdrawn in 1970 as it contained high levels of glycoalkaloids. Since then, breeders of new varieties test for this, sometimes discarding an otherwise promising cultivar. Breeders try to keep glycoalkaloid levels below . However, when these commercial varieties turn green, their solanine concentrations can go well above this limit, with higher levels in the potato's skin. Uses Culinary Potato dishes vary around the world. Peruvian cuisine naturally contains the potato as a primary ingredient in many dishes, as around 3,000 varieties of the tuber are grown there. Chuño is a freeze-dried potato product traditionally made by Quechua and Aymara communities of Peru and Bolivia. In the UK, potatoes form part of the traditional dish fish and chips. Roast potatoes are commonly served as part of a Sunday roast dinner and mashed potatoes form a major component of several other traditional dishes, such as shepherd's pie, bubble and squeak, and bangers and mash. New potatoes may be cooked with mint and are often served with butter. In Germany, Northern Europe (Finland, Latvia and especially Scandinavian countries), Eastern Europe (Russia, Belarus and Ukraine) and Poland, newly harvested, early ripening varieties are considered a special delicacy. Boiled whole and served un-peeled with dill, these "new potatoes" are traditionally consumed with Baltic herring. Puddings made from grated potatoes (kugel, kugelis, and potato babka) are popular items of Ashkenazi, Lithuanian, and Belarusian cuisine. Cepelinai, the national dish of Lithuania, are dumplings made from boiled grated potatoes, usually stuffed with minced meat. In Italy, in the Friuli region, potatoes serve to make a type of pasta called gnocchi. Potato is used in northern China where rice is not easily grown, a popular dish being (qīng jiāo tǔ dòu sī), made with green pepper, vinegar and thin slices of potato. In the winter, roadside sellers in northern China sell roasted potatoes. Other uses Potatoes are sometimes used to brew alcoholic spirits such as vodka, poitín, akvavit, and brännvin. Potatoes are used as fodder for livestock. They may be made into silage which can be stored for some months before use. Potato starch is used in the food industry as a thickener and binder for soups and sauces, in the textile industry as an adhesive, and in the paper industry for the manufacturing of papers and boards. Potatoes are commonly used in plant research. The consistent parenchyma tissue, the clonal nature of the plant and the low metabolic activity make it an ideal model tissue for experiments on wound-response studies and electron transport. Cultural significance In mythology In Inca mythology, a daughter of the earth mother Pachamama, Axomamma, is the goddess of potatoes. She ensured the fertility of the soil and the growth of the tubers. 
According to Iroquois mythology, the first potatoes grew out of Earth Woman's feet after she died giving birth to her twin sons, Sapling and Flint. In art The potato has been an essential crop in the Andes since the pre-Columbian era. The Moche culture from Northern Peru made ceramics from the earth, water, and fire. This pottery was a sacred substance, formed in significant shapes and used to represent important themes. Potatoes are represented anthropomorphically as well as naturally. During the late 19th century, numerous images of potato harvesting appeared in European art, including the works of Willem Witsen and Anton Mauve. Van Gogh's 1885 painting The Potato Eaters portrays a family eating potatoes. Van Gogh said he wanted to depict peasants as they really were. He deliberately chose coarse and ugly models, thinking that they would be natural and unspoiled in his finished work. Jean-François Millet's The Potato Harvest depicts peasants working in the plains between Barbizon and Chailly. It presents a theme representative of the peasants' struggle for survival. Millet's technique for this work incorporated paste-like pigments thickly applied over a coarsely textured canvas. In popular culture Invented in 1949, and marketed and sold commercially by Hasbro in 1952, Mr. Potato Head is an American toy that consists of a plastic potato and attachable plastic parts, such as ears and eyes, to make a face. It was the first toy ever advertised on television. In the 2015 fictional film, The Martian, stranded astronaut and botanist, Mark Watney, cultivates potatoes in the artificial crew habitat using Martian soil fertilized with frozen feces, and produces water from unused rocket fuel.
Biology and health sciences
Food and drink
null
23535
https://en.wikipedia.org/wiki/Photon
Photon
A photon () is an elementary particle that is a quantum of the electromagnetic field, including electromagnetic radiation such as light and radio waves, and the force carrier for the electromagnetic force. Photons are massless particles that can move no faster than the speed of light measured in vacuum. The photon belongs to the class of boson particles. As with other elementary particles, photons are best explained by quantum mechanics and exhibit wave–particle duality, their behavior featuring properties of both waves and particles. The modern photon concept originated during the first two decades of the 20th century with the work of Albert Einstein, who built upon the research of Max Planck. While Planck was trying to explain how matter and electromagnetic radiation could be in thermal equilibrium with one another, he proposed that the energy stored within a material object should be regarded as composed of an integer number of discrete, equal-sized parts. To explain the photoelectric effect, Einstein introduced the idea that light itself is made of discrete units of energy. In 1926, Gilbert N. Lewis popularized the term photon for these energy units. Subsequently, many other experiments validated Einstein's approach. In the Standard Model of particle physics, photons and other elementary particles are described as a necessary consequence of physical laws having a certain symmetry at every point in spacetime. The intrinsic properties of particles, such as charge, mass, and spin, are determined by gauge symmetry. The photon concept has led to momentous advances in experimental and theoretical physics, including lasers, Bose–Einstein condensation, quantum field theory, and the probabilistic interpretation of quantum mechanics. It has been applied to photochemistry, high-resolution microscopy, and measurements of molecular distances. Moreover, photons have been studied as elements of quantum computers, and for applications in optical imaging and optical communication such as quantum cryptography. Nomenclature The word quanta (singular quantum, Latin for how much) was used before 1900 to mean particles or amounts of different quantities, including electricity. In 1900, the German physicist Max Planck was studying black-body radiation, and he suggested that the experimental observations, specifically at shorter wavelengths, would be explained if the energy was "made up of a completely determinate number of finite equal parts", which he called "energy elements". In 1905, Albert Einstein published a paper in which he proposed that many light-related phenomena—including black-body radiation and the photoelectric effect—would be better explained by modelling electromagnetic waves as consisting of spatially localized, discrete energy quanta. He called these a light quantum (German: ein Lichtquant). The name photon derives from the Greek word for light, (transliterated phôs). Arthur Compton used photon in 1928, referring to Gilbert N. Lewis, who coined the term in a letter to Nature on 18 December 1926. The same name was used earlier but was never widely adopted before Lewis: in 1916 by the American physicist and psychologist Leonard T. Troland, in 1921 by the Irish physicist John Joly, in 1924 by the French physiologist René Wurmser (1890–1993), and in 1926 by the French physicist Frithiof Wolfers (1891–1971). The name was suggested initially as a unit related to the illumination of the eye and the resulting sensation of light and was used later in a physiological context. 
Although Wolfers's and Lewis's theories were contradicted by many experiments and never accepted, the new name was adopted by most physicists very soon after Compton used it. In physics, a photon is usually denoted by the symbol γ (the Greek letter gamma). This symbol for the photon probably derives from gamma rays, which were discovered in 1900 by Paul Villard, named by Ernest Rutherford in 1903, and shown to be a form of electromagnetic radiation in 1914 by Rutherford and Edward Andrade. In chemistry and optical engineering, photons are usually symbolized by hν, which is the photon energy, where h is the Planck constant and the Greek letter ν (nu) is the photon's frequency. Physical properties The photon has no electric charge, is generally considered to have zero rest mass and is a stable particle. The experimental upper limit on the photon mass is very small, on the order of 10⁻⁵⁰ kg; its lifetime would be more than 10¹⁸ years. For comparison, the age of the universe is about years. In a vacuum, a photon has two possible polarization states. The photon is the gauge boson for electromagnetism, and therefore all other quantum numbers of the photon (such as lepton number, baryon number, and flavour quantum numbers) are zero. Also, the photon obeys Bose–Einstein statistics, and not Fermi–Dirac statistics. That is, photons do not obey the Pauli exclusion principle and more than one can occupy the same bound quantum state. Photons are emitted in many natural processes. For example, when a charge is accelerated it emits synchrotron radiation. During a molecular, atomic or nuclear transition to a lower energy level, photons of various energy will be emitted, ranging from radio waves to gamma rays. Photons can also be emitted when a particle and its corresponding antiparticle are annihilated (for example, electron–positron annihilation). Relativistic energy and momentum In empty space, the photon moves at c (the speed of light) and its energy E and momentum are related by E = pc, where p is the magnitude of the momentum vector p. This derives from the following relativistic relation, with m = 0: E² = p²c² + m²c⁴. The energy and momentum of a photon depend only on its frequency (ν) or, inversely, its wavelength (λ): E = ħω = hν = hc/λ and p = ħk, where k is the wave vector, whose magnitude k = 2π/λ is the wave number, ω = 2πν is the angular frequency, and ħ = h/2π is the reduced Planck constant. Since k points in the direction of the photon's propagation, the magnitude of its momentum is p = ħk = hν/c = h/λ. Polarization and spin angular momentum The photon also carries spin angular momentum, which is related to photon polarization. (Beams of light also exhibit properties described as orbital angular momentum of light). The angular momentum of the photon has two possible values, either +ħ or −ħ. These two possible values correspond to the two possible pure states of circular polarization. Collections of photons in a light beam may have mixtures of these two values; a linearly polarized light beam will act as if it were composed of equal numbers of the two possible angular momenta. The spin angular momentum of light does not depend on its frequency, and was experimentally verified by C. V. Raman and S. Bhagavantam in 1931. Antiparticle annihilation The collision of a particle with its antiparticle can create photons. In free space at least two photons must be created since, in the center of momentum frame, the colliding antiparticles have no net momentum, whereas a single photon always has momentum (determined by the photon's frequency or wavelength, which cannot be zero). 
Hence, conservation of momentum (or equivalently, translational invariance) requires that at least two photons are created, with zero net momentum. The energy of the two photons, or, equivalently, their frequency, may be determined from conservation of four-momentum. Seen another way, the photon can be considered as its own antiparticle (thus an "antiphoton" is simply a normal photon with opposite momentum, equal polarization, and 180° out of phase). The reverse process, pair production, is the dominant mechanism by which high-energy photons such as gamma rays lose energy while passing through matter. That process is the reverse of "annihilation to one photon" allowed in the electric field of an atomic nucleus. The classical formulae for the energy and momentum of electromagnetic radiation can be re-expressed in terms of photon events. For example, the pressure of electromagnetic radiation on an object derives from the transfer of photon momentum per unit time and unit area to that object, since pressure is force per unit area and force is the change in momentum per unit time. Experimental checks on photon mass Current commonly accepted physical theories imply or assume the photon to be strictly massless. If photons were not purely massless, their speeds would vary with frequency, with lower-energy (redder) photons moving slightly slower than higher-energy photons. Relativity would be unaffected by this; the so-called speed of light, c, would then not be the actual speed at which light moves, but a constant of nature which is the upper bound on speed that any object could theoretically attain in spacetime. Thus, it would still be the speed of spacetime ripples (gravitational waves and gravitons), but it would not be the speed of photons. If a photon did have non-zero mass, there would be other effects as well. Coulomb's law would be modified and the electromagnetic field would have an extra physical degree of freedom. These effects yield more sensitive experimental probes of the photon mass than the frequency dependence of the speed of light. If Coulomb's law is not exactly valid, then that would allow the presence of an electric field to exist within a hollow conductor when it is subjected to an external electric field. This provides a means for precision tests of Coulomb's law. A null result of such an experiment has set a limit of . Sharper upper limits on the mass of light have been obtained in experiments designed to detect effects caused by the galactic vector potential. Although the galactic vector potential is large because the galactic magnetic field exists on great length scales, only the magnetic field would be observable if the photon is massless. In the case that the photon has mass, the mass term mAA would affect the galactic plasma. The fact that no such effects are seen implies an upper bound on the photon mass of . The galactic vector potential can also be probed directly by measuring the torque exerted on a magnetized ring. Such methods were used to obtain the sharper upper limit of (the equivalent of ) given by the Particle Data Group. These sharp limits from the non-observation of the effects caused by the galactic vector potential have been shown to be model-dependent. If the photon mass is generated via the Higgs mechanism then the upper limit of from the test of Coulomb's law is valid. Historical development In most theories up to the eighteenth century, light was pictured as being made of particles. 
Since particle models cannot easily account for the refraction, diffraction and birefringence of light, wave theories of light were proposed by René Descartes (1637), Robert Hooke (1665), and Christiaan Huygens (1678); however, particle models remained dominant, chiefly due to the influence of Isaac Newton. In the early 19th century, Thomas Young and August Fresnel clearly demonstrated the interference and diffraction of light, and by 1850 wave models were generally accepted. James Clerk Maxwell's 1865 prediction that light was an electromagnetic wave – which was confirmed experimentally in 1888 by Heinrich Hertz's detection of radio waves – seemed to be the final blow to particle models of light. The Maxwell wave theory, however, does not account for all properties of light. The Maxwell theory predicts that the energy of a light wave depends only on its intensity, not on its frequency; nevertheless, several independent types of experiments show that the energy imparted by light to atoms depends only on the light's frequency, not on its intensity. For example, some chemical reactions are provoked only by light of frequency higher than a certain threshold; light of frequency lower than the threshold, no matter how intense, does not initiate the reaction. Similarly, electrons can be ejected from a metal plate by shining light of sufficiently high frequency on it (the photoelectric effect); the energy of the ejected electron is related only to the light's frequency, not to its intensity. At the same time, investigations of black-body radiation carried out over four decades (1860–1900) by various researchers culminated in Max Planck's hypothesis that the energy of any system that absorbs or emits electromagnetic radiation of frequency is an integer multiple of an energy quantum As shown by Albert Einstein, some form of energy quantization must be assumed to account for the thermal equilibrium observed between matter and electromagnetic radiation; for this explanation of the photoelectric effect, Einstein received the 1921 Nobel Prize in physics. Since the Maxwell theory of light allows for all possible energies of electromagnetic radiation, most physicists assumed initially that the energy quantization resulted from some unknown constraint on the matter that absorbs or emits the radiation. In 1905, Einstein was the first to propose that energy quantization was a property of electromagnetic radiation itself. Although he accepted the validity of Maxwell's theory, Einstein pointed out that many anomalous experiments could be explained if the energy of a Maxwellian light wave were localized into point-like quanta that move independently of one another, even if the wave itself is spread continuously over space. In 1909 and 1916, Einstein showed that, if Planck's law regarding black-body radiation is accepted, the energy quanta must also carry momentum making them full-fledged particles. This photon momentum was observed experimentally by Arthur Compton, for which he received the Nobel Prize in 1927. The pivotal question then, was how to unify Maxwell's wave theory of light with its experimentally observed particle nature. The answer to this question occupied Albert Einstein for the rest of his life, and was solved in quantum electrodynamics and its successor, the Standard Model. (See and , below.) Einstein's 1905 predictions were verified experimentally in several ways in the first two decades of the 20th century, as recounted in Robert Millikan's Nobel lecture. 
However, before Compton's experiment showed that photons carried momentum proportional to their wave number (1922), most physicists were reluctant to believe that electromagnetic radiation itself might be particulate. (See, for example, the Nobel lectures of Wien, Planck and Millikan.) Instead, there was a widespread belief that energy quantization resulted from some unknown constraint on the matter that absorbed or emitted radiation. Attitudes changed over time. In part, the change can be traced to experiments such as those revealing Compton scattering, where it was much more difficult not to ascribe quantization to light itself to explain the observed results. Even after Compton's experiment, Niels Bohr, Hendrik Kramers and John Slater made one last attempt to preserve the Maxwellian continuous electromagnetic field model of light, the so-called BKS theory. An important feature of the BKS theory is how it treated the conservation of energy and the conservation of momentum. In the BKS theory, energy and momentum are only conserved on the average across many interactions between matter and radiation. However, refined Compton experiments showed that the conservation laws hold for individual interactions. Accordingly, Bohr and his co-workers gave their model "as honorable a funeral as possible". Nevertheless, the failures of the BKS model inspired Werner Heisenberg in his development of matrix mechanics. A few physicists persisted in developing semiclassical models in which electromagnetic radiation is not quantized, but matter appears to obey the laws of quantum mechanics. Although the evidence from chemical and physical experiments for the existence of photons was overwhelming by the 1970s, this evidence could not be considered as absolutely definitive; since it relied on the interaction of light with matter, and a sufficiently complete theory of matter could in principle account for the evidence. Nevertheless, all semiclassical theories were refuted definitively in the 1970s and 1980s by photon-correlation experiments. Hence, Einstein's hypothesis that quantization is a property of light itself is considered to be proven. Wave–particle duality and uncertainty principles Photons obey the laws of quantum mechanics, and so their behavior has both wave-like and particle-like aspects. When a photon is detected by a measuring instrument, it is registered as a single, particulate unit. However, the probability of detecting a photon is calculated by equations that describe waves. This combination of aspects is known as wave–particle duality. For example, the probability distribution for the location at which a photon might be detected displays clearly wave-like phenomena such as diffraction and interference. A single photon passing through a double slit has its energy received at a point on the screen with a probability distribution given by its interference pattern determined by Maxwell's wave equations. However, experiments confirm that the photon is not a short pulse of electromagnetic radiation; a photon's Maxwell waves will diffract, but photon energy does not spread out as it propagates, nor does this energy divide when it encounters a beam splitter. Rather, the received photon acts like a point-like particle since it is absorbed or emitted as a whole by arbitrarily small systems, including systems much smaller than its wavelength, such as an atomic nucleus (≈10−15 m across) or even the point-like electron. 
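The Compton experiments mentioned above have a simple quantitative signature: the scattered photon's wavelength grows by Δλ = (h/(m_e c))(1 − cos θ), the electron's Compton wavelength times a geometric factor. The following Python sketch is not part of the original article and the 71 pm input is merely an illustrative X-ray wavelength; it simply evaluates that standard formula.

```python
import math

# Illustrative sketch: Compton wavelength shift of a photon scattering off a
# free electron, delta_lambda = (h / (m_e * c)) * (1 - cos(theta)).
h = 6.62607015e-34      # Planck constant, J*s
c = 2.99792458e8        # speed of light in vacuum, m/s
m_e = 9.1093837015e-31  # electron mass, kg

compton_wavelength = h / (m_e * c)   # ~2.43e-12 m

def shifted_wavelength(lambda_in_m: float, theta_deg: float) -> float:
    """Wavelength of the scattered photon for a given scattering angle."""
    return lambda_in_m + compton_wavelength * (1 - math.cos(math.radians(theta_deg)))

# An X-ray photon at 71 pm (illustrative value) scattered through 90 and 180 degrees:
for theta in (90, 180):
    print(f"theta = {theta:3d} deg -> lambda' = {shifted_wavelength(71e-12, theta):.3e} m")
```

The shift is largest for back-scattering (θ = 180°), where the photon gives up the most momentum to the electron, which is exactly the frequency change that a purely wave-based picture could not explain.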
While many introductory texts treat photons using the mathematical techniques of non-relativistic quantum mechanics, this is in some ways an awkward oversimplification, as photons are by nature intrinsically relativistic. Because photons have zero rest mass, no wave function defined for a photon can have all the properties familiar from wave functions in non-relativistic quantum mechanics. In order to avoid these difficulties, physicists employ the second-quantized theory of photons described below, quantum electrodynamics, in which photons are quantized excitations of electromagnetic modes. Another difficulty is finding the proper analogue for the uncertainty principle, an idea frequently attributed to Heisenberg, who introduced the concept in analyzing a thought experiment involving an electron and a high-energy photon. However, Heisenberg did not give precise mathematical definitions of what the "uncertainty" in these measurements meant. The precise mathematical statement of the position–momentum uncertainty principle is due to Kennard, Pauli, and Weyl. The uncertainty principle applies to situations where an experimenter has a choice of measuring either one of two "canonically conjugate" quantities, like the position and the momentum of a particle. According to the uncertainty principle, no matter how the particle is prepared, it is not possible to make a precise prediction for both of the two alternative measurements: if the outcome of the position measurement is made more certain, the outcome of the momentum measurement becomes less so, and vice versa. A coherent state minimizes the overall uncertainty as far as quantum mechanics allows. Quantum optics makes use of coherent states for modes of the electromagnetic field. There is a tradeoff, reminiscent of the position–momentum uncertainty relation, between measurements of an electromagnetic wave's amplitude and its phase. This is sometimes informally expressed in terms of the uncertainty in the number of photons present in the electromagnetic wave, , and the uncertainty in the phase of the wave, . However, this cannot be an uncertainty relation of the Kennard–Pauli–Weyl type, since unlike position and momentum, the phase cannot be represented by a Hermitian operator. Bose–Einstein model of a photon gas In 1924, Satyendra Nath Bose derived Planck's law of black-body radiation without using any electromagnetism, but rather by using a modification of coarse-grained counting of phase space. Einstein showed that this modification is equivalent to assuming that photons are rigorously identical and that it implied a "mysterious non-local interaction", now understood as the requirement for a symmetric quantum mechanical state. This work led to the concept of coherent states and the development of the laser. In the same papers, Einstein extended Bose's formalism to material particles (bosons) and predicted that they would condense into their lowest quantum state at low enough temperatures; this Bose–Einstein condensation was observed experimentally in 1995. It was later used by Lene Hau to slow, and then completely stop, light in 1999 and 2001. The modern view on this is that photons are, by virtue of their integer spin, bosons (as opposed to fermions with half-integer spin). By the spin-statistics theorem, all bosons obey Bose–Einstein statistics (whereas all fermions obey Fermi–Dirac statistics). 
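As a numerical illustration of the Bose–Einstein statistics just described, the short sketch below (ours, under the standard assumption of a single electromagnetic mode in thermal equilibrium) evaluates the mean photon occupation number 1/(exp(hν/kT) − 1), the expression from which Planck's law of black-body radiation is built.

```python
import math

# Illustrative sketch: mean thermal occupation of one electromagnetic mode,
# n_bar = 1 / (exp(h*nu / (k_B*T)) - 1), the Bose-Einstein distribution for photons.
h = 6.62607015e-34    # Planck constant, J*s
k_B = 1.380649e-23    # Boltzmann constant, J/K

def mean_occupation(nu_hz: float, temperature_k: float) -> float:
    """Mean photon number of a mode of frequency nu_hz at temperature T."""
    x = h * nu_hz / (k_B * temperature_k)
    return 1.0 / math.expm1(x)   # expm1 keeps the low-frequency limit accurate

# A 1 THz mode at room temperature versus a visible-light mode (~500 THz):
for nu in (1e12, 5e14):
    print(f"nu = {nu:.0e} Hz, T = 300 K -> n_bar = {mean_occupation(nu, 300):.3e}")
```

At room temperature a far-infrared mode holds several photons on average, while a visible-light mode is essentially empty, which is why thermal sources glow in the infrared long before they glow visibly.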
Stimulated and spontaneous emission In 1916, Albert Einstein showed that Planck's radiation law could be derived from a semi-classical, statistical treatment of photons and atoms, which implies a link between the rates at which atoms emit and absorb photons. The condition follows from the assumption that functions of the emission and absorption of radiation by the atoms are independent of each other, and that thermal equilibrium is made by way of the radiation's interaction with the atoms. Consider a cavity in thermal equilibrium with all parts of itself and filled with electromagnetic radiation and that the atoms can emit and absorb that radiation. Thermal equilibrium requires that the energy density of photons with frequency ν (which is proportional to their number density) is, on average, constant in time; hence, the rate at which photons of any particular frequency are emitted must equal the rate at which they are absorbed. Einstein began by postulating simple proportionality relations for the different reaction rates involved. In his model, the rate for a system to absorb a photon of frequency ν and transition from a lower energy to a higher energy is proportional to the number of atoms with the lower energy and to the energy density of ambient photons of that frequency, with a rate constant for absorption. For the reverse process, there are two possibilities: spontaneous emission of a photon, or the emission of a photon initiated by the interaction of the atom with a passing photon and the return of the atom to the lower-energy state. Following Einstein's approach, the corresponding rate for the emission of photons of frequency ν and transition from a higher energy to a lower energy has two terms: one with a rate constant for emitting a photon spontaneously, and one with a rate constant for emissions in response to ambient photons (induced or stimulated emission). In thermodynamic equilibrium, the number of atoms in the lower state and those in the higher state must, on average, be constant; hence, the absorption and emission rates must be equal. Also, by arguments analogous to the derivation of Boltzmann statistics, the ratio of the two level populations is governed by the degeneracies of the two states, their energy difference, the Boltzmann constant and the system's temperature. From this, the relations between the rate constants are readily derived. These rate constants are collectively known as the Einstein coefficients. Einstein could not fully justify his rate equations, but claimed that it should be possible to calculate the coefficients once physicists had obtained "mechanics and electrodynamics modified to accommodate the quantum hypothesis". Not long thereafter, in 1926, Paul Dirac derived the rate constants by using a semiclassical approach, and, in 1927, succeeded in deriving all the rate constants from first principles within the framework of quantum theory. Dirac's work was the foundation of quantum electrodynamics, i.e., the quantization of the electromagnetic field itself. Dirac's approach is also called second quantization or quantum field theory; earlier quantum mechanical treatments only treat material particles as quantum mechanical, not the electromagnetic field. Einstein was troubled by the fact that his theory seemed incomplete, since it did not determine the direction of a spontaneously emitted photon. A probabilistic nature of light-particle motion was first considered by Newton in his treatment of birefringence and, more generally, of the splitting of light beams at interfaces into a transmitted beam and a reflected beam. 
Newton hypothesized that hidden variables in the light particle determined which of the two paths a single photon would take. Similarly, Einstein hoped for a more complete theory that would leave nothing to chance, beginning his separation from quantum mechanics. Ironically, Max Born's probabilistic interpretation of the wave function was inspired by Einstein's later work searching for a more complete theory. Quantum field theory Quantization of the electromagnetic field In 1910, Peter Debye derived Planck's law of black-body radiation from a relatively simple assumption. He decomposed the electromagnetic field in a cavity into its Fourier modes, and assumed that the energy in any mode was an integer multiple of , where is the frequency of the electromagnetic mode. Planck's law of black-body radiation follows immediately as a geometric sum. However, Debye's approach failed to give the correct formula for the energy fluctuations of black-body radiation, which were derived by Einstein in 1909. In 1925, Born, Heisenberg and Jordan reinterpreted Debye's concept in a key way. As may be shown classically, the Fourier modes of the electromagnetic field—a complete set of electromagnetic plane waves indexed by their wave vector k and polarization state—are equivalent to a set of uncoupled simple harmonic oscillators. Treated quantum mechanically, the energy levels of such oscillators are known to be , where is the oscillator frequency. The key new step was to identify an electromagnetic mode with energy as a state with photons, each of energy . This approach gives the correct energy fluctuation formula. Dirac took this one step further. He treated the interaction between a charge and an electromagnetic field as a small perturbation that induces transitions in the photon states, changing the numbers of photons in the modes, while conserving energy and momentum overall. Dirac was able to derive Einstein's and coefficients from first principles, and showed that the Bose–Einstein statistics of photons is a natural consequence of quantizing the electromagnetic field correctly (Bose's reasoning went in the opposite direction; he derived Planck's law of black-body radiation by assuming B–E statistics). In Dirac's time, it was not yet known that all bosons, including photons, must obey Bose–Einstein statistics. Dirac's second-order perturbation theory can involve virtual photons, transient intermediate states of the electromagnetic field; the static electric and magnetic interactions are mediated by such virtual photons. In such quantum field theories, the probability amplitude of observable events is calculated by summing over all possible intermediate steps, even ones that are unphysical; hence, virtual photons are not constrained to satisfy , and may have extra polarization states; depending on the gauge used, virtual photons may have three or four polarization states, instead of the two states of real photons. Although these transient virtual photons can never be observed, they contribute measurably to the probabilities of observable events. Indeed, such second-order and higher-order perturbation calculations can give apparently infinite contributions to the sum. Such unphysical results are corrected for using the technique of renormalization. Other virtual particles may contribute to the summation as well; for example, two photons may interact indirectly through virtual electron–positron pairs. 
Such photon–photon scattering (see two-photon physics), as well as electron–photon scattering, is meant to be one of the modes of operations of the planned particle accelerator, the International Linear Collider. In modern physics notation, the quantum state of the electromagnetic field is written as a Fock state, a tensor product of the states for each electromagnetic mode where represents the state in which photons are in the mode . In this notation, the creation of a new photon in mode (e.g., emitted from an atomic transition) is written as . This notation merely expresses the concept of Born, Heisenberg and Jordan described above, and does not add any physics. As a gauge boson The electromagnetic field can be understood as a gauge field, i.e., as a field that results from requiring that a gauge symmetry holds independently at every position in spacetime. For the electromagnetic field, this gauge symmetry is the Abelian U(1) symmetry of complex numbers of absolute value 1, which reflects the ability to vary the phase of a complex field without affecting observables or real valued functions made from it, such as the energy or the Lagrangian. The quanta of an Abelian gauge field must be massless, uncharged bosons, as long as the symmetry is not broken; hence, the photon is predicted to be massless, and to have zero electric charge and integer spin. The particular form of the electromagnetic interaction specifies that the photon must have spin ±1; thus, its helicity must be . These two spin components correspond to the classical concepts of right-handed and left-handed circularly polarized light. However, the transient virtual photons of quantum electrodynamics may also adopt unphysical polarization states. In the prevailing Standard Model of physics, the photon is one of four gauge bosons in the electroweak interaction; the other three are denoted W+, W− and Z0 and are responsible for the weak interaction. Unlike the photon, these gauge bosons have mass, owing to a mechanism that breaks their SU(2) gauge symmetry. The unification of the photon with W and Z gauge bosons in the electroweak interaction was accomplished by Sheldon Glashow, Abdus Salam and Steven Weinberg, for which they were awarded the 1979 Nobel Prize in physics. Physicists continue to hypothesize grand unified theories that connect these four gauge bosons with the eight gluon gauge bosons of quantum chromodynamics; however, key predictions of these theories, such as proton decay, have not been observed experimentally. Hadronic properties Measurements of the interaction between energetic photons and hadrons show that the interaction is much more intense than expected by the interaction of merely photons with the hadron's electric charge. Furthermore, the interaction of energetic photons with protons is similar to the interaction of photons with neutrons in spite of the fact that the electrical charge structures of protons and neutrons are substantially different. A theory called Vector Meson Dominance (VMD) was developed to explain this effect. According to VMD, the photon is a superposition of the pure electromagnetic photon, which interacts only with electric charges, and vector mesons, which mediate the residual nuclear force. However, if experimentally probed at very short distances, the intrinsic structure of the photon appears to have as components a charge-neutral flux of quarks and gluons, quasi-free according to asymptotic freedom in QCD. That flux is described by the photon structure function. 
A review by presented a comprehensive comparison of data with theoretical predictions. Contributions to the mass of a system The energy of a system that emits a photon is decreased by the energy of the photon as measured in the rest frame of the emitting system, which may result in a reduction in mass in the amount . Similarly, the mass of a system that absorbs a photon is increased by a corresponding amount. As an application, the energy balance of nuclear reactions involving photons is commonly written in terms of the masses of the nuclei involved, and terms of the form for the gamma photons (and for other relevant energies, such as the recoil energy of nuclei). This concept is applied in key predictions of quantum electrodynamics (QED, see above). In that theory, the mass of electrons (or, more generally, leptons) is modified by including the mass contributions of virtual photons, in a technique known as renormalization. Such "radiative corrections" contribute to a number of predictions of QED, such as the magnetic dipole moment of leptons, the Lamb shift, and the hyperfine structure of bound lepton pairs, such as muonium and positronium. Since photons contribute to the stress–energy tensor, they exert a gravitational attraction on other objects, according to the theory of general relativity. Conversely, photons are themselves affected by gravity; their normally straight trajectories may be bent by warped spacetime, as in gravitational lensing, and their frequencies may be lowered by moving to a higher gravitational potential, as in the Pound–Rebka experiment. However, these effects are not specific to photons; exactly the same effects would be predicted for classical electromagnetic waves. In matter Light that travels through transparent matter does so at a lower speed than c, the speed of light in vacuum. The factor by which the speed is decreased is called the refractive index of the material. In a classical wave picture, the slowing can be explained by the light inducing electric polarization in the matter, the polarized matter radiating new light, and that new light interfering with the original light wave to form a delayed wave. In a particle picture, the slowing can instead be described as a blending of the photon with quantum excitations of the matter to produce quasi-particles known as polaritons. Polaritons have a nonzero effective mass, which means that they cannot travel at c. Light of different frequencies may travel through matter at different speeds; this is called dispersion (not to be confused with scattering). In some cases, it can result in extremely slow speeds of light in matter. The effects of photon interactions with other quasi-particles may be observed directly in Raman scattering and Brillouin scattering. Photons can be scattered by matter. For example, photons scatter so many times in the solar radiative zone after leaving the core of the Sun that radiant energy takes about a million years to reach the convection zone. However, photons emitted from the sun's photosphere take only 8.3 minutes to reach Earth. Photons can also be absorbed by nuclei, atoms or molecules, provoking transitions between their energy levels. A classic example is the molecular transition of retinal (C20H28O), which is responsible for vision, as discovered in 1958 by Nobel laureate biochemist George Wald and co-workers. The absorption provokes a cis–trans isomerization that, in combination with other such transitions, is transduced into nerve impulses. 
The absorption of photons can even break chemical bonds, as in the photodissociation of chlorine; this is the subject of photochemistry. Technological applications Photons have many applications in technology. These examples are chosen to illustrate applications of photons per se, rather than general optical devices such as lenses, etc. that could operate under a classical theory of light. The laser is an important application and is discussed above under stimulated emission. Individual photons can be detected by several methods. The classic photomultiplier tube exploits the photoelectric effect: a photon of sufficient energy strikes a metal plate and knocks free an electron, initiating an ever-amplifying avalanche of electrons. Semiconductor charge-coupled device chips use a similar effect: an incident photon generates a charge on a microscopic capacitor that can be detected. Other detectors such as Geiger counters use the ability of photons to ionize gas molecules contained in the device, causing a detectable change of conductivity of the gas. Planck's energy formula is often used by engineers and chemists in design, both to compute the change in energy resulting from a photon absorption and to determine the frequency of the light emitted from a given photon emission. For example, the emission spectrum of a gas-discharge lamp can be altered by filling it with (mixtures of) gases with different electronic energy level configurations. Under some conditions, an energy transition can be excited by "two" photons that individually would be insufficient. This allows for higher resolution microscopy, because the sample absorbs energy only in the spectrum where two beams of different colors overlap significantly, which can be made much smaller than the excitation volume of a single beam (see two-photon excitation microscopy). Moreover, these photons cause less damage to the sample, since they are of lower energy. In some cases, two energy transitions can be coupled so that, as one system absorbs a photon, another nearby system "steals" its energy and re-emits a photon of a different frequency. This is the basis of fluorescence resonance energy transfer, a technique that is used in molecular biology to study the interaction of suitable proteins. Several different kinds of hardware random number generators involve the detection of single photons. In one example, for each bit in the random sequence that is to be produced, a photon is sent to a beam-splitter. In such a situation, there are two possible outcomes of equal probability. The actual outcome is used to determine whether the next bit in the sequence is "0" or "1". Quantum optics and computation Much research has been devoted to applications of photons in the field of quantum optics. Photons seem well-suited to be elements of an extremely fast quantum computer, and the quantum entanglement of photons is a focus of research. Nonlinear optical processes are another active research area, with topics such as two-photon absorption, self-phase modulation, modulational instability and optical parametric oscillators. However, such processes generally do not require the assumption of photons per se; they may often be modeled by treating atoms as nonlinear oscillators. The nonlinear process of spontaneous parametric down conversion is often used to produce single-photon states. Finally, photons are essential in some aspects of optical communication, especially for quantum cryptography. 
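Because Planck's formula E = hν is, as noted above, the relation engineers and chemists actually put numbers into, a minimal conversion sketch may help; it is only an illustration of moving between wavelength, frequency and photon energy, and the function names are ours rather than from any particular library.

```python
# Illustrative conversions between photon wavelength and energy,
# using E = h*nu and c = lambda*nu.  Constants are the exact SI defining values.
h = 6.62607015e-34    # Planck constant, J*s
c = 2.99792458e8      # speed of light in vacuum, m/s
eV = 1.602176634e-19  # joules per electronvolt

def photon_energy_ev(wavelength_m: float) -> float:
    """Photon energy in eV for a given vacuum wavelength."""
    return h * c / wavelength_m / eV

def emission_wavelength_m(delta_e_ev: float) -> float:
    """Wavelength emitted when a system drops by delta_e_ev (in eV)."""
    return h * c / (delta_e_ev * eV)

print(photon_energy_ev(589e-9))     # ~2.1 eV: yellow light, e.g. a sodium-lamp line
print(emission_wavelength_m(1.89))  # ~6.6e-7 m: red light for a 1.89 eV transition
```

The same two functions cover both directions mentioned in the text: the energy absorbed per photon of a given colour, and the colour emitted by a transition of a given energy.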
Two-photon physics studies interactions between photons, which are rare. In 2018, Massachusetts Institute of Technology researchers announced the discovery of bound photon triplets, which may involve polaritons.
Physical sciences
Physics
null
23542
https://en.wikipedia.org/wiki/Probability%20theory
Probability theory
Probability theory or probability calculus is the branch of mathematics concerned with probability. Although there are several different probability interpretations, probability theory treats the concept in a rigorous mathematical manner by expressing it through a set of axioms. Typically these axioms formalise probability in terms of a probability space, which assigns a measure taking values between 0 and 1, termed the probability measure, to a set of outcomes called the sample space. Any specified subset of the sample space is called an event. Central subjects in probability theory include discrete and continuous random variables, probability distributions, and stochastic processes (which provide mathematical abstractions of non-deterministic or uncertain processes or measured quantities that may either be single occurrences or evolve over time in a random fashion). Although it is not possible to perfectly predict random events, much can be said about their behavior. Two major results in probability theory describing such behaviour are the law of large numbers and the central limit theorem. As a mathematical foundation for statistics, probability theory is essential to many human activities that involve quantitative analysis of data. Methods of probability theory also apply to descriptions of complex systems given only partial knowledge of their state, as in statistical mechanics or sequential estimation. A great discovery of twentieth-century physics was the probabilistic nature of physical phenomena at atomic scales, described in quantum mechanics. History of probability The modern mathematical theory of probability has its roots in attempts to analyze games of chance by Gerolamo Cardano in the sixteenth century, and by Pierre de Fermat and Blaise Pascal in the seventeenth century (for example the "problem of points"). Christiaan Huygens published a book on the subject in 1657. In the 19th century, what is considered the classical definition of probability was completed by Pierre Laplace. Initially, probability theory mainly considered events, and its methods were mainly combinatorial. Eventually, analytical considerations compelled the incorporation of variables into the theory. This culminated in modern probability theory, on foundations laid by Andrey Nikolaevich Kolmogorov. Kolmogorov combined the notion of sample space, introduced by Richard von Mises, and measure theory and presented his axiom system for probability theory in 1933. This became the mostly undisputed axiomatic basis for modern probability theory; but, alternatives exist, such as the adoption of finite rather than countable additivity by Bruno de Finetti. Treatment Most introductions to probability theory treat discrete probability distributions and continuous probability distributions separately. The measure theory-based treatment of probability covers the discrete, continuous, a mix of the two, and more. Motivation Consider an experiment that can produce a number of outcomes. The set of all outcomes is called the sample space of the experiment. The power set of the sample space (or equivalently, the event space) is formed by considering all different collections of possible results. For example, rolling an honest die produces one of six possible results. One collection of possible results corresponds to getting an odd number. Thus, the subset {1,3,5} is an element of the power set of the sample space of dice rolls. These collections are called events. 
In this case, {1,3,5} is the event that the die falls on some odd number. If the results that actually occur fall in a given event, that event is said to have occurred. Probability is a way of assigning every "event" a value between zero and one, with the requirement that the event made up of all possible results (in our example, the event {1,2,3,4,5,6}) be assigned a value of one. To qualify as a probability distribution, the assignment of values must satisfy the requirement that if you look at a collection of mutually exclusive events (events that contain no common results, e.g., the events {1,6}, {3}, and {2,4} are all mutually exclusive), the probability that any of these events occurs is given by the sum of the probabilities of the events. The probability that any one of the events {1,6}, {3}, or {2,4} will occur is 5/6. This is the same as saying that the probability of event {1,2,3,4,6} is 5/6. This event encompasses the possibility of any number except five being rolled. The mutually exclusive event {5} has a probability of 1/6, and the event {1,2,3,4,5,6} has a probability of 1, that is, absolute certainty. When doing calculations using the outcomes of an experiment, it is necessary that all those elementary events have a number assigned to them. This is done using a random variable. A random variable is a function that assigns to each elementary event in the sample space a real number. This function is usually denoted by a capital letter. In the case of a die, the assignment of a number to certain elementary events can be done using the identity function. This does not always work. For example, when flipping a coin the two possible outcomes are "heads" and "tails". In this example, the random variable X could assign to the outcome "heads" the number "0" (X = 0) and to the outcome "tails" the number "1" (X = 1). Discrete probability distributions deal with events that occur in countable sample spaces. Examples: throwing dice, experiments with decks of cards, random walks, and tossing coins. Classical definition: Initially the probability of an event occurring was defined as the number of cases favorable for the event, over the number of total outcomes possible in an equiprobable sample space: see Classical definition of probability. For example, if the event is "occurrence of an even number when a die is rolled", the probability is given by 3/6 = 1/2, since 3 faces out of the 6 have even numbers and each face has the same probability of appearing. Modern definition: The modern definition starts with a finite or countable set called the sample space, which relates to the set of all possible outcomes in the classical sense, denoted by Ω. It is then assumed that for each element x ∈ Ω, an intrinsic "probability" value f(x) is attached, which satisfies the following properties: f(x) lies in [0,1] for all x ∈ Ω, and the sum of f(x) over all x ∈ Ω equals 1. That is, the probability function f(x) lies between zero and one for every value of x in the sample space Ω, and the sum of f(x) over all values x in the sample space Ω is equal to 1. An event is defined as any subset E of the sample space Ω. The probability of the event E is defined as P(E) = Σ f(x), the sum running over all x in E. So, the probability of the entire sample space is 1, and the probability of the null event is 0. The function f(x) mapping a point in the sample space to the "probability" value is called a probability mass function, abbreviated as pmf. Continuous probability distributions deal with events that occur in a continuous sample space. Classical definition: The classical definition breaks down when confronted with the continuous case. See Bertrand's paradox. 
Modern definition: If the sample space of a random variable X is the set of real numbers (ℝ) or a subset thereof, then a function called the cumulative distribution function (CDF) F exists, defined by F(x) = P(X ≤ x). That is, F(x) returns the probability that X will be less than or equal to x. The CDF necessarily satisfies the following properties: F is a monotonically non-decreasing, right-continuous function; lim_{x→−∞} F(x) = 0; and lim_{x→∞} F(x) = 1. The random variable X is said to have a continuous probability distribution if the corresponding CDF F is continuous. If F is absolutely continuous, then its derivative f(x) = dF(x)/dx exists almost everywhere and integrating the derivative gives us the CDF back again. In this case, the random variable X is said to have a probability density function (PDF) or simply density f(x). For a set E ⊆ ℝ, the probability of the random variable X being in E is P(X ∈ E) = ∫_E dF(x). In case the PDF exists, this can be written as P(X ∈ E) = ∫_E f(x) dx. Whereas the PDF exists only for continuous random variables, the CDF exists for all random variables (including discrete random variables) that take values in ℝ. These concepts can be generalized for multidimensional cases on ℝ^n and other continuous sample spaces. Measure-theoretic probability theory The utility of the measure-theoretic treatment of probability is that it unifies the discrete and the continuous cases, and makes the difference a question of which measure is used. Furthermore, it covers distributions that are neither discrete nor continuous nor mixtures of the two. An example of such distributions could be a mix of discrete and continuous distributions—for example, a random variable that is 0 with probability 1/2, and takes a random value from a normal distribution with probability 1/2. It can still be studied to some extent by considering it to have a PDF of (δ(x) + φ(x))/2, where δ(x) is the Dirac delta function and φ(x) the normal density. Other distributions may not even be a mix; for example, the Cantor distribution has no positive probability for any single point, neither does it have a density. The modern approach to probability theory solves these problems using measure theory to define the probability space: Given any set Ω (also called the sample space) and a σ-algebra ℱ on it, a measure P defined on ℱ is called a probability measure if P(Ω) = 1. If ℱ is the Borel σ-algebra on the set of real numbers, then there is a unique probability measure on ℱ for any CDF, and vice versa. The measure corresponding to a CDF is said to be induced by the CDF. This measure coincides with the pmf for discrete variables and the PDF for continuous variables, making the measure-theoretic approach free of fallacies. The probability of a set E in the σ-algebra ℱ is defined as P(E) = ∫_{ω∈E} μ_F(dω), where the integration is with respect to the measure μ_F induced by F. Along with providing better understanding and unification of discrete and continuous probabilities, measure-theoretic treatment also allows us to work on probabilities outside ℝ^n, as in the theory of stochastic processes. For example, to study Brownian motion, probability is defined on a space of functions. When it is convenient to work with a dominating measure, the Radon–Nikodym theorem is used to define a density as the Radon–Nikodym derivative of the probability distribution of interest with respect to this dominating measure. Discrete densities are usually defined as this derivative with respect to a counting measure over the set of all possible outcomes. Densities for absolutely continuous distributions are usually defined as this derivative with respect to the Lebesgue measure. If a theorem can be proved in this general setting, it holds for both discrete and continuous distributions as well as others; separate proofs are not required for discrete and continuous distributions. 
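As an informal check of the CDF–PDF relationship described earlier in this section, the following Python sketch numerically integrates an exponential density (λ = 1, chosen only for illustration) and compares the result with the closed-form CDF. The code is a sketch, not a definitive implementation.

```python
# Sketch: for an absolutely continuous variable, integrating the density recovers the CDF.
# Exponential(λ = 1) is used as the example; a trapezoidal sum stands in for the integral.
import math

lam = 1.0
pdf = lambda t: lam * math.exp(-lam * t)          # f(t) = λ e^{-λt}, t >= 0
cdf = lambda x: 1 - math.exp(-lam * x)            # F(x) = 1 - e^{-λx}

def integral(f, a, b, n=10_000):
    """Crude trapezoidal approximation of the integral of f over [a, b]."""
    h = (b - a) / n
    return h * (f(a) / 2 + sum(f(a + i * h) for i in range(1, n)) + f(b) / 2)

x = 2.0
print(integral(pdf, 0.0, x))    # ≈ 0.8647, the area under the density up to x
print(cdf(x))                   # 1 - e^{-2} ≈ 0.8647, matching the integral of the PDF
```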
Classical probability distributions Certain random variables occur very often in probability theory because they well describe many natural or physical processes. Their distributions, therefore, have gained special importance in probability theory. Some fundamental discrete distributions are the discrete uniform, Bernoulli, binomial, negative binomial, Poisson and geometric distributions. Important continuous distributions include the continuous uniform, normal, exponential, gamma and beta distributions. Convergence of random variables In probability theory, there are several notions of convergence for random variables. They are listed below in the order of strength, i.e., any subsequent notion of convergence in the list implies convergence according to all of the preceding notions. Weak convergence A sequence of random variables X_1, X_2, ... converges weakly to the random variable X if their respective CDFs F_1, F_2, ... converge to the CDF F of X, wherever F is continuous. Weak convergence is also called convergence in distribution. Most common shorthand notation: X_n →(d) X. Convergence in probability The sequence of random variables X_1, X_2, ... is said to converge towards the random variable X in probability if lim_{n→∞} P(|X_n − X| ≥ ε) = 0 for every ε > 0. Most common shorthand notation: X_n →(P) X. Strong convergence The sequence of random variables X_1, X_2, ... is said to converge towards the random variable X strongly if P(lim_{n→∞} X_n = X) = 1. Strong convergence is also known as almost sure convergence. Most common shorthand notation: X_n →(a.s.) X. As the names indicate, weak convergence is weaker than strong convergence. In fact, strong convergence implies convergence in probability, and convergence in probability implies weak convergence. The reverse statements are not always true. Law of large numbers Common intuition suggests that if a fair coin is tossed many times, then roughly half of the time it will turn up heads, and the other half it will turn up tails. Furthermore, the more often the coin is tossed, the more likely it should be that the ratio of the number of heads to the number of tails will approach unity. Modern probability theory provides a formal version of this intuitive idea, known as the law of large numbers. This law is remarkable because it is not assumed in the foundations of probability theory, but instead emerges from these foundations as a theorem. Since it links theoretically derived probabilities to their actual frequency of occurrence in the real world, the law of large numbers is considered as a pillar in the history of statistical theory and has had widespread influence. The law of large numbers (LLN) states that the sample average X̄_n = (X_1 + ⋯ + X_n)/n of a sequence of independent and identically distributed random variables X_i converges towards their common expectation (expected value) μ, provided that the expectation of |X_i| is finite. It is the different forms of convergence of random variables that separate the weak and the strong law of large numbers: Weak law: X̄_n →(P) μ for n → ∞; Strong law: X̄_n →(a.s.) μ for n → ∞. It follows from the LLN that if an event of probability p is observed repeatedly during independent experiments, the ratio of the observed frequency of that event to the total number of repetitions converges towards p. For example, if Y_1, Y_2, ... are independent Bernoulli random variables taking values 1 with probability p and 0 with probability 1 − p, then E(Y_i) = p for all i, so that Ȳ_n converges to p almost surely. Central limit theorem The central limit theorem (CLT) explains the ubiquitous occurrence of the normal distribution in nature, and this theorem, according to David Williams, "is one of the great results of mathematics." 
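A small simulation sketch of the law of large numbers stated above (illustrative Python; p = 0.3 is an arbitrary choice): the sample average of i.i.d. Bernoulli draws settles near p as the number of trials grows.

```python
# Sketch of the law of large numbers: the running mean of i.i.d. Bernoulli(p)
# variables approaches p as the number of trials grows.
import random

random.seed(0)
p = 0.3
for n in (10, 1_000, 100_000):
    draws = [1 if random.random() < p else 0 for _ in range(n)]
    print(n, sum(draws) / n)    # sample average; tends towards p = 0.3 as n increases
```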
The theorem states that the average of many independent and identically distributed random variables with finite variance tends towards a normal distribution irrespective of the distribution followed by the original random variables. Formally, let X_1, X_2, ... be independent random variables with mean μ and variance σ² > 0. Then the sequence of random variables Z_n = (Σ_{i=1}^{n} (X_i − μ)) / (σ√n) converges in distribution to a standard normal random variable. For some classes of random variables, the classic central limit theorem works rather fast, as illustrated in the Berry–Esseen theorem. For example, the distributions with finite first, second, and third moments from the exponential family; on the other hand, for some random variables of the heavy tail and fat tail variety, it works very slowly or may not work at all: in such cases one may use the Generalized Central Limit Theorem (GCLT).
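A simulation sketch of the central limit theorem (illustrative Python; the uniform summands and constants are arbitrary choices, not from the article): the standardised sums Z_n behave approximately like a standard normal variable even though the summands are not normal.

```python
# Sketch of the CLT: standardised sums Z_n = (sum(X_i) - n*mu) / (sigma * sqrt(n)) of
# i.i.d. uniform(0, 1) variables put roughly 68% of their mass within one unit of 0,
# as a standard normal variable would.
import math, random

random.seed(1)
mu, sigma, n, reps = 0.5, math.sqrt(1 / 12), 30, 20_000   # mean and sd of uniform(0, 1)
z = []
for _ in range(reps):
    s = sum(random.random() for _ in range(n))
    z.append((s - n * mu) / (sigma * math.sqrt(n)))

print(sum(abs(v) <= 1 for v in z) / reps)   # ≈ 0.683, the standard normal P(|Z| <= 1)
```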
Mathematics
Statistics and probability
null
23543
https://en.wikipedia.org/wiki/Probability%20distribution
Probability distribution
In probability theory and statistics, a probability distribution is the mathematical function that gives the probabilities of occurrence of possible outcomes for an experiment. It is a mathematical description of a random phenomenon in terms of its sample space and the probabilities of events (subsets of the sample space). For instance, if X is used to denote the outcome of a coin toss ("the experiment"), then the probability distribution of X would take the value 0.5 (1 in 2 or 1/2) for X = heads, and 0.5 for X = tails (assuming that the coin is fair). More commonly, probability distributions are used to compare the relative occurrence of many different random values. Probability distributions can be defined in different ways and for discrete or for continuous variables. Distributions with special properties or for especially important applications are given specific names. Introduction A probability distribution is a mathematical description of the probabilities of events, subsets of the sample space. The sample space, often represented in notation by Ω, is the set of all possible outcomes of a random phenomenon being observed. The sample space may be any set: a set of real numbers, a set of descriptive labels, a set of vectors, a set of arbitrary non-numerical values, etc. For example, the sample space of a coin flip could be Ω = {heads, tails}. To define probability distributions for the specific case of random variables (so the sample space can be seen as a numeric set), it is common to distinguish between discrete and absolutely continuous random variables. In the discrete case, it is sufficient to specify a probability mass function p assigning a probability to each possible outcome (e.g. when throwing a fair die, each of the six digits 1 to 6, corresponding to the number of dots on the die, has the probability 1/6). The probability of an event is then defined to be the sum of the probabilities of all outcomes that satisfy the event; for example, the probability of the event "the die rolls an even value" is p(2) + p(4) + p(6) = 1/6 + 1/6 + 1/6 = 1/2. In contrast, when a random variable takes values from a continuum then, by convention, any individual outcome is assigned probability zero. For such continuous random variables, only events that include infinitely many outcomes such as intervals have probability greater than 0. For example, consider measuring the weight of a piece of ham in the supermarket, and assume the scale can provide arbitrarily many digits of precision. Then, the probability that it weighs exactly 500 g must be zero because no matter how high the level of precision chosen, it cannot be assumed that there are no non-zero decimal digits in the remaining omitted digits ignored by the precision level. However, for the same use case, it is possible to meet quality control requirements such as that a package of "500 g" of ham must weigh between 490 g and 510 g with at least 98% probability. This is possible because this measurement does not require as much precision from the underlying equipment. Absolutely continuous probability distributions can be described in several ways. The probability density function describes the infinitesimal probability of any given value, and the probability that the outcome lies in a given interval can be computed by integrating the probability density function over that interval. 
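As a rough sketch of the two cases in this introduction (illustrative Python; the Normal(500 g, 5 g) model for the scale is a made-up assumption, not from the article): the die is handled with a pmf, while the ham weight is handled with a density, where any single exact value has probability 0 but intervals do not.

```python
# Discrete case: a pmf. Continuous case: interval probabilities from a (hypothetical) CDF.
import math

pmf = {k: 1/6 for k in range(1, 7)}
print(sum(pmf[k] for k in (2, 4, 6)))                # P(even) = 1/2

def normal_cdf(x, mu=500.0, sd=5.0):
    """CDF of a hypothetical Normal(500 g, 5 g) model for the measured weight."""
    return 0.5 * (1 + math.erf((x - mu) / (sd * math.sqrt(2))))

print(normal_cdf(510) - normal_cdf(490))             # P(490 g <= weight <= 510 g) ≈ 0.954
print(normal_cdf(500) - normal_cdf(500))             # P(weight = exactly 500 g) = 0
```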
An alternative description of the distribution is by means of the cumulative distribution function, which describes the probability that the random variable is no larger than a given value (i.e., F(x) = P(X ≤ x) for some x). The cumulative distribution function is the area under the probability density function from −∞ to x, as shown in figure 1. General probability definition Let (Ω, ℱ, P) be a probability space, (E, Σ) be a measurable space, and X: Ω → E be an E-valued random variable. Then the probability distribution of X is the pushforward measure X_*P of the probability measure P onto (E, Σ) induced by X. Explicitly, this pushforward measure on (E, Σ) is given by X_*P(S) = P(X⁻¹(S)) for S ∈ Σ. Any probability distribution is a probability measure on (E, Σ) (in general different from P, unless X happens to be the identity map). A probability distribution can be described in various forms, such as by a probability mass function or a cumulative distribution function. One of the most general descriptions, which applies for absolutely continuous and discrete variables, is by means of a probability function whose input space is a σ-algebra, and gives a real number probability as its output, particularly, a number in [0, 1]. The probability function can take as argument subsets of the sample space itself, as in the coin toss example, where the function was defined so that P(heads) = 0.5 and P(tails) = 0.5. However, because of the widespread use of random variables, which transform the sample space into a set of numbers (e.g., ℝ, ℕ), it is more common to study probability distributions whose arguments are subsets of these particular kinds of sets (number sets), and all probability distributions discussed in this article are of this type. It is common to denote as P(X ∈ E) the probability that a certain value of the variable X belongs to a certain event E. The above probability function only characterizes a probability distribution if it satisfies all the Kolmogorov axioms, that is: P(X ∈ E) ≥ 0 for every event E, so the probability is non-negative; P(X ∈ E) ≤ 1 for every event E, so no probability exceeds 1; and P(X ∈ ⋃_i E_i) = Σ_i P(X ∈ E_i) for any countable disjoint family of sets {E_i}. The concept of probability function is made more rigorous by defining it as the element of a probability space (Ω, 𝒜, P), where Ω is the set of possible outcomes, 𝒜 is the set of all subsets E ⊆ Ω whose probability can be measured, and P is the probability function, or probability measure, that assigns a probability to each of these measurable subsets E ∈ 𝒜. Probability distributions usually belong to one of two classes. A discrete probability distribution is applicable to the scenarios where the set of possible outcomes is discrete (e.g. a coin toss, a roll of a die) and the probabilities are encoded by a discrete list of the probabilities of the outcomes; in this case the discrete probability distribution is known as probability mass function. On the other hand, absolutely continuous probability distributions are applicable to scenarios where the set of possible outcomes can take on values in a continuous range (e.g. real numbers), such as the temperature on a given day. In the absolutely continuous case, probabilities are described by a probability density function, and the probability distribution is by definition the integral of the probability density function. The normal distribution is a commonly encountered absolutely continuous probability distribution. More complex experiments, such as those involving stochastic processes defined in continuous time, may demand the use of more general probability measures. 
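A minimal sketch checking the Kolmogorov axioms listed above for the coin-toss measure (illustrative Python; only the finite case of countable additivity is checked):

```python
# Sketch: checking the Kolmogorov axioms for a small discrete probability measure
# (non-negativity, nothing above 1, additivity over disjoint events, total mass 1).
weights = {"heads": 0.5, "tails": 0.5}               # the coin-toss measure from the text

def P(event):
    return sum(weights[w] for w in event)

assert all(0 <= P({w}) <= 1 for w in weights)        # 0 <= P(E) <= 1
assert abs(P(set(weights)) - 1) < 1e-12              # P(Ω) = 1
disjoint = [{"heads"}, {"tails"}]                    # a disjoint family of events
assert abs(P(set().union(*disjoint)) - sum(P(e) for e in disjoint)) < 1e-12  # additivity
print("axioms hold for this toy measure")
```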
A probability distribution whose sample space is one-dimensional (for example real numbers, list of labels, ordered labels or binary) is called univariate, while a distribution whose sample space is a vector space of dimension 2 or more is called multivariate. A univariate distribution gives the probabilities of a single random variable taking on various different values; a multivariate distribution (a joint probability distribution) gives the probabilities of a random vector – a list of two or more random variables – taking on various combinations of values. Important and commonly encountered univariate probability distributions include the binomial distribution, the hypergeometric distribution, and the normal distribution. A commonly encountered multivariate distribution is the multivariate normal distribution. Besides the probability function, the cumulative distribution function, the probability mass function and the probability density function, the moment generating function and the characteristic function also serve to identify a probability distribution, as they uniquely determine an underlying cumulative distribution function. Terminology Some key concepts and terms, widely used in the literature on the topic of probability distributions, are listed below. Basic terms Random variable: takes values from a sample space; probabilities describe which values and set of values are taken more likely. Event: set of possible values (outcomes) of a random variable that occurs with a certain probability. Probability function or probability measure: describes the probability that the event occurs. Cumulative distribution function: function evaluating the probability that will take a value less than or equal to for a random variable (only for real-valued random variables). Quantile function: the inverse of the cumulative distribution function. Gives such that, with probability , will not exceed . Discrete probability distributions Discrete probability distribution: for many random variables with finitely or countably infinitely many values. Probability mass function (pmf): function that gives the probability that a discrete random variable is equal to some value. Frequency distribution: a table that displays the frequency of various outcomes . Relative frequency distribution: a frequency distribution where each value has been divided (normalized) by a number of outcomes in a sample (i.e. sample size). Categorical distribution: for discrete random variables with a finite set of values. Absolutely continuous probability distributions Absolutely continuous probability distribution: for many random variables with uncountably many values. Probability density function (pdf) or probability density: function whose value at any given sample (or point) in the sample space (the set of possible values taken by the random variable) can be interpreted as providing a relative likelihood that the value of the random variable would equal that sample. Related terms Support: set of values that can be assumed with non-zero probability (or probability density in the case of a continuous distribution) by the random variable. For a random variable , it is sometimes denoted as . Tail: the regions close to the bounds of the random variable, if the pmf or pdf are relatively low therein. Usually has the form , or a union thereof. Head: the region where the pmf or pdf is relatively high. Usually has the form . 
Expected value or mean: the weighted average of the possible values, using their probabilities as their weights; or the continuous analog thereof. Median: the value such that the set of values less than the median, and the set greater than the median, each have probabilities no greater than one-half. Mode: for a discrete random variable, the value with highest probability; for an absolutely continuous random variable, a location at which the probability density function has a local peak. Quantile: the q-quantile is the value x such that P(X ≤ x) = q. Variance: the second moment of the pmf or pdf about the mean; an important measure of the dispersion of the distribution. Standard deviation: the square root of the variance, and hence another measure of dispersion. Symmetry: a property of some distributions in which the portion of the distribution to the left of a specific value (usually the median) is a mirror image of the portion to its right. Skewness: a measure of the extent to which a pmf or pdf "leans" to one side of its mean. The third standardized moment of the distribution. Kurtosis: a measure of the "fatness" of the tails of a pmf or pdf. The fourth standardized moment of the distribution. Cumulative distribution function In the special case of a real-valued random variable, the probability distribution can equivalently be represented by a cumulative distribution function instead of a probability measure. The cumulative distribution function F of a random variable X with regard to a probability distribution is defined as F(x) = P(X ≤ x). The cumulative distribution function of any real-valued random variable has the properties: F is non-decreasing; F is right-continuous; 0 ≤ F(x) ≤ 1; lim_{x→−∞} F(x) = 0 and lim_{x→∞} F(x) = 1; and P(a < X ≤ b) = F(b) − F(a) for all a < b. Conversely, any function that satisfies the first four of the properties above is the cumulative distribution function of some probability distribution on the real numbers. Any probability distribution can be decomposed as the mixture of a discrete, an absolutely continuous and a singular continuous distribution, and thus any cumulative distribution function admits a decomposition as the convex sum of the three according cumulative distribution functions. Discrete probability distribution A discrete probability distribution is the probability distribution of a random variable that can take on only a countable number of values (almost surely) which means that the probability of any event E can be expressed as a (finite or countably infinite) sum: P(X ∈ E) = Σ_{ω ∈ A ∩ E} P(X = ω), where A is a countable set with P(X ∈ A) = 1. Thus the discrete random variables (i.e. random variables whose probability distribution is discrete) are exactly those with a probability mass function p(x) = P(X = x). In the case where the range of values is countably infinite, these values have to decline to zero fast enough for the probabilities to add up to 1. For example, if p(n) = 1/2^n for n = 1, 2, ..., the sum of probabilities would be 1/2 + 1/4 + 1/8 + ⋯ = 1. Well-known discrete probability distributions used in statistical modeling include the Poisson distribution, the Bernoulli distribution, the binomial distribution, the geometric distribution, the negative binomial distribution and categorical distribution. When a sample (a set of observations) is drawn from a larger population, the sample points have an empirical distribution that is discrete, and which provides information about the population distribution. Additionally, the discrete uniform distribution is commonly used in computer programs that make equal-probability random selections between a number of choices. 
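An illustrative Python sketch computing several of the summary quantities defined above for a fair-die pmf. The quantile convention used (the smallest x with F(x) ≥ q) is one common choice and is an assumption of this sketch.

```python
# Sketch of the summary quantities from the terminology list, for a fair-die pmf.
# Fractions keep the running sums in the quantile function exact.
import math
from fractions import Fraction

pmf = {k: Fraction(1, 6) for k in range(1, 7)}

mean = sum(x * p for x, p in pmf.items())                  # expected value: 7/2
var = sum((x - mean) ** 2 * p for x, p in pmf.items())     # variance: 35/12
std = math.sqrt(var)                                       # standard deviation ≈ 1.708

def quantile(q):
    """Smallest value x with F(x) >= q (one common convention for the quantile function)."""
    acc = Fraction(0)
    for x in sorted(pmf):
        acc += pmf[x]
        if acc >= q:
            return x

print(mean, var, round(std, 3))    # 7/2 35/12 1.708
print(quantile(Fraction(1, 2)))    # a median of the die roll: 3
```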
Cumulative distribution function A real-valued discrete random variable can equivalently be defined as a random variable whose cumulative distribution function increases only by jump discontinuities—that is, its cdf increases only where it "jumps" to a higher value, and is constant in intervals without jumps. The points where jumps occur are precisely the values which the random variable may take. Thus the cumulative distribution function has the form F(x) = P(X ≤ x) = Σ_{ω ≤ x} p(ω). The points where the cdf jumps always form a countable set; this may be any countable set and thus may even be dense in the real numbers. Dirac delta representation A discrete probability distribution is often represented with Dirac measures, the probability distributions of deterministic random variables. For any outcome ω, let δ_ω be the Dirac measure concentrated at ω. Given a discrete probability distribution, there is a countable set A with P(X ∈ A) = 1 and a probability mass function p. If E is any event, then P(X ∈ E) = Σ_{ω ∈ A} p(ω) δ_ω(E), or in short, P_X = Σ_{ω ∈ A} p(ω) δ_ω. Similarly, discrete distributions can be represented with the Dirac delta function as a generalized probability density function f, where f(x) = Σ_{ω ∈ A} p(ω) δ(x − ω), which means P(X ∈ E) = ∫_E f(x) dx for any event E. Indicator-function representation For a discrete random variable X, let u_0, u_1, ... be the values it can take with non-zero probability. Denote Ω_i = X⁻¹(u_i) = {ω : X(ω) = u_i}, i = 0, 1, 2, ... These are disjoint sets, and for such sets P(⋃_i Ω_i) = Σ_i P(Ω_i) = Σ_i P(X = u_i) = 1. It follows that the probability that X takes any value except for u_0, u_1, ... is zero, and thus one can write X as X(ω) = Σ_i u_i 1_{Ω_i}(ω) except on a set of probability zero, where 1_{Ω_i} is the indicator function of Ω_i. This may serve as an alternative definition of discrete random variables. One-point distribution A special case is the discrete distribution of a random variable that can take on only one fixed value; in other words, it is a deterministic distribution. Expressed formally, the random variable X has a one-point distribution if it has a possible outcome x such that P(X = x) = 1. All other possible outcomes then have probability 0. Its cumulative distribution function jumps immediately from 0 to 1. Absolutely continuous probability distribution An absolutely continuous probability distribution is a probability distribution on the real numbers with uncountably many possible values, such as a whole interval in the real line, and where the probability of any event can be expressed as an integral. More precisely, a real random variable X has an absolutely continuous probability distribution if there is a function f: ℝ → [0, ∞) such that for each interval [a, b] ⊂ ℝ the probability of X belonging to [a, b] is given by the integral of f over [a, b]: P(a ≤ X ≤ b) = ∫_a^b f(x) dx. This is the definition of a probability density function, so that absolutely continuous probability distributions are exactly those with a probability density function. In particular, the probability for X to take any single value a (that is, P(X = a)) is zero, because an integral with coinciding upper and lower limits is always equal to zero. If the interval [a, b] is replaced by any measurable set A, the according equality still holds: P(X ∈ A) = ∫_A f(x) dx. An absolutely continuous random variable is a random variable whose probability distribution is absolutely continuous. There are many examples of absolutely continuous probability distributions: normal, uniform, chi-squared, and others. Cumulative distribution function Absolutely continuous probability distributions as defined above are precisely those with an absolutely continuous cumulative distribution function. In this case, the cumulative distribution function F has the form F(x) = P(X ≤ x) = ∫_{−∞}^x f(t) dt, where f is a density of the random variable X with regard to the distribution P. 
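A short sketch of the step-function CDF of a discrete variable described at the start of this section (the three-point pmf is made up for illustration): the CDF is constant between the possible values and jumps by p(ω) at each of them.

```python
# Sketch: the CDF of a discrete variable is F(x) = sum of p(ω) over ω <= x,
# a step function that is constant between jumps.
pmf = {1: 0.25, 2: 0.5, 4: 0.25}          # a small made-up discrete distribution

def F(x):
    return sum(p for w, p in pmf.items() if w <= x)

for x in (0.5, 1, 1.5, 2, 3, 4, 10):
    print(x, F(x))                        # 0, 0.25, 0.25, 0.75, 0.75, 1.0, 1.0
```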
Note on terminology: Absolutely continuous distributions ought to be distinguished from continuous distributions, which are those having a continuous cumulative distribution function. Every absolutely continuous distribution is a continuous distribution but the inverse is not true, there exist singular distributions, which are neither absolutely continuous nor discrete nor a mixture of those, and do not have a density. An example is given by the Cantor distribution. Some authors however use the term "continuous distribution" to denote all distributions whose cumulative distribution function is absolutely continuous, i.e. refer to absolutely continuous distributions as continuous distributions. For a more general definition of density functions and the equivalent absolutely continuous measures see absolutely continuous measure. Kolmogorov definition In the measure-theoretic formalization of probability theory, a random variable is defined as a measurable function from a probability space to a measurable space . Given that probabilities of events of the form satisfy Kolmogorov's probability axioms, the probability distribution of is the image measure of , which is a probability measure on satisfying . Other kinds of distributions Absolutely continuous and discrete distributions with support on or are extremely useful to model a myriad of phenomena, since most practical distributions are supported on relatively simple subsets, such as hypercubes or balls. However, this is not always the case, and there exist phenomena with supports that are actually complicated curves within some space or similar. In these cases, the probability distribution is supported on the image of such curve, and is likely to be determined empirically, rather than finding a closed formula for it. One example is shown in the figure to the right, which displays the evolution of a system of differential equations (commonly known as the Rabinovich–Fabrikant equations) that can be used to model the behaviour of Langmuir waves in plasma. When this phenomenon is studied, the observed states from the subset are as indicated in red. So one could ask what is the probability of observing a state in a certain position of the red subset; if such a probability exists, it is called the probability measure of the system. This kind of complicated support appears quite frequently in dynamical systems. It is not simple to establish that the system has a probability measure, and the main problem is the following. Let be instants in time and a subset of the support; if the probability measure exists for the system, one would expect the frequency of observing states inside set would be equal in interval and , which might not happen; for example, it could oscillate similar to a sine, , whose limit when does not converge. Formally, the measure exists only if the limit of the relative frequency converges when the system is observed into the infinite future. The branch of dynamical systems that studies the existence of a probability measure is ergodic theory. Note that even in these cases, the probability distribution, if it exists, might still be termed "absolutely continuous" or "discrete" depending on whether the support is uncountable or countable, respectively. Random number generation Most algorithms are based on a pseudorandom number generator that produces numbers that are uniformly distributed in the half-open interval . 
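As a rough sketch of how such uniform variates are turned into other distributions (the Bernoulli and inverse-CDF exponential constructions are spelled out in the passage below; the parameter values here are arbitrary and the code is illustrative only):

```python
# Sketch of two standard constructions from a uniform variate U on [0, 1):
# a Bernoulli(p) variable, and an exponential variable via F^{-1}(u) = -ln(1 - u)/λ.
import math, random

random.seed(2)

def bernoulli(p):
    u = random.random()                  # U ~ uniform on [0, 1)
    return 1 if u < p else 0             # P(X = 1) = P(U < p) = p

def exponential(lam):
    u = random.random()
    return -math.log(1 - u) / lam        # X = F^{-1}(U) has CDF 1 - e^{-λx}

n = 100_000
print(sum(bernoulli(0.25) for _ in range(n)) / n)     # ≈ 0.25
print(sum(exponential(2.0) for _ in range(n)) / n)    # ≈ 1/λ = 0.5, the exponential mean
```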
These random variates are then transformed via some algorithm to create a new random variate having the required probability distribution. With this source of uniform pseudo-randomness, realizations of any random variable can be generated. For example, suppose U has a uniform distribution between 0 and 1. To construct a random Bernoulli variable X for some 0 < p < 1, we define X = 1 if U < p and X = 0 otherwise, so that P(X = 1) = P(U < p) = p and P(X = 0) = P(U ≥ p) = 1 − p. This random variable X has a Bernoulli distribution with parameter p. This is a transformation of a discrete random variable. For a distribution function F of an absolutely continuous random variable, an absolutely continuous random variable must be constructed. F⁻¹, an inverse function of F, relates to the uniform variable U: setting X = F⁻¹(U) gives P(X ≤ x) = P(U ≤ F(x)) = F(x). For example, suppose a random variable X that has an exponential distribution F(x) = 1 − e^{−λx} must be constructed. Solving F(x) = u gives x = −ln(1 − u)/λ, so if U has a uniform(0, 1) distribution, then the random variable X defined by X = −ln(1 − U)/λ has an exponential distribution with rate λ. A frequent problem in statistical simulations (the Monte Carlo method) is the generation of pseudo-random numbers that are distributed in a given way. Common probability distributions and their applications The concept of the probability distribution and the random variables which they describe underlies the mathematical discipline of probability theory, and the science of statistics. There is spread or variability in almost any value that can be measured in a population (e.g. height of people, durability of a metal, sales growth, traffic flow, etc.); almost all measurements are made with some intrinsic error; in physics, many processes are described probabilistically, from the kinetic properties of gases to the quantum mechanical description of fundamental particles. For these and many other reasons, simple numbers are often inadequate for describing a quantity, while probability distributions are often more appropriate. The following is a list of some of the most common probability distributions, grouped by the type of process that they are related to. For a more complete list, see list of probability distributions, which groups by the nature of the outcome being considered (discrete, absolutely continuous, multivariate, etc.) All of the univariate distributions below are singly peaked; that is, it is assumed that the values cluster around a single point. In practice, actually observed quantities may cluster around multiple values. Such quantities can be modeled using a mixture distribution. Linear growth (e.g. errors, offsets) Normal distribution (Gaussian distribution), for a single such quantity; the most commonly used absolutely continuous distribution Exponential growth (e.g. prices, incomes, populations) Log-normal distribution, for a single such quantity whose log is normally distributed Pareto distribution, for a single such quantity whose log is exponentially distributed; the prototypical power law distribution Uniformly distributed quantities Discrete uniform distribution, for a finite set of values (e.g. the outcome of a fair die) Continuous uniform distribution, for absolutely continuously distributed values Bernoulli trials (yes/no events, with a given probability) Basic distributions: Bernoulli distribution, for the outcome of a single Bernoulli trial (e.g. success/failure, yes/no) Binomial distribution, for the number of "positive occurrences" (e.g. successes, yes votes, etc.) 
given a fixed total number of independent occurrences Negative binomial distribution, for binomial-type observations but where the quantity of interest is the number of failures before a given number of successes occurs Geometric distribution, for binomial-type observations but where the quantity of interest is the number of failures before the first success; a special case of the negative binomial distribution Related to sampling schemes over a finite population: Hypergeometric distribution, for the number of "positive occurrences" (e.g. successes, yes votes, etc.) given a fixed number of total occurrences, using sampling without replacement Beta-binomial distribution, for the number of "positive occurrences" (e.g. successes, yes votes, etc.) given a fixed number of total occurrences, sampling using a Pólya urn model (in some sense, the "opposite" of sampling without replacement) Categorical outcomes (events with possible outcomes) Categorical distribution, for a single categorical outcome (e.g. yes/no/maybe in a survey); a generalization of the Bernoulli distribution Multinomial distribution, for the number of each type of categorical outcome, given a fixed number of total outcomes; a generalization of the binomial distribution Multivariate hypergeometric distribution, similar to the multinomial distribution, but using sampling without replacement; a generalization of the hypergeometric distribution Poisson process (events that occur independently with a given rate) Poisson distribution, for the number of occurrences of a Poisson-type event in a given period of time Exponential distribution, for the time before the next Poisson-type event occurs Gamma distribution, for the time before the next k Poisson-type events occur Absolute values of vectors with normally distributed components Rayleigh distribution, for the distribution of vector magnitudes with Gaussian distributed orthogonal components. Rayleigh distributions are found in RF signals with Gaussian real and imaginary components. Rice distribution, a generalization of the Rayleigh distributions for where there is a stationary background signal component. Found in Rician fading of radio signals due to multipath propagation and in MR images with noise corruption on non-zero NMR signals. Normally distributed quantities operated with sum of squares Chi-squared distribution, the distribution of a sum of squared standard normal variables; useful e.g. for inference regarding the sample variance of normally distributed samples (see chi-squared test) Student's t distribution, the distribution of the ratio of a standard normal variable and the square root of a scaled chi squared variable; useful for inference regarding the mean of normally distributed samples with unknown variance (see Student's t-test) F-distribution, the distribution of the ratio of two scaled chi squared variables; useful e.g. for inferences that involve comparing variances or involving R-squared (the squared correlation coefficient) As conjugate prior distributions in Bayesian inference Beta distribution, for a single probability (real number between 0 and 1); conjugate to the Bernoulli distribution and binomial distribution Gamma distribution, for a non-negative scaling parameter; conjugate to the rate parameter of a Poisson distribution or exponential distribution, the precision (inverse variance) of a normal distribution, etc. 
Dirichlet distribution, for a vector of probabilities that must sum to 1; conjugate to the categorical distribution and multinomial distribution; generalization of the beta distribution Wishart distribution, for a symmetric non-negative definite matrix; conjugate to the inverse of the covariance matrix of a multivariate normal distribution; generalization of the gamma distribution Some specialized applications of probability distributions The cache language models and other statistical language models used in natural language processing to assign probabilities to the occurrence of particular words and word sequences do so by means of probability distributions. In quantum mechanics, the probability density of finding the particle at a given point is proportional to the square of the magnitude of the particle's wavefunction at that point (see Born rule). Therefore, the probability distribution function of the position of a particle is described by , probability that the particle's position will be in the interval in dimension one, and a similar triple integral in dimension three. This is a key principle of quantum mechanics. Probabilistic load flow in power-flow study explains the uncertainties of input variables as probability distribution and provides the power flow calculation also in term of probability distribution. Prediction of natural phenomena occurrences based on previous frequency distributions such as tropical cyclones, hail, time in between events, etc. Fitting
Mathematics
Statistics and probability
null
23551
https://en.wikipedia.org/wiki/Perciformes
Perciformes
Perciformes (), also called the Acanthopteri, is an order or superorder of ray-finned fish in the clade Percomorpha. Perciformes means "perch-like". Among the well-known members of this group are perches and darters (Percidae), and also sea basses and groupers (Serranidae). Taxonomy Formerly, this group was thought to be even more diverse than it is thought to be now, containing about 41% of all bony fish (about 10,000 species) and about 160 families, which is the most of any order within the vertebrates. However, many of these other families have since been reclassified within their own orders within the clade Percomorpha, significantly reducing the size of the group. In contrast to this splitting, other groups formerly considered distinct, such as the Scorpaeniformes, are now classified in the Perciformes. Evolution The earliest fossil perciform is the extinct serranid Paleoserranus from the Early Paleocene of Mexico, but potential records of "percoids" are known from the Maastrichtian, including Eoserranus, Indiaichthys, and Prolates, although their exact taxonomic identity remains uncertain. Characteristics The dorsal and anal fins are divided into anterior spiny and posterior soft-rayed portions, which may be partially or completely separated. The pelvic fins usually have one spine and up to five soft rays, positioned unusually far forward under the chin or under the belly. Scales are usually ctenoid (rough to the touch), although sometimes they are cycloid (smooth to the touch) or otherwise modified. Classification Classification of this group is controversial. As traditionally defined before the introduction of cladistics, the Perciformes are almost certainly paraphyletic. Other orders that should possibly be included as suborders are the Scorpaeniformes, Tetraodontiformes, and Pleuronectiformes. Of the presently recognized suborders, several may be paraphyletic, as well. These are grouped by suborder/superfamily, generally following the text Fishes of the World.
Biology and health sciences
Acanthomorpha
null
23560
https://en.wikipedia.org/wiki/Proboscidea
Proboscidea
Proboscidea (; , ) is a taxonomic order of afrotherian mammals containing one living family (Elephantidae) and several extinct families. First described by J. Illiger in 1811, it encompasses the elephants and their close relatives. Three living species of elephant are currently recognised: the African bush elephant, the African forest elephant, and the Asian elephant. Extinct members of Proboscidea include the deinotheres, mastodons, gomphotheres and stegodonts. The family Elephantidae also contains several extinct groups, including mammoths and Palaeoloxodon. Proboscideans include some of the largest known land mammals, with the elephant Palaeoloxodon namadicus and mastodon "Mammut" borsoni suggested to have body masses surpassing , rivalling or exceeding paraceratheres (the otherwise largest known land mammals) in size. The largest extant proboscidean is the African bush elephant, with a world record of size of at the shoulder and . In addition to their enormous size, later proboscideans are distinguished by tusks and long, muscular trunks, which were less developed or absent in early proboscideans. Evolution Over 180 extinct members of Proboscidea have been described. The earliest proboscideans, Eritherium and Phosphatherium are known from the late Paleocene of Africa. The Eocene included Numidotherium, Moeritherium and Barytherium from Africa. These animals were relatively small and some, like Moeritherium and Barytherium were probably amphibious. A major event in proboscidean evolution was the collision of Afro-Arabia with Eurasia, during the Early Miocene, around 18-19 million years ago allowing proboscideans to disperse from their African homeland across Eurasia, and later, around 16-15 million years ago into North America across the Bering Land Bridge. Proboscidean groups prominent during the Miocene include the deinotheres, along with the more advanced elephantimorphs, including mammutids (mastodons), gomphotheres, amebelodontids (which includes the "shovel tuskers" like Platybelodon), choerolophodontids and stegodontids. Around 10 million years ago, the earliest members of the family Elephantidae emerged in Africa, having originated from gomphotheres. The Late Miocene saw major climatic changes, which resulted in the decline and extinction of many proboscidean groups such as amebelodontids and choerolophodontids. The earliest members of modern genera of Elephantidae appeared during the latest Miocene-early Pliocene around 6-5 million years ago. The elephantid genera Elephas (which includes the living Asian elephant) and Mammuthus (mammoths) migrated out of Africa during the late Pliocene, around 3.6 to 3.2 million years ago. Over the course of the Early Pleistocene, all non-elephantid probobscideans outside of the Americas became extinct (including mammutids, gomphotheres and deinotheres), with the exception of Stegodon. Gomphotheres dispersed into South America during this era as part of the Great American interchange, and mammoths migrating into North America around 1.5 million years ago. At the end of the Early Pleistocene, around 800,000 years ago the elephantid genus Palaeoloxodon dispersed outside of Africa, becoming widely distributed in Eurasia. By the beginning of the Late Pleistocene, proboscideans were represented by around 23 species. 
Proboscideans underwent a dramatic decline during the Late Pleistocene as part of the Late Pleistocene megafauna extinctions, with all remaining non-elephantid proboscideans (including Stegodon, mastodons, and the American gomphotheres Cuvieronius and Notiomastodon) and Palaeoloxodon becoming extinct, with mammoths only surviving in relict populations on islands around the Bering Strait into the Holocene, with their latest survival being on Wrangel Island around 4,000 years ago. The following cladogram is based on endocasts. Morphology Over the course of their evolution, proboscideans experienced a significant increase in body size. Some members of the families Deinotheriidae, Mammutidae, Stegodontidae and Elephantidae are thought to have exceeded modern elephants in size, with shoulder heights over and masses over , with average fully grown males of the mammutid "Mammut" borsoni having an estimated body mass of , making it one the largest and perhaps the largest land mammal ever, with a fragmentary specimen of the Indian elephant species Palaeoloxodon namadicus only known from a partial femur being speculatively estimated in the same study to have possibly reached a body mass of . As with other megaherbivores, including the extinct sauropod dinosaurs, the large size of proboscideans likely developed to allow them to survive on vegetation with low nutritional value. Their limbs grew longer and the feet shorter and broader. The feet were originally plantigrade and developed into a digitigrade stance with cushion pads and the sesamoid bone providing support, with this change developing around the common ancestor of Deinotheriidae and Elephantiformes. Members of Elephantiformes which have retracted nasal regions of the skull indicating the development of a trunk, as well as well-developed tusks on the upper and lower jaws. The skull grew larger, especially the cranium, while the neck shortened to provide better support for the skull. The increase in size led to the development and elongation of the mobile trunk to provide reach. The number of premolars, incisors and canines decreased. The cheek teeth (molars and premolars) became larger and more specialised. In Elephantiformes, the second upper incisor and lower incisor were transformed into ever growing tusks. The tusks are proportionally heavy for their size, being primarily composed of dentine. In primitive proboscideans, a band of enamel covers part of the tusk surface, though in many later groups including modern elephants the band is lost, with elephants only having enamel on the tusk tips of juveniles. The upper tusks were initially modest in size, but from the Late Miocene onwards proboscideans developed increasingly large tusks, with the longest ever recorded tusk being long belonging to the mammutid "Mammut" borsoni found in Greece, with some mammoth tusks likely weighing over . The lower tusks are generally smaller than the upper tusks, but could grow to large sizes in some species, like in Deinotherium (which lacks upper tusks), where they could grow over long, the amebelodontid Konobelodon has lower tusks long, with the longest lower tusks ever recorded being from the primitive elephantid Stegotetrabelodon which are around long. The molar teeth changed from being replaced vertically as in other mammals to being replaced horizontally in the clade Elephantimorpha. 
While early Elephantimorpha generally had lower jaws with an elongated mandibular symphysis at the front of the jaw with well developed lower tusks/incisors, from the Late Miocene onwards, many groups convergently developed brevirostrine (shortened) lower jaws with vestigial or no lower tusks. Elephantids are distinguished from other proboscideans by a major shift in the molar morphology to parallel lophs rather than the cusps of earlier proboscideans, allowing them to become higher crowned (hypsodont) and more efficient in consuming grass. Dwarfism Several species of proboscideans lived on islands and experienced insular dwarfism. This occurred primarily during the Pleistocene, when some elephant populations became isolated by fluctuating sea levels, although dwarf elephants did exist earlier in the Pliocene. These elephants likely grew smaller on islands due to a lack of large or viable predator populations and limited resources. By contrast, small mammals such as rodents develop gigantism in these conditions. Dwarf proboscideans are known to have lived in Indonesia, the Channel Islands of California, and several islands of the Mediterranean. Elephas celebensis of Sulawesi is believed to have descended from Elephas planifrons. Elephas falconeri of Malta and Sicily was only , and had probably evolved from the straight-tusked elephant. Other descendants of the straight-tusked elephant existed in Cyprus. Dwarf elephants of uncertain descent lived in Crete, Cyclades and Dodecanese, while dwarf mammoths are known to have lived in Sardinia. The Columbian mammoth colonised the Channel Islands and evolved into the pygmy mammoth. This species reached a height of and weighed . A population of small woolly mammoths survived on Wrangel Island as recently as 4,000 years ago. After their discovery in 1993, they were considered dwarf mammoths. This classification has been re-evaluated and since the Second International Mammoth Conference in 1999, these animals are no longer considered to be true "dwarf mammoths". Ecology It has been suggested that members of Elephantimorpha, including mammutids, gomphotheres, and stegodontids, lived in herds like modern elephants. Analysis of remains of the American mastodon (Mammut americanum) suggest that like modern elephants, that herds consisted of females and juveniles and that adult males lived solitarily or in small groups, and that adult males periodically engaged in fights with other males during periods similar to musth found in living elephants. These traits are suggested to be inherited from the last common ancestor of elephantimorphs, with musth-like behaviour also suggested to have occurred in gomphotheres. All elephantimorphs are suggested to have been capable of communication via infrasound, as found in living elephants. Deinotheres may have also lived in herds, based on tracks found in the Late Miocene of Romania. Over the course of the Neogene and Pleistocene, various members of Elephantida shifted from a browse-dominated diet towards mixed feeding or grazing. Classification Below is a taxonomy of proboscidean genera as of 2019. 
Order Proboscidea Illiger, 1811 †Eritherium Gheerbrant, 2009 †Moeritherium Andrews, 1901 †Saloumia Tabuce et al., 2019 †Family Numidotheriidae Shoshani & Tassy, 1992 †Phosphatherium Gheerbrant et al., 1996 †Arcanotherium Delmer, 2009 †Daouitherium Gheerbrant & Sudre, 2002 †Numidotherium Mahboubi et al., 1986 †Family Barytheriidae Andrews, 1906 †Omanitherium Seiffert et al., 2012 †Barytherium Andrews, 1901 †Family Deinotheriidae Bonaparte, 1845 †Chilgatherium Sanders et al., 2004 †Prodeinotherium Ehik, 1930 †Deinotherium Kaup, 1829 Suborder Elephantiformes Tassy, 1988 †Eritreum Shoshani et al., 2006 †Hemimastodon Pilgrim, 1912 †Palaeomastodon Andrews, 1901 †Phiomia Andrews & Beadnell, 1902 Infraorder Elephantimorpha Tassy & Shoshani, 1997 †Family Mammutidae Hay, 1922 †Losodokodon Rasmussen & Gutierrez, 2009 †Eozygodon Tassy & Pickford, 1983 †Zygolophodon Vacek, 1877 †Sinomammut Mothé et al., 2016 †Mammut Blumenbach, 1799 Parvorder Elephantida Tassy & Shoshani, 1997 †Family Choerolophodontidae Gaziry, 1976 †Afrochoerodon Pickford, 2001 †Choerolophodon Schlesinger, 1917 †Family Amebelodontidae Barbour, 1927 †Afromastodon Pickford, 2003 †Progomphotherium Pickford, 2003 †Eurybelodon Lambert, 2016 †Serbelodon Frick, 1933 †Archaeobelodon Tassy, 1984 †Protanancus Arambourg, 1945 †Amebelodon Barbour, 1927 †Konobelodon Lambert, 1990 †Torynobelodon Barbour, 1929 †Aphanobelodon Wang et al., 2016 †Platybelodon Borissiak, 1928 †Family Gomphotheriidae Hay, 1922 (paraphyletic) "trilophodont gomphotheres" †Gomphotherium Burmeister, 1837 †Blancotherium May, 2019 †Gnathabelodon Barbour & Sternberg, 1935 †Eubelodon Barbour, 1914 †Stegomastodon Pohlig, 1912 †Sinomastodon Tobien et al., 1986 †Notiomastodon Cabrera, 1929 †Rhynchotherium Falconer, 1868 †Cuvieronius Osborn, 1923 "tetralophodont gomphotheres" †Anancus Aymard, 1855 †Paratetralophodon Tassy, 1983 †Pediolophodon Lambert, 2007 †Tetralophodon Falconer, 1857 Superfamily Elephantoidea Gray, 1821 †Family Stegodontidae Osborn, 1918 †Stegolophodon Schlesinger, 1917 †Stegodon Falconer, 1857 Family Elephantidae Gray, 1821 †Stegodibelodon Coppens, 1972 †Stegotetrabelodon Petrocchi, 1941 †Selenotherium Mackaye, Brunet & Tassy, 2005 †Primelephas Maglio, 1970 Loxodonta Anonymous, 1827 †Palaeoloxodon Matsumoto, 1924 †Mammuthus Brookes, 1828 Elephas Linnaeus, 1758
Biology and health sciences
Proboscidea
null
23561
https://en.wikipedia.org/wiki/Paranthropus
Paranthropus
Paranthropus is a genus of extinct hominin which contains two widely accepted species: P. robustus and P. boisei. However, the validity of Paranthropus is contested, and it is sometimes considered to be synonymous with Australopithecus. They are also referred to as the robust australopithecines. They lived between approximately 2.9 and 1.2 million years ago (mya) from the end of the Pliocene to the Middle Pleistocene. Paranthropus is characterised by robust skulls, with a prominent gorilla-like sagittal crest along the midline—which suggest strong chewing muscles—and broad, herbivorous teeth used for grinding. However, they likely preferred soft food over tough and hard food. Typically, Paranthropus species were generalist feeders, but while P. robustus was likely an omnivore, P. boisei seems to have been herbivorous, possibly preferring abundant bulbotubers. Paranthropoids were bipeds. Despite their robust heads, they had comparatively small bodies. Average weight and height are estimated to be at for P. robustus males, at for P. boisei males, at for P. robustus females, and at for P. boisei females. They were possibly polygamous and patrilocal, but there are no modern analogues for australopithecine societies. They are associated with bone tools and contested as the earliest evidence of fire usage. They typically inhabited woodlands, and coexisted with some early human species, namely A. africanus, H. habilis and H. erectus. They were preyed upon by the large carnivores of the time, specifically crocodiles, leopards, sabertoothed cats and hyenas. Taxonomy Species P. robustus The genus Paranthropus was first erected by Scottish-South African palaeontologist Robert Broom in 1938, with the type species P. robustus. "Paranthropus" derives from Ancient Greek παρα para beside or alongside; and άνθρωπος ánthropos man. The type specimen, a male braincase, TM 1517, was discovered by schoolboy Gert Terblanche at the Kromdraai fossil site, about southwest of Pretoria, South Africa. By 1988, at least six individuals were unearthed in around the same area, now known as the Cradle of Humankind. In 1948, at Swartkrans Cave, in about the same vicinity as Kromdraai, Broom and South African palaeontologist John Talbot Robinson described P. crassidens based on a subadult jaw, SK 6. He believed later Paranthropus were morphologically distinct from earlier Paranthropus in the cave—that is, the Swartkrans Paranthropus were reproductively isolated from Kromdraai Paranthropus and the former eventually speciated. By 1988, several specimens from Swartkrans had been placed into P. crassidens. However, this has since been synonymised with P. robustus as the two populations do not seem to be very distinct. P. boisei In 1959, P. boisei was discovered by Mary Leakey at Olduvai Gorge, Tanzania (specimen OH 5). Her husband Louis named it Zinjanthropus boisei because he believed it differed greatly from Paranthropus and Australopithecus. The name derives from "Zinj", an ancient Arabic word for the coast of East Africa, and "boisei", referring to their financial benefactor Charles Watson Boise. However, this genus was rejected at Mr. Leakey's presentation before the 4th Pan-African Congress on Prehistory, as it was based on a single specimen. The discovery of the Peninj Mandible made the Leakeys reclassify their species as Australopithecus (Zinjanthropus) boisei in 1964, but in 1967, South African palaeoanthropologist Phillip V. Tobias subsumed it into Australopithecus as A. boisei. 
However, as more specimens were found, the combination Paranthropus boisei became more popular. It is debated whether the wide range of variation in jaw size indicates simply sexual dimorphism or a grounds for identifying a new species. It could be explained as groundmass filling in cracks naturally formed after death, inflating the perceived size of the bone. P. boisei also has a notably wide range of variation in skull anatomy, but these features likely have no taxonomic bearing. P. aethiopicus In 1968, French palaeontologists Camille Arambourg and Yves Coppens described "Paraustralopithecus aethiopicus" based on a toothless mandible from the Shungura Formation, Ethiopia (Omo 18). In 1976, American anthropologist Francis Clark Howell and Breton anthropologist Yves Coppens reclassified it as A. africanus. In 1986, after the discovery of the skull KNM WT 17000 by English anthropologist Alan Walker and Richard Leakey classified it into Paranthropus as P. aethiopicus. There is debate whether this is synonymous with P. boisei, the main argument for separation being the skull seems less adapted for chewing tough vegetation. In 1989, palaeoartist and zoologist Walter Ferguson reclassified KNM WT 17000 into a new species, walkeri, because he considered the skull's species designation questionable as it comprised the skull whereas the holotype of P. aethiopicus comprised only the mandible. Ferguson's classification is almost universally ignored, and is considered to be synonymous with P. aethiopicus. Others In 2015, Ethiopian palaeoanthropologist Yohannes Haile-Selassie and colleagues described the 3.5–3.2 Ma A. deyiremeda based on three jawbones from the Afar Region, Ethiopia. They noted that, though it shares many similarities with Paranthropus, it may not have been closely related because it lacked enlarged molars which characterize the genus. Nonetheless, in 2018, independent researcher Johan Nygren recommended moving it to Paranthropus based on dental and presumed dietary similarity. Validity In 1951, American anthropologists Sherwood Washburn and Bruce D. Patterson were the first to suggest that Paranthropus should be considered a junior synonym of Australopithecus as the former was only known from fragmentary remains at the time, and dental differences were too minute to serve as justification. In face of calls for subsumation, Leakey and Robinson continued defending its validity. Various other authors were still unsure until more complete remains were found. Paranthropus is sometimes classified as a subgenus of Australopithecus. There is currently no clear consensus on the validity of Paranthropus. The argument rests upon whether the genus is monophyletic—is composed of a common ancestor and all of its descendants—and the argument against monophyly (that the genus is paraphyletic) says that P. robustus and P. boisei evolved similar gorilla-like heads independently of each other by coincidence (convergent evolution), as chewing adaptations in hominins evolve very rapidly and multiple times at various points in the family tree (homoplasy). In 1999, a chimp-like ulna forearm bone was assigned to P. boisei, the first discovered ulna of the species, which was markedly different from P. robustus ulnae, which could suggest paraphyly. Evolution P. aethiopicus is the earliest member of the genus, with the oldest remains, from the Ethiopian Omo Kibish Formation, dated to 2.6 mya at the end of the Pliocene. It is sometimes regarded as the direct ancestor of P. boisei and P. robustus. 
It is possible that P. aethiopicus evolved even earlier, up to 3.3 mya, on the expansive Kenyan floodplains of the time. The oldest P. boisei remains date to about 2.3 mya from Malema, Malawi. P. boisei changed remarkably little over its nearly one-million-year existence. Paranthropus had spread into South Africa by 2 mya with the earliest P. robustus remains. It is sometimes suggested that Paranthropus and Homo are sister taxa, both evolving from Australopithecus. This may have occurred during a drying trend 2.8–2.5 mya in the Great Rift Valley, which caused the retreat of woodland environments in favor of open savanna, with forests growing only along rivers and lakes. Homo evolved in the former, and Paranthropus in the latter riparian environment. However, the classifications of Australopithecus species is problematic. Evolutionary tree according to a 2019 study: Description Skull Paranthropus had a massively built, tall and flat skull, with a prominent gorilla-like sagittal crest along the midline which anchored large temporalis muscles used in chewing. Like other australopithecines, Paranthropus exhibited sexual dimorphism, with males notably larger than females. They had large molars with a relatively thick tooth enamel coating (post-canine megadontia), and comparatively small incisors (similar in size to modern humans), possibly adaptations to processing abrasive foods. The teeth of P. aethiopicus developed faster than those of P. boisei. Paranthropus had adaptations to the skull to resist large bite loads while feeding, namely the expansive squamosal sutures. The notably thick palate was once thought to have been an adaptation to resist a high bite force, but is better explained as a byproduct of facial lengthening and nasal anatomy. In P. boisei, the jaw hinge was adapted to grinding food side-to-side (rather than up-and-down in modern humans), which is better at processing the starchy abrasive foods that likely made up the bulk of its diet. P. robustus may have chewed in a front-to-back direction instead, and had less exaggerated (less derived) anatomical features than P. boisei as it perhaps did not require them with this kind of chewing strategy. This may have also allowed P. robustus to better process tougher foods. The braincase volume averaged about , comparable to gracile australopithecines, but smaller than Homo. Modern human brain volume averages for men and for women. Limbs and locomotion Unlike P. robustus, the forearms of P. boisei were heavily built, which might suggest habitual suspensory behaviour as in orangutans and gibbons. A P. boisei shoulder blade indicates long infraspinatus muscles, which is also associated with suspensory behavior. A P. aethiopicus ulna, on the other hand, shows more similarities to Homo than P. boisei. Paranthropus were bipeds, and their hips, legs and feet resemble A. afarensis and modern humans. The pelvis is similar to A. afarensis, but the hip joints are smaller in P. robustus. The physical similarity implies a similar walking gait. Their modern-humanlike big toe indicates a modern-humanlike foot posture and range of motion, but the more distal ankle joint would have inhibited the modern human toe-off gait cycle. By 1.8 mya, Paranthropus and H. habilis may have achieved about the same grade of bipedality. Height and weight In comparison to the large, robust head, the body was rather small. Average weight for P. robustus may have been for males and for females; and for P. boisei for males and for females. 
At Swartkrans Cave Members 1 and 2, about 35% of the P. robustus individuals are estimated to have weighed , 22% about , and the remaining 43% bigger than the former but less than . At Member 3, all individuals were about . Female weight was about the same as in contemporaneous H. erectus, but male H. erectus were on average heavier than P. robustus males. P. robustus sites are oddly dominated by small adults, which could be explained by heightened predation or mortality of the larger males of a group. The largest-known Paranthropus individual was estimated at . According to a 1991 study, based on femur length and using the dimensions of modern humans, male and female P. robustus are estimated to have stood on average , respectively, and P. boisei . However, the latter estimates are problematic as there were no positively identified male P. boisei femurs at the time. In 2013, a 1.34 Ma male P. boisei partial skeleton was estimated to be at least and . Pathology Paranthropus seems to have had notably high rates of pitting enamel hypoplasia (PEH), where tooth enamel formation is spotty instead of mostly uniform. In P. robustus, about 47% of baby teeth and 14% of adult teeth were affected, in comparison to about 6.7% and 4.3%, respectively, in any other tested hominin species. The condition of these holes covering the entire tooth is consistent with the modern human ailment amelogenesis imperfecta. However, since the circular holes in enamel coverage are uniform in size, only present on the molar teeth, and have the same severity across individuals, the PEH may have been a genetic condition. It is possible that the coding DNA concerned with thickening enamel also left them more vulnerable to PEH. There have been 10 identified cases of cavities in P. robustus, indicating a rate similar to that of modern humans. A molar from Drimolen, South Africa, showed a cavity on the tooth root, a rare occurrence in fossil great apes. In order for cavity-creating bacteria to reach this area, the individual would have had to have presented either alveolar resorption, which is commonly associated with gum disease, or super-eruption of the teeth, which occurs when teeth become worn down and have to erupt a bit more to maintain a proper bite, exposing the root. The latter is most likely, and the exposed root seems to have caused hypercementosis to anchor the tooth in place. The cavity seems to have been healing, which may have been caused by a change in diet or mouth microbiome, or the loss of the adjacent molar. 
In leaner times, it may have fallen back on brittle food. It likely also consumed seeds and possibly tubers or termites. A high cavity rate could indicate honey consumption. The East African P. boisei, on the other hand, seems to have been largely herbivorous and fed on C4 plants. Its powerful jaws allowed it to consume a wide variety of different plants, though it may have largely preferred nutrient-rich bulbotubers, as these are known to thrive in the well-watered woodlands it is thought to have inhabited. Feeding on these, P. boisei may have been able to meet its daily caloric requirements of approximately 9,700 kJ after about 6 hours of foraging. Juvenile P. robustus may have relied more on tubers than adults did, given the elevated levels of strontium, compared to adults, in juvenile teeth from Swartkrans Cave; in that area, strontium was most likely sourced from tubers. Dentin exposure on juvenile teeth could indicate early weaning, or a more abrasive diet than that of adults which wore away the cementum and enamel coatings, or both. It is also possible that juveniles were simply less capable of removing grit from dug-up food, rather than purposefully seeking out more abrasive foods. Technology Oldowan toolkits, stone tools used to pound and shape other rocks or plant materials, have been uncovered at an excavation site on the Homa Peninsula in western Kenya. These tools are thought to be between 2.6 and 3 million years old, and were found near Paranthropus teeth. Bone tools dating between 2.3 and 0.6 mya have been found in abundance in Swartkrans, Kromdraai and Drimolen caves, and are often associated with P. robustus. Though Homo is also known from these caves, their remains are scarce compared to those of Paranthropus, making attribution to Homo unlikely. The tools also co-occur with Homo-associated Oldowan and possibly Acheulian stone tool industries. The bone tools were typically sourced from the shafts of long bones of medium- to large-sized mammals, but tools sourced from mandibles, ribs and horn cores have also been found. Bone tools have also been found at Olduvai Gorge and directly associated with P. boisei, the youngest dating to 1.34 mya, though a great proportion of other bone tools from there have ambiguous attribution. Stone tools from Kromdraai could possibly be attributed to P. robustus, as no Homo have been found there yet. The bone tools were not manufactured or purposefully shaped for a task. However, since the bones display no weathering (and were not scavenged randomly), and there is a clear preference for certain bones, raw materials were likely specifically hand-picked. This could indicate a cognitive ability similar to that of contemporary Stone Age Homo. Bone tools may have been used to cut or process vegetation, or to dig up tubers or termites. The form of P. robustus incisors appears to be intermediate between that of H. erectus and that of modern humans, which could indicate less food processing done by the teeth due to preparation with simple tools. Burnt bones were also associated with the inhabitants of Swartkrans, which could indicate some of the earliest fire usage. However, these bones were found in Member 3, where Paranthropus remains are rarer than those of H. erectus, and it is also possible the bones were burned in a wildfire and washed into the cave, as it is known they were not burned onsite. 
Social structure Given the marked anatomical and physical differences with modern great apes, there may be no modern analogue for australopithecine societies, so comparisons drawn with modern primates will not be entirely accurate. Paranthropus had pronounced sexual dimorphism, with males notably larger than females, which is commonly correlated with a male-dominated polygamous society. P. robustus may have had a harem society similar to modern forest-dwelling silverback gorillas, where one male has exclusive breeding rights to a group of females, as male-female size disparity is comparable to gorillas (based on facial dimensions), and younger males were less robust than older males (delayed maturity is also exhibited in gorillas). However, if P. robustus preferred a savanna habitat, a multi-male society would have been more productive to better defend the troop from predators in the more exposed environment, much like savanna baboons. Further, among primates, delayed maturity is also exhibited in the rhesus monkey which has a multi-male society, and may not be an accurate indicator of social structure. A 2011 strontium isotope study of P. robustus teeth from the dolomite Sterkfontein Valley found that, like other hominins, but unlike other great apes, P. robustus females were more likely to leave their place of birth (patrilocal). This also discounts the plausibility of a harem society, which would have resulted in a matrilocal society due to heightened male–male competition. Males did not seem to have ventured very far from the valley, which could either indicate small home ranges, or that they preferred dolomitic landscapes due to perhaps cave abundance or factors related to vegetation growth. Life history Dental development seems to have followed about the same timeframe as it does in modern humans and most other hominins, but, since Paranthropus molars are markedly larger, rate of tooth eruption would have been accelerated. Their life history may have mirrored that of gorillas as they have the same brain volume, which (depending on the subspecies) reach physical maturity from 12–18 years and have birthing intervals of 40–70 months. Palaeoecology Habitat It is generally thought that Paranthropus preferred to inhabit wooded, riverine landscapes. The teeth of Paranthropus, H. habilis and H. erectus are all known from various overlapping beds in East Africa, such as at Olduvai Gorge and the Turkana Basin. P. robustus and H. erectus also appear to have coexisted. P. boisei, known from the Great Rift Valley, may have typically inhabited wetlands along lakes and rivers, wooded or arid shrublands, and semiarid woodlands, though their presence in the savanna-dominated Malawian Chiwondo Beds implies they could tolerate a range of habitats. During the Pleistocene, there seem to have been coastal and montane forests in Eastern Africa. More expansive river valleys—namely the Omo River Valley—may have served as important refuges for forest-dwelling creatures. Being cut off from the forests of Central Africa by a savanna corridor, these East African forests would have promoted high rates of endemism, especially during times of climatic volatility. The Cradle of Humankind, the only area P. robustus is known from, was mainly dominated by the springbok Antidorcas recki, but other antelope, giraffes and elephants were also seemingly abundant megafauna. Other known primates are early Homo, the hamadryas baboon, and the extinct colobine monkey Cercopithecoides williamsi. Predators The left foot of a P. 
boisei specimen (though perhaps actually belonging to H. habilis) from Olduvai Gorge seems to have been bitten off by a crocodile, possibly Crocodylus anthropophagus, and another individual's leg shows evidence of leopard predation. Other likely predators of great apes at Olduvai include the hunting hyena Chasmaporthetes nitidula and the sabertoothed cats Dinofelis and Megantereon. The carnivore assemblage at the Cradle of Humankind comprises the two sabertooths and the hyena Lycyaenops silberbergi. Male P. robustus appear to have had a higher mortality rate than females. It is possible that males were more likely to be kicked out of a group, and that these lone males had a higher risk of predation. Extinction It was once thought that Paranthropus had become specialist feeders and were inferior to the more adaptable, tool-producing Homo, leading to their extinction, but this has been called into question. However, smaller brain size may have been a factor in their extinction, along with that of the gracile australopithecines. P. boisei may have died out due to an arid trend starting 1.45 mya, which caused the retreat of woodlands and increased competition with savanna baboons and Homo for alternative food resources. South African Paranthropus appear to have outlasted their East African counterparts. The youngest record of P. boisei comes from Konso, Ethiopia, at about 1.4 mya; however, there are no East African sites dated between 1.4 and 1 mya, so it may have persisted until 1 mya. P. robustus, on the other hand, was recorded in Swartkrans until Member 3, dated to 1–0.6 mya (the Middle Pleistocene), though more likely the younger side of that estimate.
Biology and health sciences
Evolution
null
23562
https://en.wikipedia.org/wiki/Perissodactyla
Perissodactyla
Perissodactyla (, ), or odd-toed ungulates, is an order of ungulates. The order includes about 17 living species divided into three families: Equidae (horses, asses, and zebras), Rhinocerotidae (rhinoceroses), and Tapiridae (tapirs). They typically have reduced the weight-bearing toes to three or one of the five original toes, though tapirs retain four toes on their front feet. The nonweight-bearing toes are either present, absent, vestigial, or positioned posteriorly. By contrast, artiodactyls (even-toed ungulates) bear most of their weight equally on four or two (an even number) of the five toes: their third and fourth toes. Another difference between the two is that perissodactyls digest plant cellulose in their intestines, rather than in one or more stomach chambers as artiodactyls, with the exception of Suina, do. The order was considerably more diverse in the past, with notable extinct groups including the brontotheres, palaeotheres, chalicotheres, and the paraceratheres, with the paraceratheres including the largest known land mammals to have ever existed. Despite their very different appearances, they were recognized as related families in the 19th century by the zoologist Richard Owen, who also coined the order's name. Anatomy The largest odd-toed ungulates are rhinoceroses, and the extinct Paraceratherium, a hornless rhino from the Oligocene, is considered one of the largest land mammals of all time. At the other extreme, an early member of the order, the prehistoric horse Eohippus, had a withers height of only . Apart from dwarf varieties of the domestic horse and donkey, living perissodactyls reach a body length of and a weight of . While rhinos have only sparse hair and exhibit a thick epidermis, tapirs and horses have dense, short coats. Most species are grey or brown, although zebras and young tapirs are striped. Limbs The main axes of both the front and rear feet pass through the third toe, which is always the largest. The remaining toes have been reduced in size to varying degrees. Tapirs, which are adapted to walking on soft ground, have four toes on their fore feet and three on their hind feet. Living rhinos have three toes on both the front and hind feet. Modern equines possess only a single toe; however, their feet are equipped with hooves, which almost completely cover the toe. Rhinos and tapirs, by contrast, have hooves covering only the leading edge of the toes, with the bottom being soft. Ungulates have stances that require them to stand on the tips of their toes. Equine ungulates with only one digit or hoof have decreased mobility in their limbs, which allows for faster running speeds and agility. Differences in limb structure and physiology between ungulates and other mammals can be seen in the shape of the humerus. For example, often shorter, thicker, bones belong to the largest and heaviest ungulates like the rhinoceros. The ulnae and fibulae are reduced in horses. A common feature that clearly distinguishes this group from other mammals is the articulation between the astragalus, the scaphoid and the cuboid, which greatly restricts the mobility of the foot. The thigh is relatively short, and the clavicle is absent. Skull and teeth Odd-toed ungulates have a long upper jaw with an extended diastema between the front and cheek teeth, giving them an elongated head. The various forms of snout between families are due to differences in the form of the premaxilla. The lacrimal bone has projecting cusps in the eye sockets and a wide contact with the nasal bone. 
The temporomandibular joint is high and the mandible is enlarged. Rhinos have one or two horns made of agglutinated keratin, unlike the horns of even-toed ungulates (Bovidae and pronghorn), which have a bony core. The number and form of the teeth vary according to diet. The incisors and canines can be very small or completely absent, as in the two African species of rhinoceros. In horses, usually only the males possess canines. The surface shape and height of the molars are heavily dependent on whether soft leaves or hard grass make up the main component of the diet. Three or four cheek teeth are present on each jaw half. The guttural pouch, a small outpocketing of the auditory tube that drains the middle ear, is a characteristic feature of Perissodactyla. The guttural pouch is of particular concern in equine veterinary practice, due to its frequent involvement in some serious infections. Aspergillosis (infection with Aspergillus mould) of the guttural pouch (also called guttural pouch mycosis) can cause serious damage to the tissues of the pouch, as well as to surrounding structures including important cranial nerves (nerves IX–XII: the glossopharyngeal, vagus, accessory and hypoglossal nerves) and the internal carotid artery. Strangles (Streptococcus equi equi infection) is a highly transmissible respiratory infection of horses that can cause pus to accumulate in the guttural pouch; horses with S. equi equi colonising their guttural pouch can continue to intermittently shed the bacteria for several months, and should be isolated from other horses during this time to prevent transmission. Due to the intermittent nature of S. equi equi shedding, prematurely reintroducing an infected horse may risk exposing other horses to the infection, even though the shedding horse appears well and may have previously returned negative samples. The function of the guttural pouch has been difficult to determine, but it is now believed to play a role in cooling blood in the internal carotid artery before it enters the brain. Gut All perissodactyls are hindgut fermenters. In contrast to ruminants, hindgut fermenters store digested food that has left the stomach in an enlarged cecum, where it begins to be digested by microbes, with fermentation continuing in the large colon. No gallbladder is present. The stomach of perissodactyls is simply built, while the cecum accommodates up to in horses. The small intestine is very long, reaching up to in horses. Extraction of nutrients from food is relatively inefficient, which probably explains why no odd-toed ungulates are small; nutritional requirements per unit of body weight are lower for large animals, as their surface-area-to-volume ratio is smaller. Lack of carotid rete Unlike artiodactyls, perissodactyls lack a carotid rete, a heat exchanger that reduces the dependence of the temperature of the brain on that of the body. As a result, perissodactyls have limited thermoregulatory flexibility compared to artiodactyls, which has restricted them to habitats with low seasonality that are rich in food and water, such as tropical forests. In contrast, artiodactyls occupy a wide range of habitats ranging from the Arctic Circle to deserts and tropical savannahs. Distribution Most extant perissodactyl species occupy a small fraction of their original range. Members of this group are now found only in Central and South America, eastern and southern Africa, and central, southern, and southeastern Asia. 
During the peak of odd-toed ungulate existence, from the Eocene to the Oligocene, perissodactyls were distributed over much of the globe, the major exceptions being Australia and Antarctica. Horses and tapirs arrived in South America after the formation of the Isthmus of Panama around 3 million years ago in the Pliocene. Their North American counterparts died out around 10,000 years ago, leaving only Baird's tapir with a range extending to what is now southern Mexico. The tarpans were pushed to extinction in 19th century Europe. Hunting and habitat destruction have reduced the surviving perissodactyl species to fragmented populations. In contrast, domesticated horses and donkeys have gained a worldwide distribution, and feral animals of both species are now also found in regions outside their original range, such as in Australia. Lifestyle and diet Perissodactyls inhabit a number of different habitats, leading to different lifestyles. Tapirs are solitary and inhabit mainly tropical rainforests. Rhinos tend to live alone in rather dry savannas, and in Asia, wet marsh or forest areas. Horses inhabit open areas such as grasslands, steppes, or semi-deserts, and live together in groups. Odd-toed ungulates are exclusively herbivores that feed, to varying degrees, on grass, leaves, and other plant parts. A distinction is often made between primarily grass feeders (white rhinos, equines) and leaf feeders (tapirs, other rhinos). Reproduction and development Odd-toed ungulates are characterized by a long gestation period and a small litter size, usually delivering a single young. The gestation period is 330–500 days, being longest in rhinos. Newborn perissodactyls are precocial, meaning offspring are born already quite independent: for example, young horses can begin to follow the mother after a few hours. The young are nursed for a relatively long time, often into their second year, with rhinos reaching sexual maturity around eight or ten years old, but horses and tapirs maturing around two to four years old. Perissodactyls are long-lived, with several species, such as rhinos, reaching an age of almost 50 years in captivity. Taxonomy Outer taxonomy Traditionally, the odd-toed ungulates were classified with other mammals such as artiodactyls, hyraxes, elephants and other "ungulates". A close family relationship with hyraxes was suspected based on similarities in the construction of the ear and the course of the carotid artery. Molecular genetic studies, however, have shown the ungulates to be polyphyletic, meaning that in some cases the similarities are the result of convergent evolution rather than common ancestry. Elephants and hyraxes are now considered to belong to Afrotheria, so are not closely related to the perissodactyls. These in turn are in the Laurasiatheria, a superorder that had its origin in the former supercontinent Laurasia. Molecular genetic findings suggest that the cloven Artiodactyla (containing the cetaceans as a deeply nested subclade) are the sister taxon of the Perissodactyla; together, the two groups form the Euungulata. More distant are the bats (Chiroptera) and Ferae (a common taxon of carnivorans, Carnivora, and pangolins, Pholidota). In a discredited alternative scenario, a close relationship exists between perissodactyls, carnivorans, and bats, this assembly comprising the Pegasoferae. 
According to studies published in March 2015, odd-toed ungulates are in a close family relationship with at least some of the so-called Meridiungulata, a very diverse group of mammals living from the Paleocene to the Pleistocene in South America, whose systematic unity is largely unexplained. Some of these were classified based on their paleogeographic distribution. However, a close relationship can be worked out to perissodactyls by protein sequencing and comparison with fossil collagen from remnants of phylogenetically young members of the Meridiungulata (specifically Macrauchenia from the Litopterna and Toxodon from the Notoungulata). Both kinship groups, the odd-toed ungulates and the Litopterna-Notoungulata, are now in the higher-level taxon of Panperissodactyla. This kinship group is included among the Euungulata, which also contains the even-toed ungulates (Artiodactyla). The separation of the Litopterna-Notoungulata group from the perissodactyls probably took place before the Cretaceous–Paleogene extinction event. "Condylarths" can probably be considered the starting point for the development of the two groups, as they represent a heterogeneous group of primitive ungulates that mainly inhabited the northern hemisphere in the Paleogene. Modern members Odd-toed ungulates comprise three living families with around 17 species—in horses, however, the exact count is still controversial. Rhinos and tapirs are more closely related to each other than to horses. The separation of horses from other perissodactyls took place according to molecular genetic analysis in the Paleocene some 56 million years ago, while the rhinos and tapirs split off in the lower-middle Eocene, about 47 million years ago. Order Perissodactyla Suborder Hippomorpha Family Equidae: horses and allies, seven species in one genus Wild horse, Equus ferus Tarpan, †Equus ferus ferus Przewalski's horse, Equus ferus przewalskii Domestic horse, Equus ferus caballus African wild ass, Equus africanus Nubian wild ass, Equus africanus africanus Somali wild ass, Equus africanus somaliensis Domesticated ass (donkey), Equus africanus asinus Atlas wild ass, †Equus africanus atlanticus Onager or Asiatic wild ass, Equus hemionus Mongolian wild ass, Equus hemionus hemionus Turkmenian kulan, Equus hemionus kulan Persian onager, Equus hemionus onager Indian wild ass, Equus hemionus khur Syrian wild ass, †Equus hemionus hemippus Kiang or Tibetan wild ass, Equus kiang Western kiang, Equus kiang kiang Eastern kiang, Equus kiang holdereri Southern kiang, Equus kiang polyodon Plains zebra, Equus quagga Quagga, †Equus quagga quagga Burchell's zebra, Equus quagga burchellii Grant's zebra, Equus quagga boehmi Maneless zebra, Equus quagga borensis Chapman's zebra, Equus quagga chapmani Crawshay's zebra, Equus quagga crawshayi Selous' zebra, Equus quagga selousi Mountain zebra, Equus zebra Cape mountain zebra, Equus zebra zebra Hartmann's mountain zebra, Equus zebra hartmannae Grévy's zebra, Equus grevyi Suborder Ceratomorpha Family Tapiridae: tapirs, five species in one genus Brazilian tapir, Tapirus terrestris Mountain tapir, Tapirus pinchaque Baird's tapir, Tapirus bairdii Malayan tapir, Tapirus indicus Kabomani tapir, Tapirus kabomani Family Rhinocerotidae: rhinoceroses, five species in four genera Black rhinoceros, Diceros bicornis Southern black rhinoceros, †Diceros bicornis bicornis North-eastern black rhinoceros, †Diceros bicornis brucii Chobe black rhinoceros, Diceros bicornis chobiensis Uganda black rhinoceros, Diceros bicornis ladoensis 
Western black rhinoceros, †Diceros bicornis longipes Eastern black rhinoceros, Diceros bicornis michaeli South-central black rhinoceros, Diceros bicornis minor South-western black rhinoceros, Diceros bicornis occidentalis White rhinoceros, Ceratotherium simum Southern white rhinoceros, Ceratotherium simum simum Northern white rhinoceros, Ceratotherium simum cottoni Indian rhinoceros, Rhinoceros unicornis Javan rhinoceros, Rhinoceros sondaicus Indonesian Javan rhinoceros, Rhinoceros sondaicus sondaicus Vietnamese Javan rhinoceros, Rhinoceros sondaicus annamiticus Indian Javan rhinoceros, †Rhinoceros sondaicus inermis Sumatran rhinoceros, Dicerorhinus sumatrensis Western Sumatran rhinoceros, Dicerorhinus sumatrensis sumatrensis Eastern Sumatran rhinoceros, Dicerorhinus sumatrensis harrissoni Northern Sumatran rhinoceros, †Dicerorhinus sumatrensis lasiotis Prehistoric members There are many perissodactyl fossils of multivariant form. The major lines of development include the following groups: Brontotherioidea were among the earliest known large mammals, consisting of the families of Brontotheriidae (synonym Titanotheriidae), the most well-known representative being Megacerops and the more basal family Lambdotheriidae. They were generally characterized in their late phase by a bony horn at the transition from the nose to the frontal bone and flat molars suitable for chewing soft plant food. The Brontotheroidea, which were almost exclusively confined to North America and Asia, died out at the beginning of the Upper Eocene. Equoidea also developed in the Eocene. Palaeotheriidae are known mainly from Europe. In contrast, the horse family (Equidae) flourished and spread. Over time this group saw a reduction in toe number, extension of the limbs, and the progressive adjustment of the teeth for eating hard grasses. Chalicotherioidea represented another characteristic group, consisting of the families Chalicotheriidae and Lophiodontidae. The Chalicotheriidae developed claws instead of hooves and considerable extension of the forelegs. The best-known genera include Chalicotherium and Moropus. Chalicotherioidea died out in the Pleistocene. Rhinocerotoidea (rhino relatives) included a large variety of forms from the Eocene up to the Oligocene, including dog-size leaf feeders, semiaquatic animals, and also huge long-necked animals. Only a few had horns on the nose. The Amynodontidae were hippo-like, aquatic animals. Hyracodontidae developed long limbs and long necks that were most pronounced in the Paraceratherium (formerly known as Baluchitherium or Indricotherium), the second largest known land mammal ever to have lived (after Palaeoloxodon namadicus). The rhinos (Rhinocerotidae) emerged in the Middle Eocene; five species survive to the present day. Tapiroidea reached their greatest diversity in the Eocene, when several families lived in Eurasia and North America. They retained a primitive physique and were noted for developing a trunk. The extinct families within this group include the Helaletidae. Several mammal groups traditionally classified as condylarths, long-understood to be a wastebasket taxon, such as hyopsodontids and phenacodontids, are now understood to be part of the odd-toed ungulate assemblage. Phenacodontids seem to be stem-perissodactyls, while hyopsodontids are closely related to horses and brontotheres, despite their more primitive overall appearance. 
Desmostylia and Anthracobunidae have traditionally been placed among the afrotheres, but they may actually represent stem-perissodactyls. They are an early lineage of mammals that took to the water, spreading across semi-aquatic to fully marine niches in the Tethys Ocean and the northern Pacific. However, later studies have shown that, while anthracobunids are definite perissodactyls, desmostylians have enough mixed characters to suggest that a position among the Afrotheria is not out of the question. Order Perissodactyla Superfamily Brontotherioidea †Brontotheriidae Suborder Hippomorpha †Hyopsodontidae †Pachynolophidae Superfamily Equoidea †Indolophidae †Palaeotheriidae (might be a basal perissodactyl grade instead) Clade Tapiromorpha †Isectolophidae (a basal family of Tapiromorpha; from the Eocene epoch) †Suborder Ancylopoda †Lophiodontidae Superfamily Chalicotherioidea †Eomoropidae (basal grade of chalicotheroids) †Chalicotheriidae Suborder Ceratomorpha Superfamily Rhinocerotoidea †Amynodontidae †Hyracodontidae Superfamily Tapiroidea †Deperetellidae †Rhodopagidae (sometimes recognized as a subfamily of deperetellids) †Lophialetidae †Eoletidae (sometimes recognized as a subfamily of lophialetids) †Anthracobunidae (a family of stem-perissodactyls; from the Early to Middle Eocene epoch) †Phenacodontidae (a clade of stem-perissodactyls; from the Early Palaeocene to the Middle Eocene epoch) Higher classification of perissodactyls Relationships within the large group of odd-toed ungulates are not fully understood. Initially, after the establishment of "Perissodactyla" by Richard Owen in 1848, the present-day representatives were considered equal in rank. In the first half of the 20th century, a more systematic differentiation of odd-toed ungulates began, based on a consideration of fossil forms, and they were placed in two major suborders: Hippomorpha and Ceratomorpha. The Hippomorpha comprises today's horses and their extinct members (Equoidea); the Ceratomorpha consist of tapirs and rhinos plus their extinct members (Tapiroidea and Rhinocerotoidea). The names Hippomorpha and Ceratomorpha were introduced in 1937 by Horace Elmer Wood, in response to criticism of the name "Solidungula" that he proposed three years previously. It had been based on the grouping of horses and Tridactyla and on the rhinoceros/tapir complex. The extinct brontotheriidae were also classified under Hippomorpha and therefore possess a close relationship to horses. Some researchers accept this assignment because of similar dental features, but there is also the view that a very basal position within the odd-toed ungulates places them rather in the group of Titanotheriomorpha. Originally, the Chalicotheriidae were seen as members of Hippomorpha, and presented as such in 1941. William Berryman Scott thought that, as claw-bearing perissodactyls, they belong in the new suborder Ancylopoda (where Ceratomorpha and Hippomorpha as odd-toed ungulates were combined in the group of Chelopoda). The term Ancylopoda, coined by Edward Drinker Cope in 1889, had been established for chalicotheres. However, further morphological studies from the 1960s showed a middle position of Ancylopoda between Hippomorpha and Ceratomorpha. Leonard Burton Radinsky saw all three major groups of odd-toed ungulates as peers, based on the extremely long and independent phylogenetic development of the three lines. In the 1980s, Jeremy J. 
Hooker saw a general similarity between Ancylopoda and Ceratomorpha based on dentition, especially in the earliest members, leading to the unification in 1984 of the two suborders in the Tapiromorpha. At the same time, he expanded the Ancylopoda to include the Lophiodontidae. The name "Tapiromorpha" goes back to Ernst Haeckel, who coined it in 1873, but it was long considered synonymous with Ceratomorpha because Wood had not considered it in 1937 when Ceratomorpha were named, since the term had been used quite differently in the past. Also in 1984, Robert M. Schoch used the conceptually similar term Moropomorpha, which today is considered synonymous with Tapiromorpha. Included within the Tapiromorpha are the now extinct Isectolophidae, a sister group of the Ancylopoda-Ceratomorpha group and thus the most primitive members of this relationship complex. Evolutionary history Origins The evolutionary development of Perissodactyla is well documented in the fossil record. Numerous finds are evidence of the adaptive radiation of this group, which was once much more varied and widely dispersed. Radinskya from the late Paleocene of East Asia is often considered to be one of the oldest close relatives of the ungulates. Its 8 cm skull must have belonged to a very small and primitive animal with a π-shaped crown pattern on the enamel of its rear molars similar to that of perissodactyls and their relatives, especially the rhinos. Finds of Cambaytherium and Kalitherium in the Cambay shale of western India indicate an origin in Asia dating to the Lower Eocene roughly 54.5 million years ago. Their teeth also show similarities to Radinskya as well as to the Tethytheria clade. The saddle-shaped configuration of the navicular joints and the mesaxonic construction of the front and hind feet also indicate a close relationship to Tethytheria. However, this construction deviates from that of Cambaytherium, indicating that it is actually a member of a sister group. Ancestors of Perissodactyla may have arrived via an island bridge from the Afro-Arab landmass onto the Indian subcontinent as it drifted north towards Asia. A study on Cambaytherium suggests an origin in India prior to or near its collision with Asia. The alignment of hyopsodontids and phenacodontids with Perissodactyla in general suggests an older Laurasian origin and distribution for the clade, dispersed across the northern continents already in the early Paleocene. These forms already show a fairly well-developed molar morphology, with no intermediary forms as evidence of the course of its development. The close relationship between meridiungulate mammals and perissodactyls in particular is of interest since the former appeared in South America soon after the K–T event, implying rapid ecological radiation and dispersal after mass extinction. Phylogeny The Perissodactyla appeared relatively abruptly at the beginning of the Lower Paleocene about 63 million years ago, both in North America and Asia, in the form of phenacodontids and hyopsodontids. The oldest finds from an extant group originate, among other sources, from Sifrhippus, an ancestor of the horses from the Willwood Formation in northwestern Wyoming. The distant ancestors of tapirs, such as Ganderalophus, appeared not long after that in the Ghazij Formation in Balochistan, as did Litolophus from the Chalicotheriidae line and Eotitanops from the Brontotheriidae. 
Initially, the members of the different lineages looked quite similar, with an arched back and generally four toes on the front and three on the hind feet. Eohippus, which is considered a member of the horse family, outwardly resembled Hyrachyus, the first representative of the rhino and tapir line. All were small compared to later forms and lived as fruit and foliage eaters in forests. The first of the megafauna to emerge were the brontotheres, in the Middle and Upper Eocene. Megacerops, known from North America, reached a withers height of and could have weighed just over . The decline of brontotheres at the end of the Eocene is associated with competition arising from the advent of more successful herbivores. More successful lines of odd-toed ungulates emerged at the end of the Eocene when dense jungles gave way to steppe, such as the chalicotheriid rhinos, and their immediate relatives; their development also began with very small forms. Paraceratherium, one of the largest mammals ever to walk the earth, evolved during this era. They weighed up to and lived throughout the Oligocene in Eurasia. About 20 million years ago, at the onset of the Miocene, the perissodactyls first reached Africa when it became connected to Eurasia because of the closing of the Tethys Ocean. For the same reason, however, new animals such as the mammoths also entered the ancient settlement areas of odd-toed ungulates, creating competition that led to the extinction of some of their lines. The rise of ruminants, which occupied similar ecological niches and had a much more efficient digestive system, is also associated with the decline in diversity of odd-toed ungulates. A significant cause for the decline of perissodactyls was climate change during the Miocene, leading to a cooler and drier climate accompanied by the spread of open landscapes. However, some lines flourished, such as the horses and rhinos; anatomical adaptations made it possible for them to consume tougher grass food. This led to open land forms that dominated newly created landscapes. With the emergence of the Isthmus of Panama in the Pliocene, perissodactyls and other megafauna were given access to one of their last habitable continents: South America. However, many perissodactyls became extinct at the end of the ice ages, including American horses and the Elasmotherium. Whether over-hunting by humans (overkill hypothesis), climatic change, or a combination of both factors was responsible for the extinction of ice age mega-fauna, remains controversial. Research history In 1758, in his seminal work Systema Naturae, Linnaeus (1707–1778) classified horses (Equus) together with hippos (Hippopotamus). At that time, this category also included the tapirs (Tapirus), more precisely the lowland or South American tapir (Tapirus terrestus), the only tapir then known in Europe. Linnaeus classified this tapir as Hippopotamus terrestris and put both genera in the group of the Belluae ("beasts"). He combined the rhinos with the Glires, a group now consisting of the lagomorphs and rodents. Mathurin Jacques Brisson (1723–1806) first separated the tapirs and hippos in 1762 with the introduction of the concept le tapir. He also separated the rhinos from the rodents, but did not combine the three families now known as the odd-toed ungulates. In the transition to the 19th century, the individual perissodactyl genera were associated with various other groups, such as the proboscidean and even-toed ungulates. 
In 1795, Étienne Geoffroy Saint-Hilaire (1772–1844) and Georges Cuvier (1769–1832) introduced the term "pachyderm" (Pachydermata), including in it not only the rhinos and elephants, but also the hippos, pigs, peccaries, tapirs and hyrax. The horses were still generally regarded as a group separate from other mammals and were often classified under the name Solidungula or Solipèdes, meaning "one-hoof animal". In 1861, Henri Marie Ducrotay de Blainville (1777–1850) classified ungulates by the structure of their feet, differentiating those with an even number of toes from those with an odd number. He moved the horses as solidungulate over to the tapirs and rhinos as multungulate animals and referred to all of them together as onguligrades à doigts impairs, coming close to the concept of the odd-toed ungulate as a systematic unit. Richard Owen (1804–1892) quoted Blainville in his study on fossil mammals of the Isle of Wight and introduced the name Perissodactyla. In 1884, Othniel Charles Marsh (1831–1899) came up with the concept Mesaxonia, which he used for what are today called the odd-toed ungulates, including their extinct relatives, but explicitly excluding the hyrax. Mesaxonia is now considered a synonym of Perissodactyla, but it was sometimes also used for the true odd-toed ungulates as a subcategory (rhinos, horses, tapirs), while Perissodactyla stood for the entire order, including the hyrax. The assumption that hyraxes were Perissodactyla was held well into the 20th century. Only with the advent of molecular genetic research methods had it been recognized that the hyrax was not closely related to perissodactyls but rather to elephants and manatees. Interactions with humans The domestic horse and the donkey play an important role in human history, particularly as transport, work and pack animals. The domestication of both species began several millennia BCE. Due to the motorisation of agriculture and the spread of automobile traffic, such use has declined sharply in Western industrial countries; riding is usually undertaken more as a hobby or sport. In less developed regions of the world, traditional uses for these animals are, however, still widespread. To a lesser extent, horses and donkeys are also kept for their meat and their milk. In contrast, the existence in the wild of almost all other odd-toed ungulates species has declined dramatically because of hunting and habitat destruction. The quagga is extinct and Przewalski's horse was once eradicated in the wild. Present threat levels, according to the International Union for Conservation of Nature (2012): Four species are considered critically endangered: the Javan rhinoceros, the Sumatran rhinoceros, the black rhinoceros and the African wild ass. Six species are endangered: the mountain tapir, the Central American tapir, the Malayan tapir, the wild horse and Grévy's zebra. Three species are considered vulnerable: the Indian rhinoceros, the South American tapir and the mountain zebra. The onager, the plains zebra and the white rhinoceros are near-threatened; however, the northern subspecies, Ceratotherium simum cottoni (northern white rhinoceros) is close to extinction. The kiang is not considered at risk (least concern). Conservation Hunting and habitat loss due to land conversion and human encroachment are the most significant threats to the three endangered species of tapir. The Malayan tapir's inland forest habitat is of particular concern, as this land is being deforested rapidly and converted into palm oil plantations. 
Climate change is shifting the suitable range of mountain tapirs further up the Andes Mountains, reducing their available habitat. Hunting of mountain and Baird's tapirs in Central and South America for their meat is common and is made easier by climate change, as population densities are forcibly increased. Although hunting is illegal in protected areas throughout this region, regulations are often ignored or unenforced. Conservation efforts for tapirs primarily consist of legal protections from hunting and international trade, though proposals of habitat protection and restoration at the local level are underway in all affected countries.
Biology and health sciences
Perissodactyla
null
23572
https://en.wikipedia.org/wiki/Partially%20ordered%20set
Partially ordered set
In mathematics, especially order theory, a partial order on a set is an arrangement such that, for certain pairs of elements, one precedes the other. The word partial is used to indicate that not every pair of elements needs to be comparable; that is, there may be pairs for which neither element precedes the other. Partial orders thus generalize total orders, in which every pair is comparable. Formally, a partial order is a homogeneous binary relation that is reflexive, antisymmetric, and transitive. A partially ordered set (poset for short) is an ordered pair (X, ≤) consisting of a set X (called the ground set of the poset) and a partial order ≤ on X. When the meaning is clear from context and there is no ambiguity about the partial order, the set X itself is sometimes called a poset. Partial order relations The term partial order usually refers to the reflexive partial order relations, referred to in this article as non-strict partial orders. However, some authors use the term for the other common type of partial order relations, the irreflexive partial order relations, also called strict partial orders. Strict and non-strict partial orders can be put into a one-to-one correspondence, so for every strict partial order there is a unique corresponding non-strict partial order, and vice versa. Partial orders A reflexive, weak, or non-strict partial order, commonly referred to simply as a partial order, is a homogeneous relation ≤ on a set P that is reflexive, antisymmetric, and transitive. That is, for all a, b, c ∈ P, it must satisfy: Reflexivity: a ≤ a, i.e. every element is related to itself. Antisymmetry: if a ≤ b and b ≤ a, then a = b, i.e. no two distinct elements precede each other. Transitivity: if a ≤ b and b ≤ c, then a ≤ c. A non-strict partial order is also known as an antisymmetric preorder. Strict partial orders An irreflexive, strong, or strict partial order is a homogeneous relation < on a set P that is irreflexive, asymmetric and transitive; that is, it satisfies the following conditions for all a, b, c ∈ P: Irreflexivity: not a < a, i.e. no element is related to itself (also called anti-reflexive). Asymmetry: if a < b, then not b < a. Transitivity: if a < b and b < c, then a < c. A transitive relation is asymmetric if and only if it is irreflexive. So the definition is the same if it omits either irreflexivity or asymmetry (but not both). A strict partial order is also known as an asymmetric strict preorder. Correspondence of strict and non-strict partial order relations Strict and non-strict partial orders on a set P are closely related. A non-strict partial order ≤ may be converted to a strict partial order by removing all relationships of the form a ≤ a; that is, the strict partial order is the set < := ≤ ∖ ΔP, where ΔP := {(p, p) : p ∈ P} is the identity relation on P and ∖ denotes set subtraction. Conversely, a strict partial order < on P may be converted to a non-strict partial order by adjoining all relationships of that form; that is, ≤ := ΔP ∪ < is a non-strict partial order. Thus, if ≤ is a non-strict partial order, then the corresponding strict partial order < is the irreflexive kernel given by a < b if a ≤ b and a ≠ b. Conversely, if < is a strict partial order, then the corresponding non-strict partial order ≤ is the reflexive closure given by: a ≤ b if a < b or a = b. Dual orders The dual (or opposite) R^op of a partial order relation R is defined by letting R^op be the converse relation of R, i.e. x R^op y if and only if y R x. The dual of a non-strict partial order is a non-strict partial order, and the dual of a strict partial order is a strict partial order. The dual of a dual of a relation is the original relation. 
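As a concrete illustration of the axioms and of the strict/non-strict correspondence described above, here is a minimal sketch in Python; the function names are illustrative rather than from any library, and the small relation used is divisibility restricted to {1, 2, 4}.

```python
# Minimal sketch (illustrative helper names): checking the non-strict partial order
# axioms on a finite set, and converting between a non-strict order and its irreflexive kernel.
from itertools import product

def is_partial_order(P, leq):
    """leq is a set of ordered pairs (a, b), read as 'a <= b'."""
    reflexive = all((a, a) in leq for a in P)
    antisymmetric = all(not ((a, b) in leq and (b, a) in leq and a != b)
                        for a, b in product(P, repeat=2))
    transitive = all((a, c) in leq
                     for a, b, c in product(P, repeat=3)
                     if (a, b) in leq and (b, c) in leq)
    return reflexive and antisymmetric and transitive

def irreflexive_kernel(leq):
    """Strict order obtained by removing every pair of the form (a, a)."""
    return {(a, b) for (a, b) in leq if a != b}

def reflexive_closure(P, lt):
    """Non-strict order obtained by adjoining every pair of the form (a, a)."""
    return lt | {(a, a) for a in P}

P = {1, 2, 4}
leq = {(1, 1), (2, 2), (4, 4), (1, 2), (1, 4), (2, 4)}   # divisibility restricted to {1, 2, 4}
assert is_partial_order(P, leq)
lt = irreflexive_kernel(leq)                              # {(1, 2), (1, 4), (2, 4)}
assert reflexive_closure(P, lt) == leq                    # the correspondence is one-to-one
```

The final assertion mirrors the statement above that the irreflexive kernel and the reflexive closure are mutually inverse constructions on a finite example.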
Notation Given a set P and a partial order relation, typically the non-strict partial order ≤, we may uniquely extend our notation to define four partial order relations ≤, <, ≥, and >, where ≤ is a non-strict partial order relation on P, < is the associated strict partial order relation on P (the irreflexive kernel of ≤), ≥ is the dual of ≤, and > is the dual of <. Strictly speaking, the term partially ordered set refers to a set with all of these relations defined appropriately. But practically, one need only consider a single relation, (P, ≤) or (P, <), or, in rare instances, the non-strict and strict relations together, (P, ≤, <). The term ordered set is sometimes used as a shorthand for partially ordered set, as long as it is clear from the context that no other kind of order is meant. In particular, totally ordered sets can also be referred to as "ordered sets", especially in areas where these structures are more common than posets. Some authors use symbols other than ≤, such as ⊑ or ⪯, to distinguish partial orders from total orders. When referring to partial orders, ≤ should not be taken as the complement of >. The relation > is the converse of the irreflexive kernel of ≤, which is always a subset of the complement of ≤, but > is equal to the complement of ≤ if, and only if, ≤ is a total order. Alternative definitions Another way of defining a partial order, found in computer science, is via a notion of comparison. Specifically, given ≤, <, ≥, and > as defined previously, it can be observed that two elements x and y may stand in any of four mutually exclusive relationships to each other: either x < y, or x = y, or x > y, or x and y are incomparable. This can be represented by a function that returns one of four codes when given two elements. This definition is equivalent to a partial order on a setoid, where equality is taken to be a defined equivalence relation rather than set equality. Wallis defines a more general notion of a partial order relation as any homogeneous relation that is transitive and antisymmetric. This includes both reflexive and irreflexive partial orders as subtypes. A finite poset can be visualized through its Hasse diagram. Specifically, taking a strict partial order relation (P, <), a directed acyclic graph (DAG) may be constructed by taking each element of P to be a node and each element of < to be an edge. The transitive reduction of this DAG is then the Hasse diagram (a short illustrative sketch of this construction is given after the examples below). Similarly, this process can be reversed to construct strict partial orders from certain DAGs. In contrast, the graph associated to a non-strict partial order has self-loops at every node and therefore is not a DAG; when a non-strict order is said to be depicted by a Hasse diagram, actually the corresponding strict order is shown. Examples Standard examples of posets arising in mathematics include: The real numbers, or in general any totally ordered set, ordered by the standard less-than-or-equal relation ≤, is a partial order. On the real numbers ℝ, the usual less-than relation < is a strict partial order. The same is also true of the usual greater-than relation > on ℝ. By definition, every strict weak order is a strict partial order. The set of subsets of a given set (its power set) ordered by inclusion. Similarly, the set of sequences ordered by subsequence, and the set of strings ordered by substring. The set of natural numbers equipped with the relation of divisibility. The vertex set of a directed acyclic graph ordered by reachability. The set of subspaces of a vector space ordered by inclusion. 
For a partially ordered set P, the sequence space containing all sequences of elements from P, where sequence a precedes sequence b if every item in a precedes the corresponding item in b. Formally, (a_n) ≤ (b_n) if and only if a_n ≤ b_n for all n; that is, a componentwise order. For a set X and a partially ordered set P, the function space containing all functions from X to P, where f ≤ g if and only if f(x) ≤ g(x) for all x ∈ X. A fence, a partially ordered set defined by an alternating sequence of order relations a < b > c < d ... The set of events in special relativity and, in most cases, general relativity, where for two events X and Y, X ≤ Y if and only if Y is in the future light cone of X. An event Y can be causally affected by X only if X ≤ Y. One familiar example of a partially ordered set is a collection of people ordered by genealogical descendancy. Some pairs of people bear the descendant-ancestor relationship, but other pairs of people are incomparable, with neither being a descendant of the other. Orders on the Cartesian product of partially ordered sets In order of increasing strength, i.e., decreasing sets of pairs, three of the possible partial orders on the Cartesian product of two partially ordered sets are: the lexicographical order: (a, b) ≤ (c, d) if a < c or (a = c and b ≤ d); the product order: (a, b) ≤ (c, d) if a ≤ c and b ≤ d; the reflexive closure of the direct product of the corresponding strict orders: (a, b) ≤ (c, d) if (a < c and b < d) or (a = c and b = d). All three can similarly be defined for the Cartesian product of more than two sets. Applied to ordered vector spaces over the same field, the result is in each case also an ordered vector space.
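The difference between the product order and the lexicographical order defined above can be seen on a single pair of points. The following sketch (Python, with illustrative names and arbitrary example values) shows a pair that is incomparable in the product order yet comparable lexicographically.

```python
# Minimal sketch: product order vs. lexicographical order on pairs of integers.
def product_leq(p, q):
    (a, b), (c, d) = p, q
    return a <= c and b <= d                      # componentwise comparison

def lex_leq(p, q):
    (a, b), (c, d) = p, q
    return a < c or (a == c and b <= d)           # first coordinate decides, ties broken by the second

p, q = (1, 5), (2, 3)
print(product_leq(p, q), product_leq(q, p))       # False False: incomparable in the product order
print(lex_leq(p, q))                              # True: comparable in the lexicographical order
```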
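Returning to the Hasse diagram construction and the divisibility example mentioned earlier, this sketch (Python; hasse_edges is an illustrative name, and the choice of {1, ..., 12} is arbitrary) lists the covering relations of the divisibility poset by transitively reducing its strict order.

```python
# Minimal sketch: covering relations (Hasse diagram edges) of the divisibility poset on {1, ..., 12}.
# An edge (a, b) is kept only if no intermediate c satisfies a < c < b (transitive reduction).
def hasse_edges(P, lt):
    """lt(a, b) is the strict order 'a < b'; return the covering pairs."""
    return sorted((a, b) for a in P for b in P
                  if lt(a, b) and not any(lt(a, c) and lt(c, b) for c in P))

strictly_divides = lambda a, b: a != b and b % a == 0     # irreflexive kernel of divisibility
print(hasse_edges(range(1, 13), strictly_divides))
# (2, 4) appears as an edge, but (2, 8) does not, because 2 < 4 < 8 under divisibility
```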
Mathematics
Order theory
null
23576
https://en.wikipedia.org/wiki/Tetraodontidae
Tetraodontidae
Tetraodontidae is a family of primarily marine and estuarine fish of the order Tetraodontiformes. The family includes many familiar species variously called pufferfish, puffers, balloonfish, blowfish, blowers, blowies, bubblefish, globefish, swellfish, toadfish, toadies, toadle, honey toads, sugar toads, and sea squab. They are morphologically similar to the closely related porcupinefish, which have large external spines (unlike the thinner, hidden spines of the Tetraodontidae, which are only visible when the fish have puffed up). The majority of pufferfish species are toxic, with some among the most poisonous vertebrates in the world. In certain species, the internal organs, such as the liver, and sometimes the skin, contain tetrodotoxin, and are highly toxic to most animals when eaten; nevertheless, the meat of some species is considered a delicacy in Japan (as 河豚, pronounced fugu), Korea (as 복, bok, or 복어, bogeo), and China (as 河豚, hétún) when prepared by specially trained chefs who know which part is safe to eat and in what quantity. Other pufferfish species with nontoxic flesh, such as the northern puffer, Sphoeroides maculatus, of Chesapeake Bay, are considered a delicacy elsewhere. The species Torquigener albomaculosus was described by David Attenborough as "the greatest artist of the animal kingdom" due to the males' unique habit of wooing females by creating nests in sand composed of complex geometric designs. Taxonomy The family name comes from the name of its type genus Tetraodon; it is traced from the Greek words tetra meaning "four" and odoús meaning "teeth". Genera The Tetraodontidae contains 193 to 206 species of puffers in 27 or 28 genera: Amblyrhynchotes Troschel, 1856 Arothron Müller, 1841 Auriglobus Kottelat, 1999 Canthigaster Swainson, 1839 Carinotetraodon Benl, 1957 Chelonodon Müller, 1841 Chonerhinos Bleeker, 1854 Colomesus Gill, 1884 Contusus Whitley, 1947 Dichotomyctere Duméril, 1855 Ephippion Bibron, 1855 Feroxodon Su, Hardy et Tyler, 1986 Guentheridia Gilbert et Starks, 1904 Javichthys Hardy, 1985 Leiodon Swainson, 1839 Lagocephalus Swainson, 1839 Marilyna Hardy, 1982 Omegophora Whitley, 1934 Pelagocephalus Tyler & Paxton, 1979 Polyspina Hardy, 1983 Pao Kottelat, 2013 Reicheltia Hardy, 1982 Sphoeroides Anonymous, 1798 Takifugu Abe, 1949 Tetractenos Hardy, 1983 Tetraodon Linnaeus, 1758 Torquigener Whitley, 1930 Tylerius Hardy, 1984 Morphology Pufferfish are typically small to medium in size, although a few species such as the Mbu pufferfish can reach lengths greater than . Tetraodontiformes, or pufferfish, are most significantly characterized by the beak-like four teeth – hence the name combining the Greek terms "tetra" for four and "odous" for tooth. Each of the top and bottom arches is fused together with a visible midsagittal demarcation, which are used to break apart and consume small crustaceans. The lack of ribs, a pelvis, and pelvic fins are also unique to pufferfish. The notably missing bone and fin features are due to the pufferfish' specialized defense mechanism, expanding by sucking in water through an oral cavity. Pufferfish can also have many varied structures of caltrop-like dermal spines, which account for the replacement of typical fish scales, and can range in coverage extent from the entire body, to leaving the frontal surface empty. Tetraodontidae typically have smaller spines than the sister family Diodontidae, with some spines not being visible until inflation. 
Distribution They are most diverse in the tropics, relatively uncommon in the temperate zone, and completely absent from cold waters. Ecology and life history Most pufferfish species live in marine or brackish waters, but some can enter fresh water. About 35 species spend their entire lifecycles in fresh water. These freshwater species are found in disjunct tropical regions of South America (Colomesus asellus and Colomesus tocantinensis), Africa (six Tetraodon species), and Southeast Asia (Auriglobus, Carinotetraodon, Dichotomyctere, Leiodon and Pao). Natural defenses The puffer's unique and distinctive natural defenses help compensate for its slow locomotion. It moves by combining pectoral, dorsal, anal, and caudal fin motions. This makes it highly maneuverable, but very slow, so a comparatively easy predation target. Its tail fin is mainly used as a rudder, but it can be used for a sudden evasive burst of speed that shows none of the care and precision of its usual movements. The puffer's excellent eyesight, combined with this speed burst, is the first and most important defense against predators. The pufferfish's secondary defense mechanism, used if successfully pursued, is to fill its extremely elastic stomach with water (or air when outside the water) until it is much larger and almost spherical in shape. Even if they are not visible when the puffer is not inflated, all puffers have pointed spines, so a hungry predator may suddenly find itself facing an unpalatable, pointy ball rather than a slow, easy meal. Predators that do not heed this warning (or are "lucky" enough to catch the puffer suddenly, before or during inflation) may die from choking, and predators that do manage to swallow the puffer may find their stomachs full of tetrodotoxin (TTX), making puffers an unpleasant, possibly lethal, choice of prey. This neurotoxin is found primarily in the ovaries and liver, although smaller amounts exist in the intestines and skin, as well as trace amounts in muscle. It does not always have a lethal effect on large predators, such as sharks, but it can kill humans. Larval pufferfish are chemically defended by the presence of TTX on the surface of skin, which causes predators to spit them out. Not all puffers are necessarily poisonous; the flesh of the northern puffer is not toxic (a level of poison can be found in its viscera) and it is considered a delicacy in North America. Toxin level varies widely even in fish that are poisonous. A puffer's neurotoxin is not necessarily as toxic to other animals as it is to humans, and puffers are eaten routinely by some species of fish, such as lizardfish and sharks. Puffers are able to move their eyes independently, and many species can change the color or intensity of their patterns in response to environmental changes. In these respects, they are somewhat similar to the terrestrial chameleon. Although most puffers are drab, many have bright colors and distinctive markings, and make no attempt to hide from predators. This is likely an example of honestly signaled aposematism. Dolphins have been filmed expertly handling pufferfish amongst themselves in an apparent attempt to get intoxicated or enter a trance-like state. Reproduction Many marine puffers have a pelagic, or open-ocean, life stage. Spawning occurs after males slowly push females to the water surface or join females already present. The eggs are spherical and buoyant. Hatching occurs after roughly four days. 
The fry are tiny, but under magnification have a shape usually reminiscent of a pufferfish. They have a functional mouth and eyes, and must eat within a few days. Brackish-water puffers may breed in bays in a manner similar to marine species, or may breed more similarly to the freshwater species, in cases where they have moved far enough upriver. Reproduction in freshwater species varies quite a bit. The dwarf puffers court with males following females, possibly displaying the crests and keels unique to this subgroup of species. After the female accepts his advances, she will lead the male into plants or another form of cover, where she can release eggs for fertilization. The male may help her by rubbing against her side. This has been observed in captivity, and they are the only commonly captive-spawned puffer species. Target-group puffers have also been spawned in aquaria, and follow a similar courting behavior, minus the crest/keel display. Eggs are laid, though, on a flat piece of slate or other smooth, hard material, to which they adhere. The male will guard them until they hatch, carefully blowing water over them regularly to keep the eggs healthy. His parenting is finished when the young hatch and the fry are on their own. In 2012, males of the species Torquigener albomaculosus were documented while carving large and complex geometric, circular structures in the seabed sand in Amami Ōshima, Japan. The structures serve to attract females and to provide a safe place for them to lay their eggs. Information on breeding of specific species is very limited. T. nigroviridis, the green-spotted puffer, has recently been spawned artificially under captive conditions. It is believed to spawn in bays in a similar manner to saltwater species, as their sperm was found to be motile only at full marine salinities, but wild breeding has never been observed. Xenopterus naritus has been reported to be the first bred artificially in Sarawak, Northwestern Borneo, in June 2016, and the main purpose was for development of aquaculture of the species. Diet Pufferfish diets can vary depending on their environment. Traditionally, their diet consists mostly of algae and small invertebrates. They can survive on a completely vegetarian diet if their environment is lacking resources, but prefer an omnivorous food selection. Larger species of pufferfish are able to use their beak-like front teeth to break open clams, mussels, and other shellfish. Some species of pufferfish have also been known to enact various hunting techniques ranging from ambush to open-water hunting. Evolution The tetraodontids have been estimated to have diverged from diodontids between 89 and 138 million years ago. The four major clades diverged during the Cretaceous between 80 and 101 million years ago. The oldest known pufferfish genus is Eotetraodon, from the Lutetian epoch of Middle Eocene Europe, with fossils found in Monte Bolca and the Caucasus Mountains. The Monte Bolca species, E. pygmaeus, coexisted with several other tetraodontiforms, including an extinct species of diodontid, primitive boxfish (Proaracana and Eolactoria), and other, totally extinct forms, such as Zignoichthys and the spinacanthids. The extinct genus, Archaeotetraodon is known from Miocene-aged fossils from Europe. Poisoning Pufferfish can be lethal if not served properly. Puffer poisoning usually results from consumption of incorrectly prepared puffer soup, fugu chiri, or occasionally from raw puffer meat, sashimi fugu. 
While chiri is much more likely to cause death, sashimi fugu often causes intoxication, light-headedness, and numbness of the lips. Pufferfish tetrodotoxin deadens the tongue and lips, and induces dizziness and vomiting, followed by numbness and prickling over the body, rapid heart rate, decreased blood pressure, and muscle paralysis. The toxin paralyzes the diaphragm muscle and stops the person who has ingested it from breathing. People who live longer than 24 hours typically survive, although possibly after a coma lasting several days. The source of tetrodotoxin in puffers has been a matter of debate, but it is increasingly accepted that bacteria in the fish's intestinal tract are the source. Saxitoxin, the cause of paralytic shellfish poisoning and red tide, can also be found in certain puffers. Philippines In September 2012, the Bureau of Fisheries and Aquatic Resources in the Philippines issued a warning not to eat puffer fish, after local fishermen died upon consuming puffer fish for dinner. The warning indicated that puffer fish toxin is 100 times more potent than cyanide. Thailand Pufferfish, called pakapao in Thailand, are usually consumed by mistake. They are often cheaper than other fish, and because they contain inconsistent levels of toxins between fish and season, there is little awareness or monitoring of the danger. Consumers are regularly hospitalized and some even die from the poisoning. United States Cases of neurological symptoms, including numbness and tingling of the lips and mouth, have been reported to rise after the consumption of puffers caught in the area of Titusville, Florida, US. The symptoms generally resolve within hours to days, although one affected individual required intubation for 72 hours. As a result, Florida banned the harvesting of puffers from certain bodies of water. Treatment Treatment is mainly supportive and consists of intestinal decontamination with gastric lavage and activated charcoal, and life-support until the toxin is metabolized. Case reports suggest anticholinesterases such as edrophonium may be effective.
Biology and health sciences
Acanthomorpha
null
23577
https://en.wikipedia.org/wiki/Partial%20function
Partial function
In mathematics, a partial function f from a set X to a set Y is a function from a subset S of X (possibly the whole of X itself) to Y. The subset S, that is, the domain of f viewed as a function, is called the domain of definition or natural domain of f. If S equals X, that is, if f is defined on every element in X, then f is said to be a total function. In other words, a partial function is a binary relation over two sets that associates to every element of the first set at most one element of the second set; it is thus a univalent relation. This generalizes the concept of a (total) function by not requiring every element of the first set to be associated to an element of the second set. A partial function is often used when its exact domain of definition is not known, or is difficult to specify. However, even when the exact domain of definition is known, partial functions are often used for simplicity or brevity. This is the case in calculus, where, for example, the quotient of two functions is a partial function whose domain of definition cannot contain the zeros of the denominator; in this context, a partial function is generally simply called a function. In computability theory, a general recursive function is a partial function from the integers to the integers; no algorithm can exist for deciding whether an arbitrary such function is in fact total. When arrow notation is used for functions, a partial function f from X to Y is sometimes written as f : X ⇀ Y or f : X ↪ Y. However, there is no general convention, and the latter notation is more commonly used for inclusion maps or embeddings. Specifically, for a partial function f : X ⇀ Y and any x ∈ X, one has either: f(x) = y (a single element y in Y), or f(x) is undefined. For example, if f is the square root function restricted to the integers, defined by: f(n) = m if, and only if, m² = n (with m and n non-negative integers), then f(n) is only defined if n is a perfect square (that is, 0, 1, 4, 9, 16, ...). So f(25) = 5, but f(26) is undefined. Basic concepts A partial function arises from the consideration of maps between two sets X and Y that may not be defined on the entire set X. A common example is the square root operation on the real numbers ℝ: because negative real numbers do not have real square roots, the operation can be viewed as a partial function from ℝ to ℝ. The domain of definition of a partial function is the subset S of X on which the partial function is defined; in this case, the partial function may also be viewed as a function from S to Y. In the example of the square root operation, the set S consists of the nonnegative real numbers. The notion of partial function is particularly convenient when the exact domain of definition is unknown or even unknowable. For a computer-science example of the latter, see Halting problem. In case the domain of definition is equal to the whole set X, the partial function is said to be total. Thus, total partial functions from X to Y coincide with functions from X to Y. Many properties of functions can be extended to partial functions in an appropriate sense. A partial function is said to be injective, surjective, or bijective when the function given by the restriction of the partial function to its domain of definition is injective, surjective, or bijective respectively. Because a function is trivially surjective when restricted to its image, the term partial bijection denotes a partial function which is injective. An injective partial function may be inverted to an injective partial function, and a partial function which is both injective and surjective has an injective function as inverse. Furthermore, a function which is injective may be inverted to a bijective partial function. 
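To make the integer square-root example above concrete, here is a minimal Python sketch (not part of the original article); representing "undefined" by returning None is one common convention, and the name partial_sqrt is purely illustrative.
# Partial function: defined only on perfect squares, undefined (None) elsewhere.
from math import isqrt
from typing import Optional

def partial_sqrt(n: int) -> Optional[int]:
    """Return m with m*m == n, or None when f(n) is undefined."""
    if n < 0:
        return None           # undefined for negative integers
    m = isqrt(n)              # integer square root (floor)
    return m if m * m == n else None

print(partial_sqrt(25))   # 5    -> defined: 25 is a perfect square
print(partial_sqrt(26))   # None -> undefined: 26 is not a perfect square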
The notion of transformation can be generalized to partial functions as well. A partial transformation is a function where both and are subsets of some set Function spaces For convenience, denote the set of all partial functions from a set to a set by This set is the union of the sets of functions defined on subsets of with same codomain : the latter also written as In finite case, its cardinality is because any partial function can be extended to a function by any fixed value not contained in so that the codomain is an operation which is injective (unique and invertible by restriction). Discussion and examples The first diagram at the top of the article represents a partial function that is a function since the element 1 in the left-hand set is not associated with anything in the right-hand set. Whereas, the second diagram represents a function since every element on the left-hand set is associated with exactly one element in the right hand set. Natural logarithm Consider the natural logarithm function mapping the real numbers to themselves. The logarithm of a non-positive real is not a real number, so the natural logarithm function doesn't associate any real number in the codomain with any non-positive real number in the domain. Therefore, the natural logarithm function is not a function when viewed as a function from the reals to themselves, but it is a partial function. If the domain is restricted to only include the positive reals (that is, if the natural logarithm function is viewed as a function from the positive reals to the reals), then the natural logarithm is a function. Subtraction of natural numbers Subtraction of natural numbers (in which is the non-negative integers) is a partial function: It is defined only when Bottom element In denotational semantics a partial function is considered as returning the bottom element when it is undefined. In computer science a partial function corresponds to a subroutine that raises an exception or loops forever. The IEEE floating point standard defines a not-a-number value which is returned when a floating point operation is undefined and exceptions are suppressed, e.g. when the square root of a negative number is requested. In a programming language where function parameters are statically typed, a function may be defined as a partial function because the language's type system cannot express the exact domain of the function, so the programmer instead gives it the smallest domain which is expressible as a type and contains the domain of definition of the function. In category theory In category theory, when considering the operation of morphism composition in concrete categories, the composition operation is a total function if and only if has one element. The reason for this is that two morphisms and can only be composed as if that is, the codomain of must equal the domain of The category of sets and partial functions is equivalent to but not isomorphic with the category of pointed sets and point-preserving maps. One textbook notes that "This formal completion of sets and partial maps by adding “improper,” “infinite” elements was reinvented many times, in particular, in topology (one-point compactification) and in theoretical computer science." The category of sets and partial bijections is equivalent to its dual. It is the prototypical inverse category. In abstract algebra Partial algebra generalizes the notion of universal algebra to partial operations. 
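The subtraction-of-naturals and bottom-element examples above can be sketched in a few lines of Python (an illustrative sketch, not from the article): the partial version signals undefined inputs with an exception, while a "totalised" version maps them to None, playing the role of the bottom element.
from typing import Optional

def nat_sub(a: int, b: int) -> int:
    """Partial: defined only when a >= b, so the result stays a natural number."""
    if a < b:
        raise ValueError("undefined: a - b is not a natural number")
    return a - b

def nat_sub_total(a: int, b: int) -> Optional[int]:
    """Total completion: undefined cases are mapped to None (a 'bottom' value)."""
    return a - b if a >= b else None

print(nat_sub(7, 2))        # 5
print(nat_sub_total(2, 7))  # None, where nat_sub would raise an exception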
An example would be a field, in which the multiplicative inversion is the only proper partial operation (because division by zero is not defined). The set of all partial functions (partial transformations) on a given base set, forms a regular semigroup called the semigroup of all partial transformations (or the partial transformation semigroup on ), typically denoted by The set of all partial bijections on forms the symmetric inverse semigroup. Charts and atlases for manifolds and fiber bundles Charts in the atlases which specify the structure of manifolds and fiber bundles are partial functions. In the case of manifolds, the domain is the point set of the manifold. In the case of fiber bundles, the domain is the space of the fiber bundle. In these applications, the most important construction is the transition map, which is the composite of one chart with the inverse of another. The initial classification of manifolds and fiber bundles is largely expressed in terms of constraints on these transition maps. The reason for the use of partial functions instead of functions is to permit general global topologies to be represented by stitching together local patches to describe the global structure. The "patches" are the domains where the charts are defined.
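As a small illustration of why the partial transformations on a base set form a semigroup, the following sketch (assumed example, not from the article) composes two partial functions given as Python dicts; the composite is again a partial transformation on the same set.
def compose(g: dict, f: dict) -> dict:
    """g after f: defined at x only when f is defined at x and g is defined at f(x)."""
    return {x: g[f[x]] for x in f if f[x] in g}

base = {0, 1, 2, 3}
f = {0: 1, 1: 2}          # partial transformation on base (undefined at 2 and 3)
g = {2: 3, 1: 0}          # another partial transformation on base
print(compose(g, f))      # {0: 0, 1: 3} -- again a partial transformation on base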
Mathematics
Functions: General
null
23579
https://en.wikipedia.org/wiki/Photoelectric%20effect
Photoelectric effect
The photoelectric effect is the emission of electrons from a material caused by electromagnetic radiation such as ultraviolet light. Electrons emitted in this manner are called photoelectrons. The phenomenon is studied in condensed matter physics, solid state, and quantum chemistry to draw inferences about the properties of atoms, molecules and solids. The effect has found use in electronic devices specialized for light detection and precisely timed electron emission. The experimental results disagree with classical electromagnetism, which predicts that continuous light waves transfer energy to electrons, which would then be emitted when they accumulate enough energy. An alteration in the intensity of light would theoretically change the kinetic energy of the emitted electrons, with sufficiently dim light resulting in a delayed emission. The experimental results instead show that electrons are dislodged only when the light exceeds a certain frequency—regardless of the light's intensity or duration of exposure. Because a low-frequency beam at a high intensity does not build up the energy required to produce photoelectrons, as would be the case if light's energy accumulated over time from a continuous wave, Albert Einstein proposed that a beam of light is not a wave propagating through space, but a swarm of discrete energy packets, known as photons—term coined by Gilbert N. Lewis in 1926. Emission of conduction electrons from typical metals requires a few electron-volt (eV) light quanta, corresponding to short-wavelength visible or ultraviolet light. In extreme cases, emissions are induced with photons approaching zero energy, like in systems with negative electron affinity and the emission from excited states, or a few hundred keV photons for core electrons in elements with a high atomic number. Study of the photoelectric effect led to important steps in understanding the quantum nature of light and electrons and influenced the formation of the concept of wave–particle duality. Other phenomena where light affects the movement of electric charges include the photoconductive effect, the photovoltaic effect, and the photoelectrochemical effect. Emission mechanism The photons of a light beam have a characteristic energy, called photon energy, which is proportional to the frequency of the light. In the photoemission process, when an electron within some material absorbs the energy of a photon and acquires more energy than its binding energy, it is likely to be ejected. If the photon energy is too low, the electron is unable to escape the material. Since an increase in the intensity of low-frequency light will only increase the number of low-energy photons, this change in intensity will not create any single photon with enough energy to dislodge an electron. Moreover, the energy of the emitted electrons will not depend on the intensity of the incoming light of a given frequency, but only on the energy of the individual photons. While free electrons can absorb any energy when irradiated as long as this is followed by an immediate re-emission, like in the Compton effect, in quantum systems all of the energy from one photon is absorbed—if the process is allowed by quantum mechanics—or none at all. Part of the acquired energy is used to liberate the electron from its atomic binding, and the rest contributes to the electron's kinetic energy as a free particle. 
Because electrons in a material occupy many different quantum states with different binding energies, and because they can sustain energy losses on their way out of the material, the emitted electrons will have a range of kinetic energies. The electrons from the highest occupied states will have the highest kinetic energy. In metals, those electrons will be emitted from the Fermi level. When the photoelectron is emitted into a solid rather than into a vacuum, the term internal photoemission is often used, and emission into a vacuum is distinguished as external photoemission. Experimental observation of photoelectric emission Even though photoemission can occur from any material, it is most readily observed from metals and other conductors. This is because the process produces a charge imbalance which, if not neutralized by current flow, results in the increasing potential barrier until the emission completely ceases. The energy barrier to photoemission is usually increased by nonconductive oxide layers on metal surfaces, so most practical experiments and devices based on the photoelectric effect use clean metal surfaces in evacuated tubes. Vacuum also helps observing the electrons since it prevents gases from impeding their flow between the electrodes. Sunlight is an inconsistent and variable source of ultraviolet light. Cloud cover, ozone concentration, altitude, and surface reflection all alter the amount of UV. Laboratory sources of UV are based on xenon arc lamps or, for more uniform but weaker light, fluorescent lamps. More specialized sources include ultraviolet lasers and synchrotron radiation. The classical setup to observe the photoelectric effect includes a light source, a set of filters to monochromatize the light, a vacuum tube transparent to ultraviolet light, an emitting electrode (E) exposed to the light, and a collector (C) whose voltage VC can be externally controlled. A positive external voltage is used to direct the photoemitted electrons onto the collector. If the frequency and the intensity of the incident radiation are fixed, the photoelectric current I increases with an increase in the positive voltage, as more and more electrons are directed onto the electrode. When no additional photoelectrons can be collected, the photoelectric current attains a saturation value. This current can only increase with the increase of the intensity of light. An increasing negative voltage prevents all but the highest-energy electrons from reaching the collector. When no current is observed through the tube, the negative voltage has reached the value that is high enough to slow down and stop the most energetic photoelectrons of kinetic energy Kmax. This value of the retarding voltage is called the stopping potential or cut off potential Vo. Since the work done by the retarding potential in stopping the electron of charge e is eVo, the following must hold eVo = Kmax. The current-voltage curve is sigmoidal, but its exact shape depends on the experimental geometry and the electrode material properties. For a given metal surface, there exists a certain minimum frequency of incident radiation below which no photoelectrons are emitted. This frequency is called the threshold frequency. Increasing the frequency of the incident beam increases the maximum kinetic energy of the emitted photoelectrons, and the stopping voltage has to increase. The number of emitted electrons may also change because the probability that each photon results in an emitted electron is a function of photon energy. 
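The relation eVo = Kmax stated above means a measured stopping potential gives the maximum photoelectron kinetic energy directly. A small numerical sketch follows; the stopping-potential value is hypothetical, chosen only for illustration.
E_CHARGE = 1.602176634e-19      # elementary charge, in coulombs

def k_max_from_stopping_potential(v_stop_volts: float) -> float:
    """Maximum photoelectron kinetic energy in joules, from eVo = Kmax."""
    return E_CHARGE * v_stop_volts

v_stop = 1.25                   # assumed measured stopping potential, in volts
k_max_joules = k_max_from_stopping_potential(v_stop)
print(f"Kmax = {v_stop} eV = {k_max_joules:.3e} J")   # numerically, Kmax in eV equals Vo in volts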
An increase in the intensity of the same monochromatic light (so long as the intensity is not too high), which is proportional to the number of photons impinging on the surface in a given time, increases the rate at which electrons are ejected—the photoelectric current I—but the kinetic energy of the photoelectrons and the stopping voltage remain the same. For a given metal and frequency of incident radiation, the rate at which photoelectrons are ejected is directly proportional to the intensity of the incident light. The time lag between the incidence of radiation and the emission of a photoelectron is very small, less than 10−9 second. Angular distribution of the photoelectrons is highly dependent on polarization (the direction of the electric field) of the incident light, as well as the emitting material's quantum properties such as atomic and molecular orbital symmetries and the electronic band structure of crystalline solids. In materials without macroscopic order, the distribution of electrons tends to peak in the direction of polarization of linearly polarized light. The experimental technique that can measure these distributions to infer the material's properties is angle-resolved photoemission spectroscopy. Theoretical explanation In 1905, Einstein proposed a theory of the photoelectric effect using a concept that light consists of tiny packets of energy known as photons or light quanta. Each packet carries energy hf that is proportional to the frequency f of the corresponding electromagnetic wave. The proportionality constant h has become known as the Planck constant. In the range of kinetic energies of the electrons that are removed from their varying atomic bindings by the absorption of a photon of energy hf, the highest kinetic energy is Kmax = hf − W. Here, W is the minimum energy required to remove an electron from the surface of the material. It is called the work function of the surface and is sometimes denoted Φ or φ. If the work function is written as W = hf0, the formula for the maximum kinetic energy of the ejected electrons becomes Kmax = h(f − f0). Kinetic energy is positive, and f > f0 is required for the photoelectric effect to occur. The frequency f0 is the threshold frequency for the given material. Above that frequency, the maximum kinetic energy of the photoelectrons as well as the stopping voltage in the experiment rise linearly with the frequency, and have no dependence on the number of photons and the intensity of the impinging monochromatic light. Einstein's formula, however simple, explained all the phenomenology of the photoelectric effect, and had far-reaching consequences in the development of quantum mechanics. Photoemission from atoms, molecules and solids Electrons that are bound in atoms, molecules and solids each occupy distinct states of well-defined binding energies. When light quanta deliver more than this amount of energy to an individual electron, the electron may be emitted into free space with excess (kinetic) energy equal to the photon energy minus the electron's binding energy. The distribution of kinetic energies thus reflects the distribution of the binding energies of the electrons in the atomic, molecular or crystalline system: an electron emitted from the state at binding energy EB is found at kinetic energy hf − EB. This distribution is one of the main characteristics of the quantum system, and can be used for further studies in quantum chemistry and quantum physics. 
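The following short Python sketch works through Einstein's relation Kmax = hf − W introduced above. The work function of 4.5 eV is an assumed, illustrative value, not a figure from the article.
H_PLANCK = 6.62607015e-34   # Planck constant, J*s
E_CHARGE = 1.602176634e-19  # elementary charge, C (1 eV in joules)

def k_max_eV(frequency_hz: float, work_function_eV: float) -> float:
    """Maximum kinetic energy of photoelectrons in eV; a negative value means no emission."""
    return (H_PLANCK * frequency_hz) / E_CHARGE - work_function_eV

W = 4.5                                  # assumed work function, eV
f0 = W * E_CHARGE / H_PLANCK             # threshold frequency f0 = W/h
print(f"threshold frequency ~ {f0:.2e} Hz")   # ~1.1e15 Hz, in the ultraviolet
print(k_max_eV(1.5e15, W))               # above threshold: positive Kmax, emission occurs
print(k_max_eV(0.5e15, W))               # below threshold: negative, no emission at any intensity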
Models of photoemission from solids The electronic properties of ordered, crystalline solids are determined by the distribution of the electronic states with respect to energy and momentum—the electronic band structure of the solid. Theoretical models of photoemission from solids show that this distribution is, for the most part, preserved in the photoelectric effect. The phenomenological three-step model for ultraviolet and soft X-ray excitation decomposes the effect into these steps: Inner photoelectric effect in the bulk of the material that is a direct optical transition between an occupied and an unoccupied electronic state. This effect is subject to quantum-mechanical selection rules for dipole transitions. The hole left behind the electron can give rise to secondary electron emission, or the so-called Auger effect, which may be visible even when the primary photoelectron does not leave the material. In molecular solids phonons are excited in this step and may be visible as satellite lines in the final electron energy. Electron propagation to the surface in which some electrons may be scattered because of interactions with other constituents of the solid. Electrons that originate deeper in the solid are much more likely to suffer collisions and emerge with altered energy and momentum. Their mean-free path is a universal curve dependent on electron's energy. Electron escape through the surface barrier into free-electron-like states of the vacuum. In this step the electron loses energy in the amount of the work function of the surface, and suffers from the momentum loss in the direction perpendicular to the surface. Because the binding energy of electrons in solids is conveniently expressed with respect to the highest occupied state at the Fermi energy , and the difference to the free-space (vacuum) energy is the work function of the surface, the kinetic energy of the electrons emitted from solids is usually written as . There are cases where the three-step model fails to explain peculiarities of the photoelectron intensity distributions. The more elaborate one-step model treats the effect as a coherent process of photoexcitation into the final state of a finite crystal for which the wave function is free-electron-like outside of the crystal, but has a decaying envelope inside. History 19th century In 1839, Alexandre Edmond Becquerel discovered the related photovoltaic effect while studying the effect of light on electrolytic cells. Though not equivalent to the photoelectric effect, his work on photovoltaics was instrumental in showing a strong relationship between light and electronic properties of materials. In 1873, Willoughby Smith discovered photoconductivity in selenium while testing the metal for its high resistance properties in conjunction with his work involving submarine telegraph cables. Johann Elster (1854–1920) and Hans Geitel (1855–1923), students in Heidelberg, investigated the effects produced by light on electrified bodies and developed the first practical photoelectric cells that could be used to measure the intensity of light. They arranged metals with respect to their power of discharging negative electricity: rubidium, potassium, alloy of potassium and sodium, sodium, lithium, magnesium, thallium and zinc; for copper, platinum, lead, iron, cadmium, carbon, and mercury the effects with ordinary light were too small to be measurable. 
The order of the metals for this effect was the same as in Volta's series for contact-electricity, the most electropositive metals giving the largest photo-electric effect. In 1887, Heinrich Hertz observed the photoelectric effect and reported on the production and reception of electromagnetic waves. The receiver in his apparatus consisted of a coil with a spark gap, where a spark would be seen upon detection of electromagnetic waves. He placed the apparatus in a darkened box to see the spark better. However, he noticed that the maximum spark length was reduced when inside the box. A glass panel placed between the source of electromagnetic waves and the receiver absorbed ultraviolet radiation that assisted the electrons in jumping across the gap. When removed, the spark length would increase. He observed no decrease in spark length when he replaced the glass with quartz, as quartz does not absorb UV radiation. The discoveries by Hertz led to a series of investigations by Wilhelm Hallwachs, Hoor, Augusto Righi and Aleksander Stoletov on the effect of light, and especially of ultraviolet light, on charged bodies. Hallwachs connected a zinc plate to an electroscope. He allowed ultraviolet light to fall on a freshly cleaned zinc plate and observed that the zinc plate became uncharged if initially negatively charged, positively charged if initially uncharged, and more positively charged if initially positively charged. From these observations he concluded that some negatively charged particles were emitted by the zinc plate when exposed to ultraviolet light. With regard to the Hertz effect, the researchers from the start showed the complexity of the phenomenon of photoelectric fatigue—the progressive diminution of the effect observed upon fresh metallic surfaces. According to Hallwachs, ozone played an important part in the phenomenon, and the emission was influenced by oxidation, humidity, and the degree of polishing of the surface. It was at the time unclear whether fatigue is absent in a vacuum. In the period from 1888 until 1891, a detailed analysis of the photoeffect was performed by Aleksandr Stoletov with results reported in six publications. Stoletov invented a new experimental setup which was more suitable for a quantitative analysis of the photoeffect. He discovered a direct proportionality between the intensity of light and the induced photoelectric current (the first law of photoeffect or Stoletov's law). He measured the dependence of the intensity of the photo electric current on the gas pressure, where he found the existence of an optimal gas pressure corresponding to a maximum photocurrent; this property was used for the creation of solar cells. Many substances besides metals discharge negative electricity under the action of ultraviolet light. G. C. Schmidt and O. Knoblauch compiled a list of these substances. In 1897, J. J. Thomson investigated ultraviolet light in Crookes tubes. Thomson deduced that the ejected particles, which he called corpuscles, were of the same nature as cathode rays. These particles later became known as the electrons. Thomson enclosed a metal plate (a cathode) in a vacuum tube, and exposed it to high-frequency radiation. It was thought that the oscillating electromagnetic fields caused the atoms' field to resonate and, after reaching a certain amplitude, caused subatomic corpuscles to be emitted, and current to be detected. The amount of this current varied with the intensity and color of the radiation. 
Larger radiation intensity or frequency would produce more current. During the years 1886–1902, Wilhelm Hallwachs and Philipp Lenard investigated the phenomenon of photoelectric emission in detail. Lenard observed that a current flows through an evacuated glass tube enclosing two electrodes when ultraviolet radiation falls on one of them. As soon as ultraviolet radiation is stopped, the current also stops. This initiated the concept of photoelectric emission. The discovery of the ionization of gases by ultraviolet light was made by Philipp Lenard in 1900. As the effect was produced across several centimeters of air and yielded a greater number of positive ions than negative, it was natural to interpret the phenomenon, as J. J. Thomson did, as a Hertz effect upon the particles present in the gas. 20th century In 1902, Lenard observed that the energy of individual emitted electrons was independent of the applied light intensity. This appeared to be at odds with Maxwell's wave theory of light, which predicted that the electron energy would be proportional to the intensity of the radiation. Lenard observed the variation in electron energy with light frequency using a powerful electric arc lamp which enabled him to investigate large changes in intensity. However, Lenard's results were qualitative rather than quantitative because of the difficulty in performing the experiments: the experiments needed to be done on freshly cut metal so that the pure metal was observed, but it oxidized in a matter of minutes even in the partial vacuums he used. The current emitted by the surface was determined by the light's intensity, or brightness: doubling the intensity of the light doubled the number of electrons emitted from the surface. Initial investigation of the photoelectric effect in gasses by Lenard were followed up by J. J. Thomson and then more decisively by Frederic Palmer Jr. The gas photoemission was studied and showed very different characteristics than those at first attributed to it by Lenard. In 1900, while studying black-body radiation, the German physicist Max Planck suggested in his "On the Law of Distribution of Energy in the Normal Spectrum" paper that the energy carried by electromagnetic waves could only be released in packets of energy. In 1905, Albert Einstein published a paper advancing the hypothesis that light energy is carried in discrete quantized packets to explain experimental data from the photoelectric effect. Einstein theorized that the energy in each quantum of light was equal to the frequency of light multiplied by a constant, later called the Planck constant. A photon above a threshold frequency has the required energy to eject a single electron, creating the observed effect. This was a step in the development of quantum mechanics. In 1914, Robert A. Millikan's highly accurate measurements of the Planck constant from the photoelectric effect supported Einstein's model, even though a corpuscular theory of light was for Millikan, at the time, "quite unthinkable". Einstein was awarded the 1921 Nobel Prize in Physics for "his discovery of the law of the photoelectric effect", and Millikan was awarded the Nobel Prize in 1923 for "his work on the elementary charge of electricity and on the photoelectric effect". 
In quantum perturbation theory of atoms and solids acted upon by electromagnetic radiation, the photoelectric effect is still commonly analyzed in terms of waves; the two approaches are equivalent because photon or wave absorption can only happen between quantized energy levels whose energy difference is that of the energy of photon. Albert Einstein's mathematical description of how the photoelectric effect was caused by absorption of quanta of light was in one of his Annus Mirabilis papers, named "On a Heuristic Viewpoint Concerning the Production and Transformation of Light". The paper proposed a simple description of energy quanta, and showed how they explained the blackbody radiation spectrum. His explanation in terms of absorption of discrete quanta of light agreed with experimental results. It explained why the energy of photoelectrons was not dependent on incident light intensity. This was a theoretical leap, but the concept was strongly resisted at first because it contradicted the wave theory of light that followed naturally from James Clerk Maxwell's equations of electromagnetism, and more generally, the assumption of infinite divisibility of energy in physical systems. Einstein's work predicted that the energy of individual ejected electrons increases linearly with the frequency of the light. The precise relationship had not at that time been tested. By 1905 it was known that the energy of photoelectrons increases with increasing frequency of incident light and is independent of the intensity of the light. However, the manner of the increase was not experimentally determined until 1914 when Millikan showed that Einstein's prediction was correct. The photoelectric effect helped to propel the then-emerging concept of wave–particle duality in the nature of light. Light simultaneously possesses the characteristics of both waves and particles, each being manifested according to the circumstances. The effect was impossible to understand in terms of the classical wave description of light, as the energy of the emitted electrons did not depend on the intensity of the incident radiation. Classical theory predicted that the electrons would 'gather up' energy over a period of time, and then be emitted. Uses and effects Photomultipliers These are extremely light-sensitive vacuum tubes with a coated photocathode inside the envelope. The photo cathode contains combinations of materials such as cesium, rubidium, and antimony specially selected to provide a low work function, so when illuminated even by very low levels of light, the photocathode readily releases electrons. By means of a series of electrodes (dynodes) at ever-higher potentials, these electrons are accelerated and substantially increased in number through secondary emission to provide a readily detectable output current. Photomultipliers are still commonly used wherever low levels of light must be detected. Image sensors Video camera tubes in the early days of television used the photoelectric effect. For example, Philo Farnsworth's "Image dissector" used a screen charged by the photoelectric effect to transform an optical image into a scanned electronic signal. Photoelectron spectroscopy Because the kinetic energy of the emitted electrons is exactly the energy of the incident photon minus the energy of the electron's binding within an atom, molecule or solid, the binding energy can be determined by shining a monochromatic X-ray or UV light of a known energy and measuring the kinetic energies of the photoelectrons. 
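As a concrete illustration of the energy bookkeeping just described, the sketch below computes a binding energy as photon energy minus measured kinetic energy (minus the work function when energies are referenced to the Fermi level of a solid). The kinetic-energy and work-function numbers are assumed for illustration; the He I photon energy of 21.22 eV is a standard ultraviolet laboratory line.
def binding_energy_eV(photon_eV: float, kinetic_eV: float, work_function_eV: float = 0.0) -> float:
    """Binding energy of the emitted electron, in eV."""
    return photon_eV - kinetic_eV - work_function_eV

# He I line (21.22 eV), an assumed measured kinetic energy of 16.0 eV,
# and an assumed work function of 4.4 eV:
print(binding_energy_eV(21.22, 16.0, 4.4))   # ~0.82 eV below the Fermi level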
The distribution of electron energies is valuable for studying quantum properties of these systems. It can also be used to determine the elemental composition of the samples. For solids, the kinetic energy and emission angle distribution of the photoelectrons is measured for the complete determination of the electronic band structure in terms of the allowed binding energies and momenta of the electrons. Modern instruments for angle-resolved photoemission spectroscopy are capable of measuring these quantities with a precision better than 1 meV and 0.1°. Photoelectron spectroscopy measurements are usually performed in a high-vacuum environment, because the electrons would be scattered by gas molecules if they were present. However, some companies are now selling products that allow photoemission in air. The light source can be a laser, a discharge tube, or a synchrotron radiation source. The concentric hemispherical analyzer is a typical electron energy analyzer. It uses an electric field between two hemispheres to change (disperse) the trajectories of incident electrons depending on their kinetic energies. Night vision devices Photons hitting a thin film of alkali metal or semiconductor material such as gallium arsenide in an image intensifier tube cause the ejection of photoelectrons due to the photoelectric effect. These are accelerated by an electrostatic field where they strike a phosphor coated screen, converting the electrons back into photons. Intensification of the signal is achieved either through acceleration of the electrons or by increasing the number of electrons through secondary emissions, such as with a micro-channel plate. Sometimes a combination of both methods is used. Additional kinetic energy is required to move an electron out of the conduction band and into the vacuum level. This is known as the electron affinity of the photocathode and is another barrier to photoemission other than the forbidden band, explained by the band gap model. Some materials such as gallium arsenide have an effective electron affinity that is below the level of the conduction band. In these materials, electrons that move to the conduction band all have sufficient energy to be emitted from the material, so the film that absorbs photons can be quite thick. These materials are known as negative electron affinity materials. Spacecraft The photoelectric effect will cause spacecraft exposed to sunlight to develop a positive charge. This can be a major problem, as other parts of the spacecraft are in shadow which will result in the spacecraft developing a negative charge from nearby plasmas. The imbalance can discharge through delicate electrical components. The static charge created by the photoelectric effect is self-limiting, because a higher charged object does not give up its electrons as easily as a lower charged object does. Moon dust Light from the Sun hitting lunar dust causes it to become positively charged from the photoelectric effect. The charged dust then repels itself and lifts off the surface of the Moon by electrostatic levitation. This manifests itself almost like an "atmosphere of dust", visible as a thin haze and blurring of distant features, and visible as a dim glow after the sun has set. This was first photographed by the Surveyor program probes in the 1960s, and most recently the Chang'e 3 rover observed dust deposition on lunar rocks as high as about 28 cm. 
It is thought that the smallest particles are repelled kilometers from the surface and that the particles move in "fountains" as they charge and discharge. Competing processes and photoemission cross section When photon energies are as high as the electron rest energy of 511 keV, yet another process, Compton scattering, may occur. Above twice this energy, at 1.022 MeV, pair production is also more likely. Compton scattering and pair production are examples of two other competing mechanisms. Even if the photoelectric effect is the favoured reaction for a particular interaction of a single photon with a bound electron, the result is also subject to quantum statistics and is not guaranteed. The probability of the photoelectric effect occurring is measured by the cross section of the interaction, σ. This has been found to be a function of the atomic number of the target atom and photon energy. In a crude approximation, for photon energies above the highest atomic binding energy, the cross section is given by: σ ∝ Z^n / E^3. Here Z is the atomic number, E is the photon energy, and n is a number which varies between 4 and 5. The photoelectric effect rapidly decreases in significance in the gamma-ray region of the spectrum, with increasing photon energy. It is also more likely from elements with high atomic number. Consequently, high-Z materials make good gamma-ray shields, which is the principal reason why lead (Z = 82) is preferred and most widely used.
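A rough Python sketch of the scaling σ ∝ Z^n / E^3 quoted above; it is useful only for comparing materials and energies relative to one another, since the overall constant is omitted and n = 4.5 is an assumed mid-range value.
def relative_photoelectric_cross_section(Z: int, photon_energy_keV: float, n: float = 4.5) -> float:
    """Relative (unnormalised) photoelectric cross section, sigma ~ Z^n / E^3."""
    return Z**n / photon_energy_keV**3

lead, aluminium = 82, 13
ratio = (relative_photoelectric_cross_section(lead, 100.0)
         / relative_photoelectric_cross_section(aluminium, 100.0))
print(f"lead vs aluminium at the same photon energy: ~{ratio:.0f}x")   # roughly 4000x, hence lead shielding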
Physical sciences
Basics_9
null
23580
https://en.wikipedia.org/wiki/Paleogene
Paleogene
The Paleogene Period ( ; also spelled Palaeogene or Palæogene) is a geologic period and system that spans 43 million years from the end of the Cretaceous Period Ma (million years ago) to the beginning of the Neogene Period Ma. It is the first period of the Cenozoic Era, the tenth period of the Phanerozoic and is divided into the Paleocene, Eocene, and Oligocene epochs. The earlier term Tertiary Period was used to define the time now covered by the Paleogene Period and subsequent Neogene Period; despite no longer being recognized as a formal stratigraphic term, "Tertiary" still sometimes remains in informal use. Paleogene is often abbreviated "Pg", although the United States Geological Survey uses the abbreviation "" for the Paleogene on the Survey's geologic maps. Much of the world's modern vertebrate diversity originated in a rapid surge of diversification in the early Paleogene, as survivors of the Cretaceous–Paleogene extinction event took advantage of empty ecological niches left behind by the extinction of the non-avian dinosaurs, pterosaurs, marine reptiles, and primitive fish groups. Mammals continued to diversify from relatively small, simple forms into a highly diverse group ranging from small-bodied forms to very large ones, radiating into multiple orders and colonizing the air and marine ecosystems by the Eocene. Birds, the only surviving group of dinosaurs, quickly diversified from the very few neognath and paleognath clades that survived the extinction event, also radiating into multiple orders, colonizing different ecosystems and achieving an extreme level of morphological diversity. Percomorph fish, the most diverse group of vertebrates today, first appeared near the end of the Cretaceous but saw a very rapid radiation into their modern order and family-level diversity during the Paleogene, achieving a diverse array of morphologies. The Paleogene is marked by considerable changes in climate from the Paleocene–Eocene Thermal Maximum, through global cooling during the Eocene to the first appearance of permanent ice sheets in the Antarctic at the beginning of the Oligocene. Geology Stratigraphy The Paleogene is divided into three series/epochs: the Paleocene, Eocene, and Oligocene. These stratigraphic units can be defined globally or regionally. For global stratigraphic correlation, the International Commission on Stratigraphy (ICS) ratify global stages based on a Global Boundary Stratotype Section and Point (GSSP) from a single formation (a stratotype) identifying the lower boundary of the stage. Paleocene The Paleocene is the first series/epoch of the Paleogene and lasted from 66.0 Ma to 56.0 Ma. It is divided into three stages: the Danian 66.0 - 61.6 Ma; Selandian 61.6 - 59.2 Ma; and, Thanetian 59.2 - 56.0 Ma. The GSSP for the base of the Cenozoic, Paleogene and Paleocene is at Oued Djerfane, west of El Kef, Tunisia. It is marked by an iridium anomaly produced by an asteroid impact, and is associated with the Cretaceous–Paleogene extinction event. The boundary is defined as the rusty colored base of a 50 cm thick clay, which would have been deposited over only a few days. Similar layers are seen in marine and continental deposits worldwide. These layers include the iridium anomaly, microtektites, nickel-rich spinel crystals and shocked quartz, all indicators of a major extraterrestrial impact. The remains of the crater are found at Chicxulub on the Yucatan Peninsula in Mexico. 
The extinction of the non-avian dinosaurs, ammonites and dramatic changes in marine plankton and many other groups of organisms, are also used for correlation purposes. Eocene The Eocene is the second series/epoch of the Paleogene, and lasted from 56.0 Ma to 33.9 Ma. It is divided into four stages: the Ypresian 56.0 Ma to 47.8 Ma; Lutetian 47.8 Ma to 41.2 Ma; Bartonian 41.2 Ma to 37.71 Ma; and, Priabonian 37.71 Ma to 33.9 Ma. The GSSP for the base of the Eocene is at Dababiya, near Luxor, Egypt and is marked by the start of a significant variation in global carbon isotope ratios, produced by a major period of global warming. The change in climate was due to a rapid release of frozen methane clathrates from seafloor sediments at the beginning of the Paleocene-Eocene thermal maximum (PETM). Oligocene The Oligocene is the third and youngest series/epoch of the Paleogene, and lasted from 33.9 Ma to 23.03 Ma. It is divided into two stages: the Rupelian 33.9 Ma to 27.82 Ma; and, Chattian 27.82 - 23.03 Ma. The GSSP for the base of the Oligocene is at Massignano, near Ancona, Italy. The extinction the hantkeninid planktonic foraminifera is the key marker for the Eocene-Oligocene boundary, which was a time of climate cooling that led to widespread changes in fauna and flora. Palaeogeography The final stages of the breakup of Pangaea occurred during the Paleogene as Atlantic Ocean rifting and seafloor spreading extended northwards, separating the North America and Eurasian plates, and Australia and South America rifted from Antarctica, opening the Southern Ocean. Africa and India collided with Eurasia forming the Alpine-Himalayan mountain chains and the western margin of the Pacific plate changed from a divergent to convergent plate boundary. Alpine–Himalayan orogeny Alpine orogeny The Alpine orogeny developed in response to the collision between the African and Eurasian plates during the closing of the Neotethys Ocean and the opening of the Central Atlantic Ocean. The result was a series of arcuate mountain ranges, from the Tell-Rif-Betic cordillera in the western Mediterranean through the Alps, Carpathians, Apennines, Dinarides and Hellenides to the Taurides in the east. From the Late Cretaceous into the early Paleocene, Africa began to converge with Eurasia. The irregular outlines of the continental margins, including the Adriatic promontory (Adria) that extended north from the African plate, led to the development of several short subduction zones, rather than one long system. In the western Mediterranean, the European plate was subducted southwards beneath the African plate, whilst in the eastern Mediterranean, Africa was subducted beneath Eurasia along a northward dipping subduction zone. Convergence between the Iberian and European plates led to the Pyrenean orogeny and, as Adria pushed northwards the Alps and Carpathian orogens began to develop. The collision of Adria with Eurasia in the early Palaeocene was followed by a c.10 million year pause in the convergence of Africa and Eurasia, connected with the onset of the opening of the North Atlantic Ocean as Greenland rifted from the Eurasian plate in the Palaeocene. Convergence rates between Africa and Eurasia increased again in the early Eocene and the remaining oceanic basins between Adria and Europe closed. Between about 40 and 30 Ma, subduction began along the western Mediterranean arc of the Tell, Rif, Betic and Apennine mountain chains. 
The rate of convergence was less than the subduction rate of the dense lithosphere of the western Mediterranean and roll-back of the subducting slab led to the arcuate structure of these mountain ranges. In the eastern Mediterranean, c. 35 Ma, the Anatolide-Tauride platform (northern part of Adria) began to enter the trench leading to the development of the Dinarides, Hellenides and Tauride mountain chains as the passive margin sediments of Adria were scraped off onto the Eurasian crust during subduction. Zagros Mountains The Zagros mountain belt stretches for c. 2000 km from the eastern border of Iraq to the Makran coast in southern Iran. It formed as a result of the convergence and collision of the Arabian and Eurasian plates as the Neotethys Ocean closed and is composed of sediments scraped from the descending Arabian Plate. From the Late Cretaceous, a volcanic arc developed on the Eurasian margin as the Neotethys crust was subducted beneath it. A separate intra-oceanic subduction zone in the Neotethys resulted in the obduction of ocean crust onto the Arabian margin in the Late Cretaceous to Paleocene, with break-off of the subducted oceanic plate close to the Arabian margin occurring during the Eocene. Continental collision began during the Eocene c. 35 Ma and continued into the Oligocene to c. 26 Ma. Himalayan orogeny The Indian continent rifted from Madagascar at c. 83 Ma and drifted rapidly (c. 18 cm/yr in the Paleocene) northwards towards the southern margin of Eurasia. A rapid decrease in velocity to c. 5 cm/yr in the early Eocene records the collision of the Tethyan (Tibetan) Himalayas, the leading edge of Greater India, with the Lhasa terrane of Tibet (southern Eurasian margin), along the Indus-Yarlung-Zangbo suture zone. To the south of this zone, the Himalaya are composed of metasedimentary rocks scraped off the now subducted Indian continental crust and mantle lithosphere as the collision progressed. Palaeomagnetic data place the present-day Indian continent further south at the time of collision and decrease in plate velocity, indicating the presence of a large region to the north of India that has now been subducted beneath the Eurasian plate or incorporated into the mountain belt. This region, known as Greater India, formed by extension along the northern margin of India during the opening of the Neotethys. The Tethyan Himalaya block lay along its northern edge, with the Neotethys Ocean lying between it and southern Eurasia. Debate about the amount of deformation seen in the geological record in the India–Eurasia collision zone versus the size of Greater India, the timing and nature of the collision relative to the decrease in plate velocity, and explanations for the unusually high velocity of the Indian plate has led to several models for Greater India: 1) A Late Cretaceous to early Paleocene subduction zone may have lain between India and Eurasia in the Neotethys, dividing the region into two plates; subduction was followed by collision of India with Eurasia in the middle Eocene. In this model Greater India would have been less than 900 km wide; 2) Greater India may have formed a single plate, several thousand kilometres wide, with the Tethyan Himalaya microcontinent separated from the Indian continent by an oceanic basin. The microcontinent collided with southern Eurasia c. 58 Ma (late Paleocene), whilst the velocity of the plate did not decrease until c. 
50 Ma when subduction rates dropped as young oceanic crust entered the subduction zone; 3) This model assigns older dates to parts of Greater India, which changes its paleogeographic position relative to Eurasia and creates a Greater India formed of extended continental crust 2000–3000 km wide. South East Asia The Alpine-Himalayan Orogenic Belt in Southeast Asia extends from the Himalayas in India through Myanmar (West Burma block), Sumatra and Java to West Sulawesi. During the Late Cretaceous to Paleogene, the northward movement of the Indian plate led to the highly oblique subduction of the Neotethys along the edge of the West Burma block and the development of a major north-south transform fault along the margin of Southeast Asia to the south. Between c. 60 and 50 Ma, the leading northeastern edge of Greater India collided with the West Burma block, resulting in deformation and metamorphism. During the middle Eocene, north-dipping subduction resumed along the southern edge of Southeast Asia, from west Sumatra to West Sulawesi, as the Australian plate drifted slowly northwards. Collision between India and the West Burma block was complete by the late Oligocene. As the India-Eurasia collision continued, movement of material away from the collision zone was accommodated along, and extended, the already existing major strike-slip systems of the region. Atlantic Ocean During the Paleocene, seafloor spreading along the Mid-Atlantic Ridge propagated from the Central Atlantic northwards between North America and Greenland in the Labrador Sea (c. 62 Ma) and Baffin Bay (c. 57 Ma), and, by the early Eocene (c. 54 Ma), into the northeastern Atlantic between Greenland and Eurasia. Extension between North America and Eurasia, also in the early Eocene, led to the opening of the Eurasian Basin across the Arctic, which was linked to the Baffin Bay Ridge and Mid-Atlantic Ridge to the south via major strike-slip faults. From the Eocene and into the early Oligocene, Greenland acted as an independent plate moving northwards and rotating anticlockwise. This led to compression across the Canadian Arctic Archipelago, Svalbard and northern Greenland, resulting in the Eurekan orogeny. From c. 47 Ma, the eastern margin of Greenland was cut by the Reykjanes Ridge (the northeastern branch of the Mid-Atlantic Ridge) propagating northwards and splitting off the Jan Mayen microcontinent. After c. 33 Ma, seafloor spreading in the Labrador Sea and Baffin Bay gradually ceased, and seafloor spreading focused along the northeast Atlantic. By the late Oligocene, the plate boundary between North America and Eurasia was established along the Mid-Atlantic Ridge, with Greenland attached to the North American plate again, and the Jan Mayen microcontinent part of the Eurasian plate, where its remains now lie to the east and possibly beneath the southeast of Iceland. North Atlantic Large Igneous Province The North Atlantic Igneous Province stretches across the Greenland and northwest European margins and is associated with the proto-Icelandic mantle plume, which rose beneath the Greenland lithosphere at c. 65 Ma. There were two main phases of volcanic activity, with peaks at c. 60 Ma and c. 55 Ma.
Magmatism in the British and Northwest Atlantic volcanic provinces occurred mainly in the early Palaeocene, the latter associated with an increased spreading rate in the Labrador Sea, whilst northeast Atlantic magmatism occurred mainly during the early Eocene and is associated with a change in the spreading direction in the Labrador Sea and the northward drift of Greenland. The locations of the magmatism coincide with the intersection of the propagating rifts with large-scale, pre-existing lithospheric structures, which acted as channels to the surface for the magma. The arrival of the proto-Iceland plume has been considered the driving mechanism for rifting in the North Atlantic. However, the observations that rifting and initial seafloor spreading occurred prior to the arrival of the plume, that large-scale magmatism occurred at a distance from the rifting, and that rifting propagated towards, rather than away from, the plume have led to the suggestion that the plume and associated magmatism may have been a result, rather than a cause, of the plate tectonic forces that led to the propagation of rifting from the Central to the North Atlantic. Americas North America Mountain building continued along the North American Cordillera in response to subduction of the Farallon plate beneath the North American Plate. Along the central section of the North American margin, crustal shortening of the Cretaceous to Paleocene Sevier orogen lessened and deformation moved eastward. The decreasing dip of the subducting Farallon plate led to a flat-slab segment that increased friction between this and the base of the North American Plate. The resulting Laramide orogeny, which began the development of the Rocky Mountains, was a broad zone of thick-skinned deformation, with faults extending to mid-crustal depths and the uplift of basement rocks that lay to the east of the Sevier belt, and more than 700 km from the trench. With the Laramide uplift, the Western Interior Seaway was divided and then retreated. During the mid to late Eocene (50–35 Ma), plate convergence rates decreased and the dip of the Farallon slab began to steepen. Uplift ceased and the region was largely levelled by erosion. By the Oligocene, convergence gave way to extension, rifting and widespread volcanism across the Laramide belt. South America Ocean-continent convergence, accommodated by an east-dipping subduction zone in which the Farallon plate descended beneath the western edge of South America, continued from the Mesozoic. Over the Paleogene, changes in plate motion and episodes of regional slab shallowing and steepening resulted in variations in the magnitude of crustal shortening and amounts of magmatism along the length of the Andes. In the Northern Andes, an oceanic plateau with a volcanic arc was accreted during the latest Cretaceous and Paleocene, whilst the Central Andes were dominated by the subduction of oceanic crust and the Southern Andes were impacted by the subduction of the Farallon-East Antarctic ocean ridge. Caribbean The Caribbean plate is largely composed of oceanic crust of the Caribbean Large Igneous Province that formed during the Late Cretaceous. During the Late Cretaceous to Paleocene, subduction of Atlantic crust was established along its northern margin, whilst to the southwest, an island arc collided with the northern Andes, forming an east-dipping subduction zone where Caribbean lithosphere was subducted beneath the South American margin. During the Eocene (c. 45 Ma), subduction of the Farallon plate along the Central American subduction zone was (re)established.
Subduction along the northern section of the Caribbean volcanic arc ceased as the Bahamas carbonate platform collided with Cuba, and was replaced by strike-slip movements as a transform fault, extending from the Mid-Atlantic Ridge, connected with the northern boundary of the Caribbean Plate. Subduction now focused along the southern Caribbean arc (Lesser Antilles). By the Oligocene, the intra-oceanic Central American volcanic arc began to collide with northwestern South America. Pacific Ocean At the beginning of the Paleogene, the Pacific Ocean consisted of the Pacific, Farallon, Kula and Izanagi plates. The central Pacific plate grew by seafloor spreading as the other three plates were subducted and broken up. In the southern Pacific, seafloor spreading continued from the Late Cretaceous across the Pacific–Antarctic, Pacific–Farallon and Farallon–Antarctic mid-ocean ridges. The Izanagi–Pacific spreading ridge lay nearly parallel to the East Asian subduction zone, and between 60 and 50 Ma the spreading ridge began to be subducted. By c. 50 Ma, the Pacific plate was no longer surrounded by spreading ridges, but had a subduction zone along its western edge. This changed the forces acting on the Pacific plate and led to a major reorganisation of plate motions across the entire Pacific region. The resulting changes in stress between the Pacific and Philippine Sea plates initiated subduction along the Izu-Bonin-Mariana and Tonga-Kermadec arcs. Subduction of the Farallon plate beneath the American plates continued from the Late Cretaceous. The Kula–Farallon spreading ridge lay to its north until the Eocene (c. 55 Ma), when the northern section of the plate split, forming the Vancouver/Juan de Fuca plate. In the Oligocene (c. 28 Ma), the first segment of the Pacific–Farallon spreading ridge entered the North American subduction zone near Baja California, leading to major strike-slip movements and the formation of the San Andreas Fault. At the Paleogene-Neogene boundary, spreading ceased between the Pacific and Farallon plates and the Farallon plate split again, forming the present-day Nazca and Cocos plates. The Kula plate lay between the Pacific plate and North America. To the north and northwest it was being subducted beneath the Aleutian trench. Spreading between the Kula plate and the Pacific and Farallon plates ceased c. 40 Ma and the Kula plate became part of the Pacific Plate. Hawaii hotspot The Hawaiian-Emperor seamount chain formed above the Hawaiian hotspot. Originally thought to be stationary within the mantle, the hotspot is now considered to have drifted south during the Paleocene to early Eocene, as the Pacific plate moved north. At c. 47 Ma, movement of the hotspot ceased and the Pacific plate motion changed from northward to northwestward in response to the onset of subduction along its western margin. This resulted in a 60-degree bend in the seamount chain. Other seamount chains related to hotspots in the South Pacific show a similar change in orientation at this time. Antarctica Slow seafloor spreading continued between Australia and East Antarctica. Shallow-water channels probably developed south of Tasmania in the Eocene, opening the Tasmanian Passage, with deep ocean routes opening from the mid Oligocene. Rifting between the Antarctic Peninsula and the southern tip of South America formed the Drake Passage and opened the Southern Ocean during this time, completing the breakup of Gondwana.
The opening of these passages and the creation of the Southern Ocean established the Antarctic Circumpolar Current. Glaciers began to build across the Antarctic continent, which now lay isolated in the south polar region and surrounded by cold ocean waters. These changes contributed to the fall in global temperatures and the beginning of icehouse conditions. Red Sea and East Africa Extensional stresses from the subduction zone along the northern Neotethys resulted in rifting between Africa and Arabia, forming the Gulf of Aden in the late Eocene. To the west, in the early Oligocene, flood basalts erupted across Ethiopia, northeast Sudan and southwest Yemen as the Afar mantle plume began to impact the base of the African lithosphere. Rifting across the southern Red Sea began in the mid Oligocene, and across the central and northern Red Sea regions in the late Oligocene and early Miocene. Climate Climatic conditions varied considerably during the Paleogene. After the disruption caused by the Chicxulub impact settled, a period of cool and dry conditions continued from the Late Cretaceous. At the Paleocene-Eocene boundary, global temperatures rose rapidly with the onset of the Paleocene-Eocene Thermal Maximum (PETM). By the middle Eocene, temperatures began to drop again and by the late Eocene (c. 37 Ma) had decreased sufficiently for ice sheets to form in Antarctica. The global climate entered icehouse conditions at the Eocene-Oligocene boundary and the present-day Late Cenozoic ice age began. The Paleogene began with the brief but intense "impact winter" caused by the Chicxulub impact, which was followed by an abrupt period of warming. After temperatures stabilised, the steady cooling and drying of the Late Cretaceous-Early Paleogene Cool Interval that had spanned the last two ages of the Late Cretaceous continued, with only the brief interruption of the Latest Danian Event (c. 62.2 Ma), when global temperatures rose. There is no evidence for ice sheets at the poles during the Paleocene. The relatively cool conditions were brought to an end by the Thanetian Thermal Event and the beginning of the PETM. This was one of the warmest times of the Phanerozoic eon, during which global mean surface temperatures increased to 31.6 °C. According to a study published in 2018, from about 56 to 48 Ma, annual air temperatures over land and at mid-latitudes averaged about 23–29 °C (± 4.7 °C). For comparison, this was 10 to 15 °C higher than the current annual mean temperatures in these areas. This rapid rise in global temperatures and intense greenhouse conditions were due to a sudden increase in levels of atmospheric carbon dioxide (CO2) and other greenhouse gases. An accompanying rise in humidity is reflected in an increase in kaolinite in sediments, which forms by chemical weathering in hot, humid conditions. Tropical and subtropical forests flourished and extended into polar regions. Water vapour (a greenhouse gas) associated with these forests also contributed to the greenhouse conditions. The initial rise in global temperatures was related to the intrusion of magmatic sills into organic-rich sediments during volcanic activity in the North Atlantic Igneous Province, between about 56 and 54 Ma, which rapidly released large amounts of greenhouse gases into the atmosphere. This warming led to melting of frozen methane hydrates on continental slopes, adding further greenhouse gases.
It also reduced the rate of burial of organic matter, as higher temperatures accelerated the rate of bacterial decomposition, which released CO2 back into the oceans. The (relatively) sudden climatic changes associated with the PETM resulted in the extinction of some groups of fauna and flora and the rise of others. For example, with the warming of the Arctic Ocean, around 70% of deep-sea foraminifera species went extinct, whilst on land many modern mammals, including primates, appeared. Fluctuating sea levels meant that, during low stands, a land bridge formed across the Bering Strait between North America and Eurasia, allowing the movement of land animals between the two continents. The PETM was followed by the less severe Eocene Thermal Maximum 2 (c. 53.69 Ma), and the Eocene Thermal Maximum 3 (c. 53 Ma). The early Eocene warm conditions were brought to an end by the Azolla event. This change of climate, at about 48.5 Ma, is believed to have been caused by a proliferation of aquatic ferns from the genus Azolla, resulting in the sequestering of large amounts of CO2 from the atmosphere by the plants. From this time until about 34 Ma, there was a slow cooling trend known as the Middle-Late Eocene Cooling. As temperatures dropped at high latitudes, the presence of cold-water diatoms suggests that sea ice was able to form in winter in the Arctic Ocean, and by the late Eocene (c. 37 Ma) there is evidence of glaciation in Antarctica. Changes in deep ocean currents, as Australia and South America moved away from Antarctica, opening the Drake and Tasmanian passages, were responsible for the drop in global temperatures. The warm waters of the South Atlantic, Indian and South Pacific oceans extended southward into the opening Southern Ocean and became part of the cold circumpolar current. Dense polar waters sank into the deep oceans and moved northwards, reducing global ocean temperatures. This cooling may have occurred over less than 100,000 years and resulted in a widespread extinction in marine life. By the Eocene-Oligocene boundary, sediments deposited in the ocean from glaciers indicate the presence of an ice sheet in western Antarctica that extended to the ocean. The development of the circumpolar current led to changes in the oceans, which in turn reduced atmospheric CO2 further. Increasing upwellings of cold water stimulated the productivity of phytoplankton, and the cooler waters reduced the rate of bacterial decay of organic matter and promoted the growth of methane hydrates in marine sediments. This created a positive feedback cycle in which global cooling reduced atmospheric CO2, and this reduction in CO2 led to changes which further lowered global temperatures. The decrease in evaporation from the cooler oceans also reduced moisture in the atmosphere and increased aridity. By the early Oligocene, the North American and Eurasian tropical and subtropical forests were replaced by dry woodlands and widespread grasslands. The Early Oligocene Glacial Maximum lasted for about 200,000 years, and the global mean surface temperature continued to decrease gradually during the Rupelian. A drop in global sea levels during the mid Oligocene indicates major growth of the Antarctic glacial ice sheet. In the Late Oligocene, global temperatures began to warm slightly, though they continued to be significantly lower than during the previous epochs of the Paleogene, and polar ice remained.
Flora and fauna Tropical taxa diversified faster than those at higher latitudes after the Cretaceous–Paleogene extinction event, resulting in the development of a significant latitudinal diversity gradient. Mammals began a rapid diversification during this period. After the Cretaceous–Paleogene extinction event, which saw the demise of the non-avian dinosaurs, mammals began to evolve from a few small and generalized forms into most of the modern varieties we see presently. Some of these mammals evolved into large forms that dominated the land, while others became capable of living in marine, specialized terrestrial, and airborne environments. Those that adapted to the oceans became modern cetaceans and sirenians, while those that adapted to trees became primates, the group to which humans belong. Birds, extant dinosaurs which were already well established by the end of the Cretaceous, also experienced adaptive radiation as they took over the skies left empty by the now extinct pterosaurs. Some flightless birds such as penguins, ratites, and terror birds also filled niches left by the hesperornithes and other extinct dinosaurs. Myctophids first appeared in the Late Palaeocene or Early Eocene, and during the Eocene and most of the Oligocene were restricted to shelf seas before expanding their range into the open ocean during the warm climatic interval at the end of the Oligocene. Pronounced cooling in the Oligocene resulted in a massive floral shift, and many extant modern plants arose during this time. Grasses and herbs, such as Artemisia, began to proliferate at the expense of tropical plants, which began to decline. Conifer forests developed in mountainous areas. This cooling trend continued, with major fluctuations, until the end of the Pleistocene. The evidence for this floral shift is found in the palynological record.
Physical sciences
Geological periods
null
23582
https://en.wikipedia.org/wiki/Preorder
Preorder
In mathematics, especially in order theory, a preorder or quasiorder is a binary relation that is reflexive and transitive. The name is meant to suggest that preorders are almost partial orders, but not quite, as they are not necessarily antisymmetric. A natural example of a preorder is the divides relation "x divides y" between integers, polynomials, or elements of a commutative ring. For example, the divides relation is reflexive, as every integer divides itself. But the divides relation is not antisymmetric, because 1 divides −1 and −1 divides 1. It is to this preorder that "greatest" and "lowest" refer in the phrases "greatest common divisor" and "lowest common multiple" (except that, for integers, the greatest common divisor is also the greatest for the natural order of the integers). Preorders are closely related to equivalence relations and (non-strict) partial orders. Both of these are special cases of a preorder: an antisymmetric preorder is a partial order, and a symmetric preorder is an equivalence relation. Moreover, a preorder on a set S can equivalently be defined as an equivalence relation on S, together with a partial order on the set of equivalence classes. Like partial orders and equivalence relations, preorders (on a nonempty set) are never asymmetric. A preorder can be visualized as a directed graph, with elements of the set corresponding to vertices, and the order relation between pairs of elements corresponding to the directed edges between vertices. The converse is not true: most directed graphs are neither reflexive nor transitive. A preorder that is antisymmetric no longer has cycles; it is a partial order, and corresponds to a directed acyclic graph. A preorder that is symmetric is an equivalence relation; it can be thought of as having lost the direction markers on the edges of the graph. In general, a preorder's corresponding directed graph may have many disconnected components. As a binary relation, a preorder may be denoted ≲ or ≤. In words, when a ≲ b, one may say that b covers a, or that a precedes b, or that b reduces to a. Occasionally, the notation ← or → is also used. Definition Let ≲ be a binary relation on a set P, so that by definition, ≲ is some subset of P × P and the notation a ≲ b is used in place of (a, b) ∈ ≲. Then ≲ is called a preorder or quasiorder if it is reflexive and transitive; that is, if it satisfies: Reflexivity: a ≲ a for all a ∈ P, and Transitivity: if a ≲ b and b ≲ c then a ≲ c, for all a, b, c ∈ P. A set that is equipped with a preorder is called a preordered set (or proset). Preorders as partial orders on partitions Given a preorder ≲ on S, one may define an equivalence relation ~ on S such that a ~ b if and only if a ≲ b and b ≲ a. The resulting relation is reflexive since the preorder ≲ is reflexive; transitive by applying the transitivity of ≲ twice; and symmetric by definition. Using this relation, it is possible to construct a partial order on the quotient set of the equivalence, S / ~, which is the set of all equivalence classes of ~. If the preorder ≲ is obtained as the reflexive-transitive closure of a binary relation R, then S / ~ is the set of R-cycle equivalence classes: x ∈ [y] if and only if x = y or x is in an R-cycle with y. In any case, on S / ~ it is possible to define [x] ≤ [y] if and only if x ≲ y. That this is well-defined, meaning that its defining condition does not depend on which representatives of [x] and [y] are chosen, follows from the definition of ~. It is readily verified that this yields a partially ordered set. Conversely, from any partial order on a partition of a set S, it is possible to construct a preorder on S itself. There is a one-to-one correspondence between preorders and pairs (partition, partial order).
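As a concrete illustration of these definitions (a sketch added here, not part of the original article; the set and relation are chosen only for demonstration), the following Python fragment checks the preorder axioms for the divisibility relation on a few integers, builds the equivalence classes a ~ b (a and b divide each other), and lists the induced partial order on those classes:

    from itertools import product

    S = [1, -1, 2, -2, 3, 6]
    leq = lambda a, b: b % a == 0      # "a divides b" for nonzero integers

    # Preorder axioms: reflexivity and transitivity.
    assert all(leq(a, a) for a in S)
    assert all(leq(a, c) for a, b, c in product(S, repeat=3)
               if leq(a, b) and leq(b, c))

    # Equivalence classes of  a ~ b  iff  a <= b and b <= a.
    classes = []
    for a in S:
        cls = frozenset(b for b in S if leq(a, b) and leq(b, a))
        if cls not in classes:
            classes.append(cls)
    print(classes)                     # {1, -1}, {2, -2}, {3}, {6}

    # Induced partial order on the quotient: [a] <= [b] iff a <= b
    # (well defined: the choice of representatives does not matter).
    reps = [next(iter(c)) for c in classes]
    for a, b in product(reps, repeat=2):
        if leq(a, b):
            print(f"[{a}] <= [{b}]")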
Example: Let S be a formal theory, which is a set of sentences with certain properties (details of which can be found in the article on the subject). For instance, S could be a first-order theory (like Zermelo–Fraenkel set theory) or a simpler zeroth-order theory. One of the many properties of S is that it is closed under logical consequences so that, for instance, if a sentence A ∈ S logically implies some sentence B, which will be written as A ⊢ B and also as B ⊣ A, then necessarily B ∈ S (by modus ponens). The relation ⊣ is a preorder on S because A ⊣ A always holds and whenever A ⊣ B and B ⊣ C both hold then so does A ⊣ C. Furthermore, for any A, B ∈ S, A ⊣ B and B ⊣ A if and only if A ⇔ B; that is, two sentences are equivalent with respect to ⊣ if and only if they are logically equivalent. This particular equivalence relation is commonly denoted with its own special symbol ⇔, and so this symbol may be used instead of ~. The equivalence class of a sentence A, denoted by [A], consists of all sentences B ∈ S that are logically equivalent to A (that is, all B ∈ S such that A ⇔ B). The partial order on S / ⇔ induced by ⊣, which will also be denoted by the same symbol ⊣, is characterized by [A] ⊣ [B] if and only if A ⊣ B, where the right-hand side condition is independent of the choice of representatives A and B of the equivalence classes. All that has been said of ⊣ so far can also be said of its converse relation ⊢. The preordered set (S, ⊣) is a directed set because if A, B ∈ S and if C := A ∧ B denotes the sentence formed by logical conjunction ∧, then A ⊣ C and B ⊣ C, where C ∈ S. The partially ordered set (S / ⇔, ⊣) is consequently also a directed set. See Lindenbaum–Tarski algebra for a related example. Relationship to strict partial orders If reflexivity is replaced with irreflexivity (while keeping transitivity) then we get the definition of a strict partial order on P. For this reason, the term strict preorder is sometimes used for a strict partial order. That is, this is a binary relation < on P that satisfies: Irreflexivity or anti-reflexivity: not a < a for all a; that is, a < a is false for all a ∈ P, and Transitivity: if a < b and b < c then a < c, for all a, b, c ∈ P. Strict partial order induced by a preorder Any preorder ≲ gives rise to a strict partial order defined by a < b if and only if a ≲ b and not b ≲ a. Using the equivalence relation ~ introduced above, a < b if and only if a ≲ b and not a ~ b, and so the following holds: a ≲ b if and only if a < b or a ~ b. The relation < is a strict partial order and every strict partial order can be constructed this way. If the preorder ≲ is antisymmetric (and thus a partial order) then the equivalence ~ is equality (that is, a ~ b if and only if a = b) and so in this case, the definition of < can be restated as: a < b if and only if a ≲ b and a ≠ b. But importantly, this new condition is not used as (nor is it equivalent to) the general definition of the relation < (that is, < is not defined as: a < b if and only if a ≲ b and a ≠ b), because if the preorder ≲ is not antisymmetric then the resulting relation < would not be transitive (consider how equivalent non-equal elements relate). This is the reason for using the symbol "≲" instead of the "less than or equal to" symbol "≤", which might cause confusion for a preorder that is not antisymmetric since it might misleadingly suggest that a ≤ b implies a < b or a = b. Preorders induced by a strict partial order Using the construction above, multiple non-strict preorders can produce the same strict preorder <, so without more information about how < was constructed (such as knowledge of the equivalence relation ~, for instance), it might not be possible to reconstruct the original non-strict preorder from <. Possible (non-strict) preorders that induce the given strict preorder < include the following: Define a ≤ b as a < b or a = b (that is, take the reflexive closure of the relation).
This gives the partial order associated with the strict partial order "<" through reflexive closure; in this case the equivalence is equality =, so the symbols ≲ and ~ are not needed. Define a ≲ b as "not b < a" (that is, take the inverse complement of the relation), which corresponds to defining a ~ b as "neither a < b nor b < a"; these relations ≲ and ~ are in general not transitive; however, if they are then ~ is an equivalence; in that case "<" is a strict weak order. The resulting preorder is connected (formerly called total); that is, a total preorder. If a ≤ b then a ≲ b. The converse holds (that is, these two relations coincide) if and only if whenever a ≠ b then a < b or b < a. Examples Graph theory The reachability relationship in any directed graph (possibly containing cycles) gives rise to a preorder, where x ≲ y in the preorder if and only if there is a path from x to y in the directed graph. Conversely, every preorder is the reachability relationship of a directed graph (for instance, the graph that has an edge from x to y for every pair (x, y) with x ≲ y). However, many different graphs may have the same reachability preorder as each other. In the same way, reachability of directed acyclic graphs, directed graphs with no cycles, gives rise to partially ordered sets (preorders satisfying an additional antisymmetry property). The graph-minor relation is also a preorder. Computer science In computer science, one can find examples of the following preorders. Asymptotic order gives rise to a preorder over functions f : ℕ → ℕ. The corresponding equivalence relation is called asymptotic equivalence. Polynomial-time, many-one (mapping) and Turing reductions are preorders on complexity classes. Subtyping relations are usually preorders. Simulation preorders are preorders (hence the name). Reduction relations in abstract rewriting systems. The encompassment preorder on the set of terms, defined by s ≲ t if a subterm of t is a substitution instance of s. Theta-subsumption, which is when the literals in a disjunctive first-order formula are contained by another, after applying a substitution to the former. Category theory A category with at most one morphism from any object x to any other object y is a preorder. Such categories are called thin. Here the objects correspond to the elements of P, and there is one morphism for objects which are related, zero otherwise. In this sense, categories "generalize" preorders by allowing more than one relation between objects: each morphism is a distinct (named) preorder relation. Alternately, a preordered set can be understood as an enriched category, enriched over the category 2 = (0 → 1). Other Further examples: Every finite topological space gives rise to a preorder on its points by defining x ≲ y if and only if x belongs to every neighborhood of y. Every finite preorder can be formed as the specialization preorder of a topological space in this way. That is, there is a one-to-one correspondence between finite topologies and finite preorders. However, the relation between infinite topological spaces and their specialization preorders is not one-to-one. A net is a directed preorder, that is, each pair of elements has an upper bound. The definition of convergence via nets is important in topology, where preorders cannot be replaced by partially ordered sets without losing important features. The relation defined by x ≲ y if f(x) ≲ f(y), where f is a function into some preorder. The relation defined by x ≲ y if there exists some injection from x to y. Injection may be replaced by surjection, or any type of structure-preserving function, such as ring homomorphism, or permutation. The embedding relation for countable total orderings.
Example of a total preorder: Preference, according to common models. Constructions Every binary relation R on a set S can be extended to a preorder on S by taking its transitive closure and reflexive closure. The transitive closure indicates path connection in R: x is related to y in the transitive closure if and only if there is an R-path from x to y. Left residual preorder induced by a binary relation: Given a binary relation R, the complemented composition R \ R = ¬(Rᵀ ∘ ¬R) forms a preorder called the left residual, where Rᵀ denotes the converse relation of R and ¬R denotes the complement relation of R, while ∘ denotes relation composition. Related definitions If a preorder is also antisymmetric, that is, a ≲ b and b ≲ a implies a = b, then it is a partial order. On the other hand, if it is symmetric, that is, if a ≲ b implies b ≲ a, then it is an equivalence relation. A preorder is total if a ≲ b or b ≲ a for all a, b. A preordered class is a class equipped with a preorder. Every set is a class and so every preordered set is a preordered class. Uses Preorders play a pivotal role in several situations: Every preorder can be given a topology, the Alexandrov topology; and indeed, every preorder on a set is in one-to-one correspondence with an Alexandrov topology on that set. Preorders may be used to define interior algebras. Preorders provide the Kripke semantics for certain types of modal logic. Preorders are used in forcing in set theory to prove consistency and independence results. Number of preorders As explained above, there is a 1-to-1 correspondence between preorders and pairs (partition, partial order). Thus the number of preorders is the sum of the number of partial orders on every partition. For example, on a three-element set there are 29 preorders: the one-block partition contributes 1 (the only partial order on one element), the three partitions into a pair and a singleton contribute 3 × 3 = 9 (there are 3 partial orders on two elements), and the discrete partition contributes 19 (the number of partial orders on three elements), giving 1 + 9 + 19 = 29. Interval For a ≲ b, the interval [a, b] is the set of points x satisfying a ≲ x and x ≲ b, also written a ≲ x ≲ b. It contains at least the points a and b. One may choose to extend the definition to all pairs (a, b). The extra intervals are all empty. Using the corresponding strict relation "<", one can also define the interval (a, b) as the set of points x satisfying a < x and x < b, also written a < x < b. An open interval may be empty even if a < b. Also [a, b) and (a, b] can be defined similarly.
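The graph-theoretic reading above (reachability) and the closure construction can be combined in a few lines of Python. The sketch below (illustrative only; the graph is an invented example) computes the reflexive-transitive closure of an edge relation with Warshall's algorithm, yielding the reachability preorder in which the vertices of a cycle become mutually related:

    nodes = ["a", "b", "c", "d"]
    edges = {("a", "b"), ("b", "c"), ("c", "a"), ("c", "d")}   # a -> b -> c -> a is a cycle

    reach = {(x, x) for x in nodes} | set(edges)   # reflexive closure of the edge relation
    for k in nodes:                                # Warshall's algorithm: transitive closure
        for i in nodes:
            for j in nodes:
                if (i, k) in reach and (k, j) in reach:
                    reach.add((i, j))

    # reach is now a preorder: a, b and c reach one another, so they form a single
    # equivalence class in the associated partial order, with d sitting above it.
    print(sorted(reach))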
Mathematics
Order theory
null
23593
https://en.wikipedia.org/wiki/Pediatrics
Pediatrics
Pediatrics (American English) also spelled paediatrics (British English), is the branch of medicine that involves the medical care of infants, children, adolescents, and young adults. In the United Kingdom, pediatrics covers many of their youth until the age of 18. The American Academy of Pediatrics recommends people seek pediatric care through the age of 21, but some pediatric subspecialists continue to care for adults up to 25. Worldwide age limits of pediatrics have been trending upward year after year. A medical doctor who specializes in this area is known as a pediatrician, or paediatrician. The word pediatrics and its cognates mean "healer of children", derived from the two Greek words: (pais "child") and (iatros "doctor, healer"). Pediatricians work in clinics, research centers, universities, general hospitals and children's hospitals, including those who practice pediatric subspecialties (e.g. neonatology requires resources available in a NICU). History The earliest mentions of child-specific medical problems appear in the Hippocratic Corpus, published in the fifth century B.C., and the famous Sacred Disease. These publications discussed topics such as childhood epilepsy and premature births. From the first to fourth centuries A.D., Greek philosophers and physicians Celsus, Soranus of Ephesus, Aretaeus, Galen, and Oribasius, also discussed specific illnesses affecting children in their works, such as rashes, epilepsy, and meningitis. Already Hippocrates, Aristotle, Celsus, Soranus, and Galen understood the differences in growing and maturing organisms that necessitated different treatment: ("In general, boys should not be treated in the same way as men"). Some of the oldest traces of pediatrics can be discovered in Ancient India where children's doctors were called kumara bhrtya. Even though some pediatric works existed during this time, they were scarce and rarely published due to a lack of knowledge in pediatric medicine. Sushruta Samhita, an ayurvedic text composed during the sixth century BCE, contains the text about pediatrics. Another ayurvedic text from this period is Kashyapa Samhita. A second century AD manuscript by the Greek physician and gynecologist Soranus of Ephesus dealt with neonatal pediatrics. Byzantine physicians Oribasius, Aëtius of Amida, Alexander Trallianus, and Paulus Aegineta contributed to the field. The Byzantines also built brephotrophia (crêches). Islamic Golden Age writers served as a bridge for Greco-Roman and Byzantine medicine and added ideas of their own, especially Haly Abbas, Yahya Serapion, Abulcasis, Avicenna, and Averroes. The Persian philosopher and physician al-Razi (865–925), sometimes called the father of pediatrics, published a monograph on pediatrics titled Diseases in Children. Also among the first books about pediatrics was Libellus [Opusculum] de aegritudinibus et remediis infantium 1472 ("Little Book on Children Diseases and Treatment"), by the Italian pediatrician Paolo Bagellardo. In sequence came Bartholomäus Metlinger's Ein Regiment der Jungerkinder 1473, Cornelius Roelans (1450–1525) no title Buchlein, or Latin compendium, 1483, and Heinrich von Louffenburg (1391–1460) Versehung des Leibs written in 1429 (published 1491), together form the Pediatric Incunabula, four great medical treatises on children's physiology and pathology. While more information about childhood diseases became available, there was little evidence that children received the same kind of medical care that adults did. 
It was during the seventeenth and eighteenth centuries that medical experts started offering specialized care for children. The Swedish physician Nils Rosén von Rosenstein (1706–1773) is considered to be the founder of modern pediatrics as a medical specialty, while his work The diseases of children, and their remedies (1764) is considered to be "the first modern textbook on the subject". However, it was not until the nineteenth century that medical professionals acknowledged pediatrics as a separate field of medicine. The first pediatric-specific publications appeared between the 1790s and the 1920s. Etymology The term pediatrics was first introduced in English in 1859 by Abraham Jacobi. In 1860, he became "the first dedicated professor of pediatrics in the world." Jacobi is known as the father of American pediatrics because of his many contributions to the field. He received his medical training in Germany and later practiced in New York City. The first generally accepted pediatric hospital is the Hôpital des Enfants Malades (), which opened in Paris in June 1802 on the site of a previous orphanage. From its beginning, this famous hospital accepted patients up to the age of fifteen years, and it continues to this day as the pediatric division of the Necker-Enfants Malades Hospital, created in 1920 by merging with the nearby Necker Hospital, founded in 1778. In other European countries, the Charité (a hospital founded in 1710) in Berlin established a separate Pediatric Pavilion in 1830, followed by similar institutions at Saint Petersburg in 1834, and at Vienna and Breslau (now Wrocław), both in 1837. In 1852 Britain's first pediatric hospital, the Hospital for Sick Children, Great Ormond Street was founded by Charles West. The first Children's hospital in Scotland opened in 1860 in Edinburgh. In the US, the first similar institutions were the Children's Hospital of Philadelphia, which opened in 1855, and then Boston Children's Hospital (1869). Subspecialties in pediatrics were created at the Harriet Lane Home at Johns Hopkins by Edwards A. Park. Differences between adult and pediatric medicine The body size differences are paralleled by maturation changes. The smaller body of an infant or neonate is substantially different physiologically from that of an adult. Congenital defects, genetic variance, and developmental issues are of greater concern to pediatricians than they often are to adult physicians. A common adage is that children are not simply "little adults". The clinician must take into account the immature physiology of the infant or child when considering symptoms, prescribing medications, and diagnosing illnesses. Pediatric physiology directly impacts the pharmacokinetic properties of drugs that enter the body. The absorption, distribution, metabolism, and elimination of medications differ between developing children and grown adults. Despite completed studies and reviews, continual research is needed to better understand how these factors should affect the decisions of healthcare providers when prescribing and administering medications to the pediatric population. Absorption Many drug absorption differences between pediatric and adult populations revolve around the stomach. Neonates and young infants have increased stomach pH due to decreased acid secretion, thereby creating a more basic environment for drugs that are taken by mouth. Acid is essential to degrading certain oral drugs before systemic absorption. 
Therefore, the absorption of these drugs in children is greater than in adults due to decreased breakdown and increased preservation in a less acidic gastric space. Children also have a prolonged gastric emptying time, which slows the rate of drug absorption. Drug absorption also depends on specific enzymes that come in contact with the oral drug as it travels through the body. The supply of these enzymes increases as children continue to develop their gastrointestinal tract. Pediatric patients have underdeveloped proteins, which leads to decreased metabolism and increased serum concentrations of specific drugs. However, prodrugs experience the opposite effect because enzymes are necessary for allowing their active form to enter systemic circulation. Distribution The percentages of total body water and extracellular fluid volume both decrease as children grow and develop with time. Pediatric patients thus have a larger volume of distribution than adults, which directly affects the dosing of hydrophilic drugs such as beta-lactam antibiotics like ampicillin. Thus, these drugs are administered at greater weight-based doses or with adjusted dosing intervals in children to account for this key difference in body composition. Infants and neonates also have fewer plasma proteins. Thus, highly protein-bound drugs have fewer opportunities for protein binding, leading to increased distribution. Metabolism Drug metabolism primarily occurs via enzymes in the liver and can vary according to which specific enzymes are affected in a specific stage of development. Phase I and Phase II enzymes have different rates of maturation and development, depending on their specific mechanism of action (i.e. oxidation, hydrolysis, acetylation, methylation, etc.). Enzyme capacity, clearance, and half-life are all factors that contribute to metabolism differences between children and adults. Drug metabolism can even differ within the pediatric population, separating neonates and infants from young children. Elimination Drug elimination is primarily facilitated via the liver and kidneys. In infants and young children, the larger relative size of their kidneys leads to increased renal clearance of medications that are eliminated through urine. In preterm neonates and infants, the kidneys are slower to mature and thus are unable to clear as much drug as fully developed kidneys. This can cause unwanted drug build-up, which is why it is important to consider lower doses and longer dosing intervals for this population. Diseases that negatively affect kidney function can also have the same effect and thus warrant similar considerations. Pediatric autonomy in healthcare A major difference between the practice of pediatric and adult medicine is that children, in most jurisdictions and with certain exceptions, cannot make decisions for themselves. The issues of guardianship, privacy, legal responsibility, and informed consent must always be considered in every pediatric procedure. Pediatricians often have to treat the parents and sometimes the family, rather than just the child. Adolescents are in their own legal class, having rights to their own health care decisions in certain circumstances. The concept of legal consent combined with the non-legal consent (assent) of the child when considering treatment options, especially in the face of conditions with poor prognosis or complicated and painful procedures/surgeries, means the pediatrician must take into account the desires of many people, in addition to those of the patient.
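The pharmacokinetic differences outlined above (volume of distribution, clearance, and half-life) can be made concrete with a standard one-compartment model. The sketch below is purely illustrative: it is not drawn from the text above, it is not a dosing guide, and every number in it is invented for demonstration.

    import math

    def concentration(dose_mg, vd_l_per_kg, cl_l_per_h_per_kg, weight_kg, t_h):
        """Plasma concentration (mg/L) t hours after an IV bolus, one-compartment model."""
        vd = vd_l_per_kg * weight_kg        # volume of distribution (L)
        cl = cl_l_per_h_per_kg * weight_kg  # clearance (L/h)
        k = cl / vd                         # elimination rate constant (1/h)
        return (dose_mg / vd) * math.exp(-k * t_h)

    # A larger weight-normalised volume of distribution (as described for hydrophilic
    # drugs in young children) lowers the peak concentration for the same mg/kg dose,
    # while lower clearance (immature kidneys) lengthens the half-life
    # t1/2 = ln(2) * Vd / CL. All parameter values here are hypothetical.
    for label, vd, cl in [("infant-like", 0.5, 0.05), ("adult-like", 0.3, 0.10)]:
        peak = concentration(dose_mg=250, vd_l_per_kg=vd, cl_l_per_h_per_kg=cl,
                             weight_kg=10, t_h=0.0)
        t_half = math.log(2) * vd / cl
        print(f"{label}: peak ~{peak:.0f} mg/L, half-life ~{t_half:.1f} h")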
History of pediatric autonomy The term autonomy is traceable to ethical theory and law, where it states that autonomous individuals can make decisions based on their own logic. Hippocrates was the first to use the term in a medical setting. He created a code of ethics for doctors called the Hippocratic Oath that highlighted the importance of putting patients' interests first, making autonomy for patients a top priority in health care.   In ancient times, society did not view pediatric medicine as essential or scientific. Experts considered professional medicine unsuitable for treating children. Children also had no rights. Fathers regarded their children as property, so their children's health decisions were entrusted to them. As a result, mothers, midwives, "wise women", and general practitioners treated the children instead of doctors. Since mothers could not rely on professional medicine to take care of their children, they developed their own methods, such as using alkaline soda ash to remove the vernix at birth and treating teething pain with opium or wine. The absence of proper pediatric care, rights, and laws in health care to prioritize children's health led to many of their deaths. Ancient Greeks and Romans sometimes even killed healthy female babies and infants with deformities since they had no adequate medical treatment and no laws prohibiting infanticide. In the twentieth century, medical experts began to put more emphasis on children's rights. In 1989, in the United Nations Rights of the Child Convention, medical experts developed the Best Interest Standard of Child to prioritize children's rights and best interests. This event marked the onset of pediatric autonomy. In 1995, the American Academy of Pediatrics (AAP) finally acknowledged the Best Interest Standard of a Child as an ethical principle for pediatric decision-making, and it is still being used today. Parental authority and current medical issues The majority of the time, parents have the authority to decide what happens to their child. Philosopher John Locke argued that it is the responsibility of parents to raise their children and that God gave them this authority. In modern society, Jeffrey Blustein, modern philosopher and author of the book Parents and Children: The Ethics of Family, argues that parental authority is granted because the child requires parents to satisfy their needs. He believes that parental autonomy is more about parents providing good care for their children and treating them with respect than parents having rights. The researcher Kyriakos Martakis, MD, MSc, explains that research shows parental influence negatively affects children's ability to form autonomy. However, involving children in the decision-making process allows children to develop their cognitive skills and create their own opinions and, thus, decisions about their health. Parental authority affects the degree of autonomy the child patient has. As a result, in Argentina, the new National Civil and Commercial Code has enacted various changes to the healthcare system to encourage children and adolescents to develop autonomy. It has become more crucial to let children take accountability for their own health decisions. In most cases, the pediatrician, parent, and child work as a team to make the best possible medical decision. The pediatrician has the right to intervene for the child's welfare and seek advice from an ethics committee. 
However, in recent studies, authors have denied that complete autonomy is present in pediatric healthcare. The same moral standards should apply to children as they do to adults. In support of this idea is the concept of paternalism, which negates autonomy when it is in the patient's interests. This concept aims to keep the child's best interests in mind regarding autonomy. Pediatricians can interact with patients and help them make decisions that will benefit them, thus enhancing their autonomy. However, radical theories that question a child's moral worth continue to be debated today. Authors often question whether the treatment of a child and an adult should be the same. Author Tamar Schapiro notes that children need nurturing and cannot exercise the same level of authority as adults. Hence, the discussion of whether children are capable of making important health decisions continues to this day. Modern advancements According to the Subcommittee of Clinical Ethics of the Argentinean Pediatric Society (SAP), children can understand moral feelings at all ages and can make reasonable decisions based on those feelings. Therefore, children and teens are deemed capable of making their own health decisions when they reach the age of 13. Recently, studies of children's decision-making have argued that this age should be lowered to 12. Technology has made several modern advancements that contribute to the future development of child autonomy, for example, unsolicited findings (U.F.s) of pediatric exome sequencing. These are findings from pediatric exome sequencing that explain in greater detail the intellectual disability of a child and predict to what extent it will affect the child in the future. Genetic and intellectual disorders in children make them incapable of making moral decisions, so people look down upon this kind of testing because the child's future autonomy is at risk. It is still in question whether parents should request these types of testing for their children. Medical experts argue that it could endanger the autonomous rights the child will possess in the future. However, the parents contend that genetic testing would benefit the welfare of their children since it would allow them to make better health care decisions. Exome sequencing for children, and the decision to grant parents the right to request it, is a medical ethics issue that many still debate today. Education requirements Aspiring medical students will need 4 years of undergraduate courses at a college or university, which will earn them a BS, BA or other bachelor's degree. After completing college, future pediatricians will need to attend 4 years of medical school (MD/DO/MBBS) and later complete 3 more years of residency training, the first year of which is called "internship." After completing the 3 years of residency, physicians are eligible to become certified in pediatrics by passing a rigorous test that deals with medical conditions related to young children. In high school, future pediatricians are required to take basic science classes such as biology, chemistry, physics, algebra, geometry, and calculus. It is also advisable to learn a foreign language (preferably Spanish in the United States) and be involved in high school organizations and extracurricular activities.
After high school, college students simply need to fulfill the basic science course requirements that most medical schools recommend and will need to prepare to take the MCAT (Medical College Admission Test) in their junior or early senior year in college. Once attending medical school, student courses will focus on basic medical sciences like human anatomy, physiology, chemistry, etc., for the first three years, the second year of which is when medical students start to get hands-on experience with actual patients. Training of pediatricians The training of pediatricians varies considerably across the world. Depending on jurisdiction and university, a medical degree course may be either undergraduate-entry or graduate-entry. The former commonly takes five or six years and has been usual in the Commonwealth. Entrants to graduate-entry courses (as in the US), usually lasting four or five years, have previously completed a three- or four-year university degree, commonly but by no means always in sciences. Medical graduates hold a degree specific to the country and university in and from which they graduated. This degree qualifies that medical practitioner to become licensed or registered under the laws of that particular country, and sometimes of several countries, subject to requirements for "internship" or "conditional registration". Pediatricians must undertake further training in their chosen field. This may take from four to eleven or more years depending on jurisdiction and the degree of specialization. In the United States, a medical school graduate wishing to specialize in pediatrics must undergo a three-year residency composed of outpatient, inpatient, and critical care rotations. Subspecialties within pediatrics require further training in the form of 3-year fellowships. Subspecialties include critical care, gastroenterology, neurology, infectious disease, hematology/oncology, rheumatology, pulmonology, child abuse, emergency medicine, endocrinology, neonatology, and others. In most jurisdictions, entry-level degrees are common to all branches of the medical profession, but in some jurisdictions, specialization in pediatrics may begin before completion of this degree. In some jurisdictions, pediatric training is begun immediately following the completion of entry-level training. In other jurisdictions, junior medical doctors must undertake generalist (unstreamed) training for a number of years before commencing pediatric (or any other) specialization. Specialist training is often largely under the control of 'pediatric organizations (see below) rather than universities and depends on the jurisdiction. 
Subspecialties Subspecialties of pediatrics include: (not an exhaustive list) Addiction medicine (multidisciplinary) Adolescent medicine Child abuse pediatrics Clinical genetics Clinical informatics Developmental-behavioral pediatrics Headache medicine Hospital medicine Medical toxicology Metabolic medicine Neonatology/Perinatology Pain medicine (multidisciplinary) Palliative care (multidisciplinary) Pediatric allergy and immunology Pediatric cardiology Pediatric cardiac critical care Pediatric critical care Neurocritical care Pediatric cardiac critical care Pediatric emergency medicine Pediatric endocrinology Pediatric gastroenterology Transplant hepatology Pediatric hematology Pediatric infectious disease Pediatric nephrology Pediatric oncology Pediatric neuro-oncology Pediatric pulmonology Primary care Pediatric rheumatology Sleep medicine (multidisciplinary) Social pediatrics Sports medicine Other specialties that care for children (not an exhaustive list) Child neurology Addiction medicine (multidisciplinary) Brain injury medicine Clinical neurophysiology Epilepsy Headache medicine Neurocritical care Neuroimmunology Neuromuscular medicine Pain medicine (multidisciplinary) Palliative care (multidisciplinary) Pediatric neuro-oncology Sleep medicine (multidisciplinary) Child and adolescent psychiatry, subspecialty of psychiatry Neurodevelopmental disabilities Pediatric anesthesiology, subspecialty of anesthesiology Pediatric dentistry, subspecialty of dentistry Pediatric dermatology, subspecialty of dermatology Pediatric gynecology Pediatric neurosurgery, subspecialty of neurosurgery Pediatric ophthalmology, subspecialty of ophthalmology Pediatric orthopedic surgery, subspecialty of orthopedic surgery Pediatric otolaryngology, subspecialty of otolaryngology Pediatric plastic surgery, subspecialty of plastic surgery Pediatric radiology, subspecialty of radiology Pediatric rehabilitation medicine, subspecialty of physical medicine and rehabilitation Pediatric surgery, subspecialty of general surgery Pediatric urology, subspecialty of urology
Biology and health sciences
Fields of medicine
null
23597
https://en.wikipedia.org/wiki/Physiology
Physiology
Physiology (; ) is the scientific study of functions and mechanisms in a living system. As a subdiscipline of biology, physiology focuses on how organisms, organ systems, individual organs, cells, and biomolecules carry out chemical and physical functions in a living system. According to the classes of organisms, the field can be divided into medical physiology, animal physiology, plant physiology, cell physiology, and comparative physiology. Central to physiological functioning are biophysical and biochemical processes, homeostatic control mechanisms, and communication between cells. Physiological state is the condition of normal function. In contrast, pathological state refers to abnormal conditions, including human diseases. The Nobel Prize in Physiology or Medicine is awarded by the Royal Swedish Academy of Sciences for exceptional scientific achievements in physiology related to the field of medicine. Foundations Because physiology focuses on the functions and mechanisms of living organisms at all levels, from the molecular and cellular level to the level of whole organisms and populations, its foundations span a range of key disciplines: Anatomy is the study of the structure and organization of living organisms, from the microscopic level of cells and tissues to the macroscopic level of organs and systems. Anatomical knowledge is important in physiology because the structure and function of an organism are often dictated by one another. Biochemistry is the study of the chemical processes and substances that occur within living organisms. Knowledge of biochemistry provides the foundation for understanding cellular and molecular processes that are essential to the functioning of organisms. Biophysics is the study of the physical properties of living organisms and their interactions with their environment. It helps to explain how organisms sense and respond to different stimuli, such as light, sound, and temperature, and how they maintain homeostasis, or a stable internal environment. Genetics is the study of heredity and the variation of traits within and between populations. It provides insights into the genetic basis of physiological processes and the ways in which genes interact with the environment to influence an organism's phenotype. Evolutionary biology is the study of the processes that have led to the diversity of life on Earth. It helps to explain the origin and adaptive significance of physiological processes and the ways in which organisms have evolved to cope with their environment. Subdisciplines There are many ways to categorize the subdisciplines of physiology: based on the taxa studied: human physiology, animal physiology, plant physiology, microbial physiology, viral physiology based on the level of organization: cell physiology, molecular physiology, systems physiology, organismal physiology, ecological physiology, integrative physiology based on the process that causes physiological variation: developmental physiology, environmental physiology, evolutionary physiology based on the ultimate goals of the research: applied physiology (e.g., medical physiology), non-applied (e.g., comparative physiology) Subdisciplines by level of organisation Cell physiology Although there are differences between animal, plant, and microbial cells, the basic physiological functions of cells can be divided into the processes of cell division, cell signaling, cell growth, and cell metabolism. 
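The homeostatic control mechanisms named above as central to physiological functioning can be pictured as negative-feedback loops. The toy sketch below is schematic only (it is not a physiological model taken from the text, and every constant is invented for illustration): a controlled variable is disturbed and then pulled back towards its set point by a response proportional to the error.

    # Schematic negative-feedback loop (illustrative constants only).
    set_point = 37.0          # target value of the controlled variable
    value = 37.0
    gain = 0.3                # strength of the corrective response
    value += -0.8             # a sudden disturbance pushes the variable off target

    for step in range(10):
        error = set_point - value
        value += gain * error  # the response opposes the error, restoring the set point
        print(f"step {step}: value = {value:.2f}")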
Subdisciplines by taxa Plant physiology Plant physiology is a subdiscipline of botany concerned with the functioning of plants. Closely related fields include plant morphology, plant ecology, phytochemistry, cell biology, genetics, biophysics, and molecular biology. Fundamental processes of plant physiology include photosynthesis, respiration, plant nutrition, tropisms, nastic movements, photoperiodism, photomorphogenesis, circadian rhythms, seed germination, dormancy, and stomata function and transpiration. Absorption of water by roots, production of food in the leaves, and growth of shoots towards light are examples of plant physiology. Animal physiology Human physiology Human physiology is the study of how the human body's systems and functions work together to maintain a stable internal environment. It includes the study of the nervous, endocrine, cardiovascular, respiratory, digestive, and urinary systems, as well as cellular and exercise physiology. Understanding human physiology is essential for diagnosing and treating health conditions and promoting overall wellbeing. It seeks to understand the mechanisms that work to keep the human body alive and functioning, through scientific enquiry into the nature of mechanical, physical, and biochemical functions of humans, their organs, and the cells of which they are composed. The principal level of focus of physiology is at the level of organs and systems within systems. The endocrine and nervous systems play major roles in the reception and transmission of signals that integrate function in animals. Homeostasis is a major aspect with regard to such interactions within plants as well as animals. The biological basis of the study of physiology, integration refers to the overlap of many functions of the systems of the human body, as well as its accompanied form. It is achieved through communication that occurs in a variety of ways, both electrical and chemical. Changes in physiology can impact the mental functions of individuals. Examples of this would be the effects of certain medications or toxic levels of substances. Change in behavior as a result of these substances is often used to assess the health of individuals. Much of the foundation of knowledge in human physiology was provided by animal experimentation. Due to the frequent connection between form and function, physiology and anatomy are intrinsically linked and are studied in tandem as part of a medical curriculum. Subdisciplines by research objective Comparative physiology Involving evolutionary physiology and environmental physiology, comparative physiology considers the diversity of functional characteristics across organisms. History The classical era The study of human physiology as a medical field originates in classical Greece, at the time of Hippocrates (late 5th century BC). Outside of Western tradition, early forms of physiology or anatomy can be reconstructed as having been present at around the same time in China, India and elsewhere. Hippocrates incorporated the theory of humorism, which consisted of four basic substances: earth, water, air and fire. Each substance is known for having a corresponding humor: black bile, phlegm, blood, and yellow bile, respectively. Hippocrates also noted some emotional connections to the four humors, on which Galen would later expand. The critical thinking of Aristotle and his emphasis on the relationship between structure and function marked the beginning of physiology in Ancient Greece. 
Like Hippocrates, Aristotle took to the humoral theory of disease, which also consisted of four primary qualities in life: hot, cold, wet and dry. Galen (c. 129–200 AD) was the first to use experiments to probe the functions of the body. Unlike Hippocrates, Galen argued that humoral imbalances can be located in specific organs, or even in the body as a whole. His modification of this theory better equipped doctors to make more precise diagnoses. Galen also built on Hippocrates' idea that emotions were tied to the humors, and added the notion of temperaments: sanguine corresponds with blood; phlegmatic is tied to phlegm; choleric is connected to yellow bile; and melancholic corresponds with black bile. Galen also saw the human body as consisting of three connected systems: the brain and nerves, which are responsible for thoughts and sensations; the heart and arteries, which give life; and the liver and veins, which can be attributed to nutrition and growth. Galen is therefore regarded as the founder of experimental physiology, and for the next 1,400 years Galenic physiology was a powerful and influential tool in medicine. Early modern period Jean Fernel (1497–1558), a French physician, introduced the term "physiology". Galen, Ibn al-Nafis, Michael Servetus, Realdo Colombo, Amato Lusitano and William Harvey are credited with making important discoveries in the circulation of the blood. In the 1610s, Santorio Santorio was the first to use a device to measure the pulse rate (the pulsilogium) and a thermoscope to measure temperature. In 1791, Luigi Galvani described the role of electricity in the nerves of dissected frogs. In 1811, César Julien Jean Legallois studied respiration in animal dissection and lesions and found the center of respiration in the medulla oblongata. In the same year, Charles Bell finished work on what would later become known as the Bell–Magendie law, which compared functional differences between dorsal and ventral roots of the spinal cord. In 1824, François Magendie described the sensory roots and produced the first evidence of the cerebellum's role in equilibration, completing the Bell–Magendie law. In the 1820s, the French physiologist Henri Milne-Edwards introduced the notion of physiological division of labor, which made it possible to "compare and study living things as if they were machines created by the industry of man." Inspired by the work of Adam Smith, Milne-Edwards wrote that the "body of all living beings, whether animal or plant, resembles a factory ... where the organs, comparable to workers, work incessantly to produce the phenomena that constitute the life of the individual." In more differentiated organisms, this functional labor could be apportioned between different instruments or systems (which he called appareils). In 1858, Joseph Lister studied the cause of blood coagulation and the inflammation that followed injuries and surgical wounds. He later discovered and implemented antiseptics in the operating room, and as a result decreased the death rate from surgery by a substantial amount. The Physiological Society was founded in London in 1876 as a dining club. The American Physiological Society (APS) is a nonprofit organization that was founded in 1887. The Society is "devoted to fostering education, scientific research, and dissemination of information in the physiological sciences." In 1891, Ivan Pavlov performed research on "conditional responses" that involved dogs' saliva production in response to a bell and visual stimuli. 
In the 19th century, physiological knowledge began to accumulate at a rapid rate, in particular with the 1838 appearance of the Cell theory of Matthias Schleiden and Theodor Schwann. It radically stated that organisms are made up of units called cells. Claude Bernard's (1813–1878) further discoveries ultimately led to his concept of milieu interieur (internal environment), which would later be taken up and championed as "homeostasis" by American physiologist Walter B. Cannon in 1929. By homeostasis, Cannon meant "the maintenance of steady states in the body and the physiological processes through which they are regulated." In other words, the body's ability to regulate its internal environment. William Beaumont was the first American to utilize the practical application of physiology. Nineteenth-century physiologists such as Michael Foster, Max Verworn, and Alfred Binet, based on Haeckel's ideas, elaborated what came to be called "general physiology", a unified science of life based on the cell actions, later renamed in the 20th century as cell biology. Late modern period In the 20th century, biologists became interested in how organisms other than human beings function, eventually spawning the fields of comparative physiology and ecophysiology. Major figures in these fields include Knut Schmidt-Nielsen and George Bartholomew. Most recently, evolutionary physiology has become a distinct subdiscipline. In 1920, August Krogh won the Nobel Prize for discovering how, in capillaries, blood flow is regulated. In 1954, Andrew Huxley and Hugh Huxley, alongside their research team, discovered the sliding filaments in skeletal muscle, known today as the sliding filament theory. Recently, there have been intense debates about the vitality of physiology as a discipline (Is it dead or alive?). If physiology is perhaps less visible nowadays than during the golden age of the 19th century, it is in large part because the field has given birth to some of the most active domains of today's biological sciences, such as neuroscience, endocrinology, and immunology. Furthermore, physiology is still often seen as an integrative discipline, which can put together into a coherent framework data coming from various different domains. Notable physiologists Women in physiology Initially, women were largely excluded from official involvement in any physiological society. The American Physiological Society, for example, was founded in 1887 and included only men in its ranks. In 1902, the American Physiological Society elected Ida Hyde as the first female member of the society. Hyde, a representative of the American Association of University Women and a global advocate for gender equality in education, attempted to promote gender equality in every aspect of science and medicine. Soon thereafter, in 1913, J.S. Haldane proposed that women be allowed to formally join The Physiological Society, which had been founded in 1876. On 3 July 1915, six women were officially admitted: Florence Buchanan, Winifred Cullis, Ruth Skelton, Sarah C. M. Sowton, Constance Leetham Terry, and Enid M. Tribe. The centenary of the election of women was celebrated in 2015 with the publication of the book "Women Physiologists: Centenary Celebrations And Beyond For The Physiological Society." () Prominent women physiologists include: Bodil Schmidt-Nielsen, the first woman president of the American Physiological Society in 1975. 
Gerty Cori, along with her husband Carl Cori, received the Nobel Prize in Physiology or Medicine in 1947 for their discovery of the course of the catalytic conversion of glycogen: how glycogen is broken down to, and resynthesized from, the phosphate-containing sugar glucose 1-phosphate (the Cori ester), a key step in cellular energy metabolism. They also described the Cori cycle, also known as the lactic acid cycle, in which lactic acid produced by glycogen breakdown in muscle is carried to the liver and converted back into glucose. Barbara McClintock was awarded the 1983 Nobel Prize in Physiology or Medicine for the discovery of genetic transposition; she remains the only woman to have received an unshared Nobel Prize in Physiology or Medicine. Gertrude Elion, along with George Hitchings and Sir James Black, received the Nobel Prize for Physiology or Medicine in 1988 for their development of drugs employed in the treatment of several major diseases, such as leukemia, some autoimmune disorders, gout, malaria, and herpes virus infections. Linda B. Buck, along with Richard Axel, received the Nobel Prize in Physiology or Medicine in 2004 for their discovery of odorant receptors and the complex organization of the olfactory system. Françoise Barré-Sinoussi, along with Luc Montagnier, received the Nobel Prize in Physiology or Medicine in 2008 for their work on the identification of the Human Immunodeficiency Virus (HIV), the cause of Acquired Immunodeficiency Syndrome (AIDS). Elizabeth Blackburn, along with Carol W. Greider and Jack W. Szostak, was awarded the 2009 Nobel Prize for Physiology or Medicine for the discovery of how chromosomes are protected by telomeres and of the enzyme telomerase.
Biology and health sciences
Biology
null
23601
https://en.wikipedia.org/wiki/Pi
Pi
The number (; spelled out as "pi") is a mathematical constant, approximately equal to 3.14159, that is the ratio of a circle's circumference to its diameter. It appears in many formulae across mathematics and physics, and some of these formulae are commonly used for defining , to avoid relying on the definition of the length of a curve. The number is an irrational number, meaning that it cannot be expressed exactly as a ratio of two integers, although fractions such as are commonly used to approximate it. Consequently, its decimal representation never ends, nor enters a permanently repeating pattern. It is a transcendental number, meaning that it cannot be a solution of an algebraic equation involving only finite sums, products, powers, and integers. The transcendence of implies that it is impossible to solve the ancient challenge of squaring the circle with a compass and straightedge. The decimal digits of appear to be randomly distributed, but no proof of this conjecture has been found. For thousands of years, mathematicians have attempted to extend their understanding of , sometimes by computing its value to a high degree of accuracy. Ancient civilizations, including the Egyptians and Babylonians, required fairly accurate approximations of for practical computations. Around 250BC, the Greek mathematician Archimedes created an algorithm to approximate with arbitrary accuracy. In the 5th century AD, Chinese mathematicians approximated to seven digits, while Indian mathematicians made a five-digit approximation, both using geometrical techniques. The first computational formula for , based on infinite series, was discovered a millennium later. The earliest known use of the Greek letter π to represent the ratio of a circle's circumference to its diameter was by the Welsh mathematician William Jones in 1706. The invention of calculus soon led to the calculation of hundreds of digits of , enough for all practical scientific computations. Nevertheless, in the 20th and 21st centuries, mathematicians and computer scientists have pursued new approaches that, when combined with increasing computational power, extended the decimal representation of to many trillions of digits. These computations are motivated by the development of efficient algorithms to calculate numeric series, as well as the human quest to break records. The extensive computations involved have also been used to test supercomputers as well as stress testing consumer computer hardware. Because it relates to a circle, is found in many formulae in trigonometry and geometry, especially those concerning circles, ellipses and spheres. It is also found in formulae from other topics in science, such as cosmology, fractals, thermodynamics, mechanics, and electromagnetism. It also appears in areas having little to do with geometry, such as number theory and statistics, and in modern mathematical analysis can be defined without any reference to geometry. The ubiquity of makes it one of the most widely known mathematical constants inside and outside of science. Several books devoted to have been published, and record-setting calculations of the digits of often result in news headlines. Fundamentals Name The symbol used by mathematicians to represent the ratio of a circle's circumference to its diameter is the lowercase Greek letter , sometimes spelled out as pi. In English, is pronounced as "pie" ( ). 
In mathematical use, the lowercase letter is distinguished from its capitalized and enlarged counterpart , which denotes a product of a sequence, analogous to how denotes summation. The choice of the symbol is discussed in the section Adoption of the symbol . Definition is commonly defined as the ratio of a circle's circumference to its diameter : The ratio is constant, regardless of the circle's size. For example, if a circle has twice the diameter of another circle, it will also have twice the circumference, preserving the ratio . This definition of implicitly makes use of flat (Euclidean) geometry; although the notion of a circle can be extended to any curve (non-Euclidean) geometry, these new circles will no longer satisfy the formula . Here, the circumference of a circle is the arc length around the perimeter of the circle, a quantity which can be formally defined independently of geometry using limits—a concept in calculus. For example, one may directly compute the arc length of the top half of the unit circle, given in Cartesian coordinates by the equation , as the integral: An integral such as this was proposed as a definition of by Karl Weierstrass, who defined it directly as an integral in 1841. Integration is no longer commonly used in a first analytical definition because, as explains, differential calculus typically precedes integral calculus in the university curriculum, so it is desirable to have a definition of that does not rely on the latter. One such definition, due to Richard Baltzer and popularized by Edmund Landau, is the following: is twice the smallest positive number at which the cosine function equals 0. is also the smallest positive number at which the sine function equals zero, and the difference between consecutive zeroes of the sine function. The cosine and sine can be defined independently of geometry as a power series, or as the solution of a differential equation. In a similar spirit, can be defined using properties of the complex exponential, , of a complex variable . Like the cosine, the complex exponential can be defined in one of several ways. The set of complex numbers at which is equal to one is then an (imaginary) arithmetic progression of the form: and there is a unique positive real number with this property. A variation on the same idea, making use of sophisticated mathematical concepts of topology and algebra, is the following theorem: there is a unique (up to automorphism) continuous isomorphism from the group R/Z of real numbers under addition modulo integers (the circle group), onto the multiplicative group of complex numbers of absolute value one. The number is then defined as half the magnitude of the derivative of this homomorphism. Irrationality and normality is an irrational number, meaning that it cannot be written as the ratio of two integers. Fractions such as and are commonly used to approximate , but no common fraction (ratio of whole numbers) can be its exact value. Because is irrational, it has an infinite number of digits in its decimal representation, and does not settle into an infinitely repeating pattern of digits. There are several proofs that is irrational; they generally require calculus and rely on the reductio ad absurdum technique. The degree to which can be approximated by rational numbers (called the irrationality measure) is not precisely known; estimates have established that the irrationality measure is larger or at least equal to the measure of but smaller than the measure of Liouville numbers. 
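One of the analytic definitions quoted above, π as twice the smallest positive number at which the cosine equals zero, lends itself to a short numerical check. The following is a minimal sketch in Python (the bracketing interval and tolerance are illustrative choices, not part of the definition); math.cos is treated as a black box, so no value of π is assumed in advance.

```python
import math

def first_cosine_zero(tolerance=1e-15):
    """Locate the smallest positive x with cos(x) = 0 by bisection.

    cos(1) > 0 and cos(2) < 0, so the first zero lies in the interval (1, 2).
    """
    lo, hi = 1.0, 2.0
    while hi - lo > tolerance:
        mid = (lo + hi) / 2
        if math.cos(mid) > 0:
            lo = mid   # the zero is to the right of mid
        else:
            hi = mid   # the zero is at or to the left of mid
    return (lo + hi) / 2

print(2 * first_cosine_zero())   # ~3.141592653589793
print(math.pi)                   # library value, for comparison
```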
The digits of have no apparent pattern and have passed tests for statistical randomness, including tests for normality; a number of infinite length is called normal when all possible sequences of digits (of any given length) appear equally often. The conjecture that is normal has not been proven or disproven. Since the advent of computers, a large number of digits of have been available on which to perform statistical analysis. Yasumasa Kanada has performed detailed statistical analyses on the decimal digits of , and found them consistent with normality; for example, the frequencies of the ten digits 0 to 9 were subjected to statistical significance tests, and no evidence of a pattern was found. Any random sequence of digits contains arbitrarily long subsequences that appear non-random, by the infinite monkey theorem. Thus, because the sequence of 's digits passes statistical tests for randomness, it contains some sequences of digits that may appear non-random, such as a sequence of six consecutive 9s that begins at the 762nd decimal place of the decimal representation of . This is also called the "Feynman point" in mathematical folklore, after Richard Feynman, although no connection to Feynman is known. Transcendence In addition to being irrational, is also a transcendental number, which means that it is not the solution of any non-constant polynomial equation with rational coefficients, such as . This follows from the so-called Lindemann–Weierstrass theorem, which also establishes the transcendence of the constant . The transcendence of has two important consequences: First, cannot be expressed using any finite combination of rational numbers and square roots or n-th roots (such as or ). Second, since no transcendental number can be constructed with compass and straightedge, it is not possible to "square the circle". In other words, it is impossible to construct, using compass and straightedge alone, a square whose area is exactly equal to the area of a given circle. Squaring a circle was one of the important geometry problems of the classical antiquity. Amateur mathematicians in modern times have sometimes attempted to square the circle and claim success—despite the fact that it is mathematically impossible. An unsolved problem thus far is the question of whether or not the numbers and are algebraically independent ("relatively transcendental"). This would be resolved by Schanuel's conjecture – a currently unproven generalization of the Lindemann–Weierstrass theorem. Continued fractions As an irrational number, cannot be represented as a common fraction. But every number, including , can be represented by an infinite series of nested fractions, called a simple continued fraction: Truncating the continued fraction at any point yields a rational approximation for ; the first four of these are , , , and . These numbers are among the best-known and most widely used historical approximations of the constant. Each approximation generated in this way is a best rational approximation; that is, each is closer to than any other fraction with the same or a smaller denominator. Because is transcendental, it is by definition not algebraic and so cannot be a quadratic irrational. Therefore, cannot have a periodic continued fraction. 
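The convergents mentioned above can be generated mechanically from the simple continued fraction expansion. A sketch using Python's fractions module (the 30-digit decimal seed for π is typed in by hand and is an assumption of the illustration, not a computed value): it peels off integer parts to obtain the partial quotients and then rebuilds the convergent fractions.

```python
from fractions import Fraction

# About 30 correct decimal digits of pi, entered as an exact ratio of integers.
PI = Fraction(3141592653589793238462643383279, 10 ** 30)

def partial_quotients(x, terms):
    """First `terms` partial quotients of the simple continued fraction of x."""
    quotients = []
    for _ in range(terms):
        a = x.numerator // x.denominator   # integer part
        quotients.append(a)
        x -= a
        if x == 0:
            break
        x = 1 / x                          # exact reciprocal of a Fraction
    return quotients

def convergents(quotients):
    """Convergent fractions built from partial quotients [a0, a1, ...]."""
    h_prev, h = 1, quotients[0]            # numerators h(-1), h(0)
    k_prev, k = 0, 1                       # denominators k(-1), k(0)
    result = [Fraction(h, k)]
    for a in quotients[1:]:
        h, h_prev = a * h + h_prev, h
        k, k_prev = a * k + k_prev, k
        result.append(Fraction(h, k))
    return result

qs = partial_quotients(PI, 6)
print(qs)               # [3, 7, 15, 1, 292, 1]
print(convergents(qs))  # 3, 22/7, 333/106, 355/113, 103993/33102, 104348/33215
```

The unusually large partial quotient 292 is what makes 355/113 such a good approximation for the size of its denominator.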
Although the simple continued fraction for (with numerators all 1, shown above) also does not exhibit any other obvious pattern, several non-simple continued fractions do, such as: The middle of these is due to the mid-17th century mathematician William Brouncker, see § Brouncker's formula. Approximate value and digits Some approximations of pi include: Integers: 3 Fractions: Approximate fractions include (in order of increasing accuracy) , , , , , , and . (List is selected terms from and .) Digits: The first 50 decimal digits are (see ) Digits in other number systems The first 48 binary (base 2) digits (called bits) are (see ) The first 36 digits in ternary (base 3) are (see ) The first 20 digits in hexadecimal (base 16) are (see ) The first five sexagesimal (base 60) digits are 3;8,29,44,0,47 (see ) Complex numbers and Euler's identity Any complex number, say , can be expressed using a pair of real numbers. In the polar coordinate system, one number (radius or ) is used to represent 's distance from the origin of the complex plane, and the other (angle or ) the counter-clockwise rotation from the positive real line: where is the imaginary unit satisfying . The frequent appearance of in complex analysis can be related to the behaviour of the exponential function of a complex variable, described by Euler's formula: where the constant is the base of the natural logarithm. This formula establishes a correspondence between imaginary powers of and points on the unit circle centred at the origin of the complex plane. Setting in Euler's formula results in Euler's identity, celebrated in mathematics due to it containing five important mathematical constants: There are different complex numbers satisfying , and these are called the "-th roots of unity" and are given by the formula: History Antiquity The best-known approximations to dating before the Common Era were accurate to two decimal places; this was improved upon in Chinese mathematics in particular by the mid-first millennium, to an accuracy of seven decimal places. After this, no further progress was made until the late medieval period. The earliest written approximations of are found in Babylon and Egypt, both within one percent of the true value. In Babylon, a clay tablet dated 1900–1600 BC has a geometrical statement that, by implication, treats as  = 3.125. In Egypt, the Rhind Papyrus, dated around 1650 BC but copied from a document dated to 1850 BC, has a formula for the area of a circle that treats as . Although some pyramidologists have theorized that the Great Pyramid of Giza was built with proportions related to , this theory is not widely accepted by scholars. In the Shulba Sutras of Indian mathematics, dating to an oral tradition from the first or second millennium BC, approximations are given which have been variously interpreted as approximately 3.08831, 3.08833, 3.004, 3, or 3.125. Polygon approximation era The first recorded algorithm for rigorously calculating the value of was a geometrical approach using polygons, devised around 250 BC by the Greek mathematician Archimedes, implementing the method of exhaustion. This polygonal algorithm dominated for over 1,000 years, and as a result is sometimes referred to as Archimedes's constant. Archimedes computed upper and lower bounds of by drawing a regular hexagon inside and outside a circle, and successively doubling the number of sides until he reached a 96-sided regular polygon. By calculating the perimeters of these polygons, he proved that (that is, ). 
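Archimedes' doubling procedure can be imitated in a few lines. The sketch below (Python floating point rather than Archimedes' hand-computed rational bounds) starts from the hexagon, whose inscribed and circumscribed perimeters around a unit circle are exactly 6 and 4√3, and repeatedly applies the classical harmonic-mean/geometric-mean recurrences; halving the perimeters gives lower and upper bounds on π.

```python
import math

def archimedes_bounds(doublings):
    """Bounds on pi from inscribed and circumscribed regular polygons.

    Starts with a hexagon around a circle of radius 1 (perimeters 6 and
    4*sqrt(3)) and doubles the number of sides `doublings` times using
    P_2n = 2*p_n*P_n/(p_n + P_n) followed by p_2n = sqrt(p_n * P_2n).
    """
    inscribed = 6.0                        # perimeter of the inscribed hexagon
    circumscribed = 4.0 * math.sqrt(3.0)   # perimeter of the circumscribed hexagon
    sides = 6
    for _ in range(doublings):
        circumscribed = 2 * inscribed * circumscribed / (inscribed + circumscribed)
        inscribed = math.sqrt(inscribed * circumscribed)
        sides *= 2
    # The circumference 2*pi lies between the two perimeters.
    return sides, inscribed / 2, circumscribed / 2

for d in range(1, 5):                      # 12-, 24-, 48- and 96-sided polygons
    print(archimedes_bounds(d))
# With 96 sides the bounds are roughly 3.14103 < pi < 3.14271, consistent
# with Archimedes' rounded bounds 223/71 < pi < 22/7.
```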
Archimedes' upper bound of may have led to a widespread popular belief that is equal to . Around 150 AD, Greek-Roman scientist Ptolemy, in his Almagest, gave a value for of 3.1416, which he may have obtained from Archimedes or from Apollonius of Perga. Mathematicians using polygonal algorithms reached 39 digits of in 1630, a record only broken in 1699 when infinite series were used to reach 71 digits. In ancient China, values for included 3.1547 (around 1 AD), (100 AD, approximately 3.1623), and (3rd century, approximately 3.1556). Around 265 AD, the Wei Kingdom mathematician Liu Hui created a polygon-based iterative algorithm and used it with a 3,072-sided polygon to obtain a value of of 3.1416. Liu later invented a faster method of calculating and obtained a value of 3.14 with a 96-sided polygon, by taking advantage of the fact that the differences in area of successive polygons form a geometric series with a factor of 4. The Chinese mathematician Zu Chongzhi, around 480 AD, calculated that and suggested the approximations and , which he termed the Milü (''close ratio") and Yuelü ("approximate ratio"), respectively, using Liu Hui's algorithm applied to a 12,288-sided polygon. With a correct value for its seven first decimal digits, this value remained the most accurate approximation of available for the next 800 years. The Indian astronomer Aryabhata used a value of 3.1416 in his Āryabhaṭīya (499 AD). Fibonacci in computed 3.1418 using a polygonal method, independent of Archimedes. Italian author Dante apparently employed the value . The Persian astronomer Jamshīd al-Kāshī produced nine sexagesimal digits, roughly the equivalent of 16 decimal digits, in 1424, using a polygon with sides, which stood as the world record for about 180 years. French mathematician François Viète in 1579 achieved nine digits with a polygon of sides. Flemish mathematician Adriaan van Roomen arrived at 15 decimal places in 1593. In 1596, Dutch mathematician Ludolph van Ceulen reached 20 digits, a record he later increased to 35 digits (as a result, was called the "Ludolphian number" in Germany until the early 20th century). Dutch scientist Willebrord Snellius reached 34 digits in 1621, and Austrian astronomer Christoph Grienberger arrived at 38 digits in 1630 using 1040 sides. Christiaan Huygens was able to arrive at 10 decimal places in 1654 using a slightly different method equivalent to Richardson extrapolation. Infinite series The calculation of was revolutionized by the development of infinite series techniques in the 16th and 17th centuries. An infinite series is the sum of the terms of an infinite sequence. Infinite series allowed mathematicians to compute with much greater precision than Archimedes and others who used geometrical techniques. Although infinite series were exploited for most notably by European mathematicians such as James Gregory and Gottfried Wilhelm Leibniz, the approach also appeared in the Kerala school sometime in the 14th or 15th century. Around 1500 AD, a written description of an infinite series that could be used to compute was laid out in Sanskrit verse in Tantrasamgraha by Nilakantha Somayaji. The series are presented without proof, but proofs are presented in a later work, Yuktibhāṣā, from around 1530 AD. Several infinite series are described, including series for sine (which Nilakantha attributes to Madhava of Sangamagrama), cosine, and arctangent which are now sometimes referred to as Madhava series. 
The series for arctangent is sometimes called Gregory's series or the Gregory–Leibniz series. Madhava used infinite series to estimate to 11 digits around 1400. In 1593, François Viète published what is now known as Viète's formula, an infinite product (rather than an infinite sum, which is more typically used in calculations): In 1655, John Wallis published what is now known as Wallis product, also an infinite product: In the 1660s, the English scientist Isaac Newton and German mathematician Gottfried Wilhelm Leibniz discovered calculus, which led to the development of many infinite series for approximating . Newton himself used an arcsine series to compute a 15-digit approximation of in 1665 or 1666, writing, "I am ashamed to tell you to how many figures I carried these computations, having no other business at the time." In 1671, James Gregory, and independently, Leibniz in 1673, discovered the Taylor series expansion for arctangent: This series, sometimes called the Gregory–Leibniz series, equals when evaluated with . But for , it converges impractically slowly (that is, approaches the answer very gradually), taking about ten times as many terms to calculate each additional digit. In 1699, English mathematician Abraham Sharp used the Gregory–Leibniz series for to compute to 71 digits, breaking the previous record of 39 digits, which was set with a polygonal algorithm. In 1706, John Machin used the Gregory–Leibniz series to produce an algorithm that converged much faster: Machin reached 100 digits of with this formula. Other mathematicians created variants, now known as Machin-like formulae, that were used to set several successive records for calculating digits of . Isaac Newton accelerated the convergence of the Gregory–Leibniz series in 1684 (in an unpublished work; others independently discovered the result): Leonhard Euler popularized this series in his 1755 differential calculus textbook, and later used it with Machin-like formulae, including with which he computed 20 digits of in one hour. Machin-like formulae remained the best-known method for calculating well into the age of computers, and were used to set records for 250 years, culminating in a 620-digit approximation in 1946 by Daniel Ferguson – the best approximation achieved without the aid of a calculating device. In 1844, a record was set by Zacharias Dase, who employed a Machin-like formula to calculate 200 decimals of in his head at the behest of German mathematician Carl Friedrich Gauss. In 1853, British mathematician William Shanks calculated to 607 digits, but made a mistake in the 528th digit, rendering all subsequent digits incorrect. Though he calculated an additional 100 digits in 1873, bringing the total up to 707, his previous mistake rendered all the new digits incorrect as well. Rate of convergence Some infinite series for converge faster than others. Given the choice of two infinite series for , mathematicians will generally use the one that converges more rapidly because faster convergence reduces the amount of computation needed to calculate to any given accuracy. A simple infinite series for is the Gregory–Leibniz series: As individual terms of this infinite series are added to the sum, the total gradually gets closer to , and – with a sufficient number of terms – can get as close to as desired. It converges quite slowly, though – after 500,000 terms, it produces only five correct decimal digits of . 
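Machin's 1706 formula, π/4 = 4 arctan(1/5) − arctan(1/239), converges fast enough that his 100-digit computation can be reproduced on a modern machine in a fraction of a second. The sketch below uses a common fixed-point technique (scale everything by a power of ten and work in exact integers); the number of guard digits and the function names are choices of this illustration, not anything prescribed by the formula.

```python
def arctan_inverse(x, digits):
    """Approximately arctan(1/x) scaled by 10**(digits + 10), as an integer.

    Sums the Taylor series arctan(1/x) = 1/x - 1/(3*x**3) + 1/(5*x**5) - ...
    in exact integer arithmetic, with ten guard digits for rounding slack.
    """
    scale = 10 ** (digits + 10)
    total = term = scale // x
    n, sign = 3, -1
    while term:
        term //= x * x
        total += sign * (term // n)
        n += 2
        sign = -sign
    return total

def machin_pi(digits):
    """floor(pi * 10**digits) via pi = 4*(4*arctan(1/5) - arctan(1/239))."""
    scaled = 4 * (4 * arctan_inverse(5, digits) - arctan_inverse(239, digits))
    return scaled // 10 ** 10          # strip the guard digits

print(machin_pi(50))
# 314159265358979323846264338327950288419716939937510
```

Each term of the arctan(1/5) series contributes roughly 1.4 decimal digits and each term of the arctan(1/239) series nearly 5, so only a few dozen terms are needed for 50 digits.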
An infinite series for (published by Nilakantha in the 15th century) that converges more rapidly than the Gregory–Leibniz series is: The following table compares the convergence rates of these two series: After five terms, the sum of the Gregory–Leibniz series is within 0.2 of the correct value of , whereas the sum of Nilakantha's series is within 0.002 of the correct value. Nilakantha's series converges faster and is more useful for computing digits of . Series that converge even faster include Machin's series and Chudnovsky's series, the latter producing 14 correct decimal digits per term. Irrationality and transcendence Not all mathematical advances relating to were aimed at increasing the accuracy of approximations. When Euler solved the Basel problem in 1735, finding the exact value of the sum of the reciprocal squares, he established a connection between and the prime numbers that later contributed to the development and study of the Riemann zeta function: Swiss scientist Johann Heinrich Lambert in 1768 proved that is irrational, meaning it is not equal to the quotient of any two integers. Lambert's proof exploited a continued-fraction representation of the tangent function. French mathematician Adrien-Marie Legendre proved in 1794 that 2 is also irrational. In 1882, German mathematician Ferdinand von Lindemann proved that is transcendental, confirming a conjecture made by both Legendre and Euler. Hardy and Wright states that "the proofs were afterwards modified and simplified by Hilbert, Hurwitz, and other writers". Adoption of the symbol In the earliest usages, the Greek letter was used to denote the semiperimeter (semiperipheria in Latin) of a circle and was combined in ratios with (for diameter or semidiameter) or (for radius) to form circle constants. (Before then, mathematicians sometimes used letters such as or instead.) The first recorded use is Oughtred's , to express the ratio of periphery and diameter in the 1647 and later editions of . Barrow likewise used to represent the constant , while Gregory instead used to represent . The earliest known use of the Greek letter alone to represent the ratio of a circle's circumference to its diameter was by Welsh mathematician William Jones in his 1706 work ; or, a New Introduction to the Mathematics. The Greek letter appears on p. 243 in the phrase " Periphery ()", calculated for a circle with radius one. However, Jones writes that his equations for are from the "ready pen of the truly ingenious Mr. John Machin", leading to speculation that Machin may have employed the Greek letter before Jones. Jones' notation was not immediately adopted by other mathematicians, with the fraction notation still being used as late as 1767. Euler started using the single-letter form beginning with his 1727 Essay Explaining the Properties of Air, though he used , the ratio of periphery to radius, in this and some later writing. Euler first used in his 1736 work Mechanica, and continued in his widely read 1748 work (he wrote: "for the sake of brevity we will write this number as ; thus is equal to half the circumference of a circle of radius "). Because Euler corresponded heavily with other mathematicians in Europe, the use of the Greek letter spread rapidly, and the practice was universally adopted thereafter in the Western world, though the definition still varied between and as late as 1761. 
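Before turning to the modern record computations, the convergence comparison made earlier in this section (Gregory–Leibniz versus Nilakantha) is easy to reproduce. A small sketch in plain Python floats, which limits the comparison to about 15 significant digits; the chosen term counts are arbitrary:

```python
import math

def gregory_leibniz(terms):
    """Partial sum of pi = 4*(1 - 1/3 + 1/5 - 1/7 + ...)."""
    return 4 * sum((-1) ** k / (2 * k + 1) for k in range(terms))

def nilakantha(terms):
    """Partial sum of pi = 3 + 4/(2*3*4) - 4/(4*5*6) + 4/(6*7*8) - ..."""
    total = 3.0
    for k in range(1, terms + 1):
        n = 2 * k
        total += (-1) ** (k + 1) * 4.0 / (n * (n + 1) * (n + 2))
    return total

for terms in (5, 50, 500):
    gl_error = abs(gregory_leibniz(terms) - math.pi)
    ni_error = abs(nilakantha(terms) - math.pi)
    print(f"{terms:3d} terms: Gregory-Leibniz off by {gl_error:.1e}, "
          f"Nilakantha off by {ni_error:.1e}")
# After 5 terms the Gregory-Leibniz sum is still about 0.2 away from pi,
# while Nilakantha's sum is already within about 0.002, as noted above.
```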
Modern quest for more digits Computer era and iterative algorithms The development of computers in the mid-20th century again revolutionized the hunt for digits of . Mathematicians John Wrench and Levi Smith reached 1,120 digits in 1949 using a desk calculator. Using an inverse tangent (arctan) infinite series, a team led by George Reitwiesner and John von Neumann that same year achieved 2,037 digits with a calculation that took 70 hours of computer time on the ENIAC computer. The record, always relying on an arctan series, was broken repeatedly (3089 digits in 1955, 7,480 digits in 1957; 10,000 digits in 1958; 100,000 digits in 1961) until 1 million digits was reached in 1973. Two additional developments around 1980 once again accelerated the ability to compute . First, the discovery of new iterative algorithms for computing , which were much faster than the infinite series; and second, the invention of fast multiplication algorithms that could multiply large numbers very rapidly. Such algorithms are particularly important in modern computations because most of the computer's time is devoted to multiplication. They include the Karatsuba algorithm, Toom–Cook multiplication, and Fourier transform-based methods. The iterative algorithms were independently published in 1975–1976 by physicist Eugene Salamin and scientist Richard Brent. These avoid reliance on infinite series. An iterative algorithm repeats a specific calculation, each iteration using the outputs from prior steps as its inputs, and produces a result in each step that converges to the desired value. The approach was actually invented over 160 years earlier by Carl Friedrich Gauss, in what is now termed the arithmetic–geometric mean method (AGM method) or Gauss–Legendre algorithm. As modified by Salamin and Brent, it is also referred to as the Brent–Salamin algorithm. The iterative algorithms were widely used after 1980 because they are faster than infinite series algorithms: whereas infinite series typically increase the number of correct digits additively in successive terms, iterative algorithms generally multiply the number of correct digits at each step. For example, the Brent–Salamin algorithm doubles the number of digits in each iteration. In 1984, brothers John and Peter Borwein produced an iterative algorithm that quadruples the number of digits in each step; and in 1987, one that increases the number of digits five times in each step. Iterative methods were used by Japanese mathematician Yasumasa Kanada to set several records for computing between 1995 and 2002. This rapid convergence comes at a price: the iterative algorithms require significantly more memory than infinite series. Motives for computing For most numerical calculations involving , a handful of digits provide sufficient precision. According to Jörg Arndt and Christoph Haenel, thirty-nine digits are sufficient to perform most cosmological calculations, because that is the accuracy necessary to calculate the circumference of the observable universe with a precision of one atom. Accounting for additional digits needed to compensate for computational round-off errors, Arndt concludes that a few hundred digits would suffice for any scientific application. Despite this, people have worked strenuously to compute to thousands and millions of digits. This effort may be partly ascribed to the human compulsion to break records, and such achievements with often make headlines around the world. 
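The claim above that the Brent–Salamin (Gauss–Legendre) iteration doubles the number of correct digits at every step can be observed directly with extended-precision arithmetic. A sketch using Python's standard decimal module (the 60-digit target and fixed iteration count are illustrative choices):

```python
from decimal import Decimal, getcontext

def gauss_legendre_pi(digits, iterations=8):
    """Approximate pi with the Gauss-Legendre (Brent-Salamin) iteration.

    Each pass through the loop roughly doubles the number of correct digits,
    so a handful of iterations suffices for `digits` decimal places.
    """
    getcontext().prec = digits + 10            # working precision plus guard digits
    a = Decimal(1)
    b = Decimal(1) / Decimal(2).sqrt()
    t = Decimal(1) / 4
    p = Decimal(1)
    for _ in range(iterations):
        a_next = (a + b) / 2
        b = (a * b).sqrt()
        t -= p * (a - a_next) ** 2
        p *= 2
        a = a_next
    return (a + b) ** 2 / (4 * t)

print(gauss_legendre_pi(60))
# 3.14159265358979323846264338327950288419716939937510582097494...
```

Printing the intermediate approximations instead of only the final value makes the doubling of correct digits per iteration visible.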
They also have practical benefits, such as testing supercomputers, testing numerical analysis algorithms (including high-precision multiplication algorithms); and within pure mathematics itself, providing data for evaluating the randomness of the digits of . Rapidly convergent series Modern calculators do not use iterative algorithms exclusively. New infinite series were discovered in the 1980s and 1990s that are as fast as iterative algorithms, yet are simpler and less memory intensive. The fast iterative algorithms were anticipated in 1914, when Indian mathematician Srinivasa Ramanujan published dozens of innovative new formulae for , remarkable for their elegance, mathematical depth and rapid convergence. One of his formulae, based on modular equations, is This series converges much more rapidly than most arctan series, including Machin's formula. Bill Gosper was the first to use it for advances in the calculation of , setting a record of 17 million digits in 1985. Ramanujan's formulae anticipated the modern algorithms developed by the Borwein brothers (Jonathan and Peter) and the Chudnovsky brothers. The Chudnovsky formula developed in 1987 is It produces about 14 digits of per term and has been used for several record-setting calculations, including the first to surpass 1 billion (109) digits in 1989 by the Chudnovsky brothers, 10 trillion (1013) digits in 2011 by Alexander Yee and Shigeru Kondo, and 100 trillion digits by Emma Haruka Iwao in 2022. For similar formulae, see also the Ramanujan–Sato series. In 2006, mathematician Simon Plouffe used the PSLQ integer relation algorithm to generate several new formulae for , conforming to the following template: where is (Gelfond's constant), is an odd number, and are certain rational numbers that Plouffe computed. Monte Carlo methods Monte Carlo methods, which evaluate the results of multiple random trials, can be used to create approximations of . Buffon's needle is one such technique: If a needle of length is dropped times on a surface on which parallel lines are drawn units apart, and if of those times it comes to rest crossing a line ( > 0), then one may approximate based on the counts: Another Monte Carlo method for computing is to draw a circle inscribed in a square, and randomly place dots in the square. The ratio of dots inside the circle to the total number of dots will approximately equal . Another way to calculate using probability is to start with a random walk, generated by a sequence of (fair) coin tosses: independent random variables such that with equal probabilities. The associated random walk is so that, for each , is drawn from a shifted and scaled binomial distribution. As varies, defines a (discrete) stochastic process. Then can be calculated by This Monte Carlo method is independent of any relation to circles, and is a consequence of the central limit theorem, discussed below. These Monte Carlo methods for approximating are very slow compared to other methods, and do not provide any information on the exact number of digits that are obtained. Thus they are never used to approximate when speed or accuracy is desired. Spigot algorithms Two algorithms were discovered in 1995 that opened up new avenues of research into . They are called spigot algorithms because, like water dripping from a spigot, they produce single digits of that are not reused after they are calculated. This is in contrast to infinite series or iterative algorithms, which retain and use all intermediate digits until the final result is produced. 
Mathematicians Stan Wagon and Stanley Rabinowitz produced a simple spigot algorithm in 1995. Its speed is comparable to arctan algorithms, but not as fast as iterative algorithms. Another spigot algorithm, the BBP digit extraction algorithm, was discovered in 1995 by Simon Plouffe: This formula, unlike others before it, can produce any individual hexadecimal digit of without calculating all the preceding digits. Individual binary digits may be extracted from individual hexadecimal digits, and octal digits can be extracted from one or two hexadecimal digits. An important application of digit extraction algorithms is to validate new claims of record computations: After a new record is claimed, the decimal result is converted to hexadecimal, and then a digit extraction algorithm is used to calculate several randomly selected hexadecimal digits near the end; if they match, this provides a measure of confidence that the entire computation is correct. Between 1998 and 2000, the distributed computing project PiHex used Bellard's formula (a modification of the BBP algorithm) to compute the quadrillionth (1015th) bit of , which turned out to be 0. In September 2010, a Yahoo! employee used the company's Hadoop application on one thousand computers over a 23-day period to compute 256 bits of at the two-quadrillionth (2×1015th) bit, which also happens to be zero. In 2022, Plouffe found a base-10 algorithm for calculating digits of . Role and characterizations in mathematics Because is closely related to the circle, it is found in many formulae from the fields of geometry and trigonometry, particularly those concerning circles, spheres, or ellipses. Other branches of science, such as statistics, physics, Fourier analysis, and number theory, also include in some of their important formulae. Geometry and trigonometry appears in formulae for areas and volumes of geometrical shapes based on circles, such as ellipses, spheres, cones, and tori. Below are some of the more common formulae that involve . The circumference of a circle with radius is . The area of a circle with radius is . The area of an ellipse with semi-major axis and semi-minor axis is . The volume of a sphere with radius is . The surface area of a sphere with radius is . Some of the formulae above are special cases of the volume of the n-dimensional ball and the surface area of its boundary, the (n−1)-dimensional sphere, given below. Apart from circles, there are other curves of constant width. By Barbier's theorem, every curve of constant width has perimeter times its width. The Reuleaux triangle (formed by the intersection of three circles with the sides of an equilateral triangle as their radii) has the smallest possible area for its width and the circle the largest. There also exist non-circular smooth and even algebraic curves of constant width. Definite integrals that describe circumference, area, or volume of shapes generated by circles typically have values that involve . For example, an integral that specifies half the area of a circle of radius one is given by: In that integral, the function represents the height over the -axis of a semicircle (the square root is a consequence of the Pythagorean theorem), and the integral computes the area below the semicircle. The existence of such integrals makes an algebraic period. Units of angle The trigonometric functions rely on angles, and mathematicians generally use radians as units of measurement. 
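The defining feature of the BBP formula discussed above, producing a hexadecimal digit of π at a chosen position without computing the earlier digits, rests on modular exponentiation. The following is a textbook-style sketch rather than Plouffe's original program; the position convention, the number of extracted digits, and the float-based tail handling are choices of this illustration, and ordinary double precision limits how many of the extracted digits can be trusted at very large positions.

```python
def bbp_sum(j, n):
    """Fractional part of the sum over k of 16**(n-k) / (8*k + j), a BBP building block."""
    total = 0.0
    for k in range(n + 1):                       # terms with k <= n
        total += pow(16, n - k, 8 * k + j) / (8 * k + j)   # modular exponentiation
        total -= int(total)                      # keep only the fractional part
    k = n + 1
    while True:                                  # a few rapidly shrinking terms with k > n
        term = 16.0 ** (n - k) / (8 * k + j)
        if term < 1e-17:
            break
        total += term
        k += 1
    return total - int(total)

def pi_hex_digits(position, count=8):
    """Hex digits of pi starting `position` places after the point (position 0 gives '2')."""
    x = (4 * bbp_sum(1, position) - 2 * bbp_sum(4, position)
         - bbp_sum(5, position) - bbp_sum(6, position)) % 1.0
    digits = ""
    for _ in range(count):
        x *= 16
        digits += "0123456789abcdef"[int(x)]
        x -= int(x)
    return digits

print(pi_hex_digits(0))      # 243f6a88, the first fractional hex digits of pi
print(pi_hex_digits(1000))   # digits a thousand places in, with no earlier digits computed
```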
plays an important role in angles measured in radians, which are defined so that a complete circle spans an angle of 2 radians. The angle measure of 180° is equal to radians, and . Common trigonometric functions have periods that are multiples of ; for example, sine and cosine have period 2, so for any angle and any integer , Eigenvalues Many of the appearances of in the formulae of mathematics and the sciences have to do with its close relationship with geometry. However, also appears in many natural situations having apparently nothing to do with geometry. In many applications, it plays a distinguished role as an eigenvalue. For example, an idealized vibrating string can be modelled as the graph of a function on the unit interval , with fixed ends . The modes of vibration of the string are solutions of the differential equation , or . Thus is an eigenvalue of the second derivative operator , and is constrained by Sturm–Liouville theory to take on only certain specific values. It must be positive, since the operator is negative definite, so it is convenient to write , where is called the wavenumber. Then satisfies the boundary conditions and the differential equation with . The value is, in fact, the least such value of the wavenumber, and is associated with the fundamental mode of vibration of the string. One way to show this is by estimating the energy, which satisfies Wirtinger's inequality: for a function with and , both square integrable, we have: with equality precisely when is a multiple of . Here appears as an optimal constant in Wirtinger's inequality, and it follows that it is the smallest wavenumber, using the variational characterization of the eigenvalue. As a consequence, is the smallest singular value of the derivative operator on the space of functions on vanishing at both endpoints (the Sobolev space ). Inequalities The number serves appears in similar eigenvalue problems in higher-dimensional analysis. As mentioned above, it can be characterized via its role as the best constant in the isoperimetric inequality: the area enclosed by a plane Jordan curve of perimeter satisfies the inequality and equality is clearly achieved for the circle, since in that case and . Ultimately, as a consequence of the isoperimetric inequality, appears in the optimal constant for the critical Sobolev inequality in n dimensions, which thus characterizes the role of in many physical phenomena as well, for example those of classical potential theory. In two dimensions, the critical Sobolev inequality is for f a smooth function with compact support in , is the gradient of f, and and refer respectively to the and -norm. The Sobolev inequality is equivalent to the isoperimetric inequality (in any dimension), with the same best constants. Wirtinger's inequality also generalizes to higher-dimensional Poincaré inequalities that provide best constants for the Dirichlet energy of an n-dimensional membrane. Specifically, is the greatest constant such that for all convex subsets of of diameter 1, and square-integrable functions u on of mean zero. Just as Wirtinger's inequality is the variational form of the Dirichlet eigenvalue problem in one dimension, the Poincaré inequality is the variational form of the Neumann eigenvalue problem, in any dimension. Fourier transform and Heisenberg uncertainty principle The constant also appears as a critical spectral parameter in the Fourier transform. 
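The vibrating-string discussion above says that π² should emerge as the smallest eigenvalue of the negative second-derivative operator on the unit interval with fixed ends. That statement can be probed numerically by replacing the operator with the standard second-difference matrix. A sketch assuming NumPy is available (the grid size is an arbitrary choice):

```python
import numpy as np

def smallest_dirichlet_eigenvalue(n=1000):
    """Smallest eigenvalue of -d^2/dx^2 on (0, 1) with u(0) = u(1) = 0,
    approximated by the tridiagonal second-difference matrix on n interior points."""
    h = 1.0 / (n + 1)
    main = 2.0 * np.ones(n)
    off = -1.0 * np.ones(n - 1)
    laplacian = (np.diag(main) + np.diag(off, 1) + np.diag(off, -1)) / h ** 2
    return np.linalg.eigvalsh(laplacian)[0]   # eigenvalues are returned in ascending order

smallest = smallest_dirichlet_eigenvalue()
print(np.sqrt(smallest))   # ~3.141591..., the fundamental wavenumber, approaching pi
print(np.pi ** 2)          # the exact limiting eigenvalue
```

Refining the grid (larger n) moves the computed wavenumber closer to π, in line with the continuum eigenvalue problem.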
This is the integral transform, that takes a complex-valued integrable function on the real line to the function defined as: Although there are several different conventions for the Fourier transform and its inverse, any such convention must involve somewhere. The above is the most canonical definition, however, giving the unique unitary operator on that is also an algebra homomorphism of to . The Heisenberg uncertainty principle also contains the number . The uncertainty principle gives a sharp lower bound on the extent to which it is possible to localize a function both in space and in frequency: with our conventions for the Fourier transform, The physical consequence, about the uncertainty in simultaneous position and momentum observations of a quantum mechanical system, is discussed below. The appearance of in the formulae of Fourier analysis is ultimately a consequence of the Stone–von Neumann theorem, asserting the uniqueness of the Schrödinger representation of the Heisenberg group. Gaussian integrals The fields of probability and statistics frequently use the normal distribution as a simple model for complex phenomena; for example, scientists generally assume that the observational error in most experiments follows a normal distribution. The Gaussian function, which is the probability density function of the normal distribution with mean and standard deviation , naturally contains : The factor of makes the area under the graph of equal to one, as is required for a probability distribution. This follows from a change of variables in the Gaussian integral: which says that the area under the basic bell curve in the figure is equal to the square root of . The central limit theorem explains the central role of normal distributions, and thus of , in probability and statistics. This theorem is ultimately connected with the spectral characterization of as the eigenvalue associated with the Heisenberg uncertainty principle, and the fact that equality holds in the uncertainty principle only for the Gaussian function. Equivalently, is the unique constant making the Gaussian normal distribution equal to its own Fourier transform. Indeed, according to , the "whole business" of establishing the fundamental theorems of Fourier analysis reduces to the Gaussian integral. Topology The constant appears in the Gauss–Bonnet formula which relates the differential geometry of surfaces to their topology. Specifically, if a compact surface has Gauss curvature K, then where is the Euler characteristic, which is an integer. An example is the surface area of a sphere S of curvature 1 (so that its radius of curvature, which coincides with its radius, is also 1.) The Euler characteristic of a sphere can be computed from its homology groups and is found to be equal to two. Thus we have reproducing the formula for the surface area of a sphere of radius 1. The constant appears in many other integral formulae in topology, in particular, those involving characteristic classes via the Chern–Weil homomorphism. Cauchy's integral formula One of the key tools in complex analysis is contour integration of a function over a positively oriented (rectifiable) Jordan curve . 
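The Gaussian integral behind the normalizing factor above can be verified with nothing more than the standard library. A sketch (the truncation point and step count are arbitrary; the integrand is negligible beyond |x| = 10, so truncation contributes essentially nothing at this precision):

```python
import math

def gaussian_integral(cutoff=10.0, steps=200_000):
    """Trapezoidal approximation of the integral of exp(-x**2) over the real line."""
    h = 2 * cutoff / steps
    total = math.exp(-cutoff ** 2)            # the two endpoints, each weighted 1/2
    for i in range(1, steps):
        x = -cutoff + i * h
        total += math.exp(-x * x)
    return total * h

approx = gaussian_integral()
print(approx)               # ~1.7724538509...
print(math.sqrt(math.pi))   # the exact value is sqrt(pi) = 1.77245385090551...
print(approx ** 2)          # squaring recovers pi itself
```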
A form of Cauchy's integral formula states that if a point is interior to , then Although the curve is not a circle, and hence does not have any obvious connection to the constant , a standard proof of this result uses Morera's theorem, which implies that the integral is invariant under homotopy of the curve, so that it can be deformed to a circle and then integrated explicitly in polar coordinates. More generally, it is true that if a rectifiable closed curve does not contain , then the above integral is times the winding number of the curve. The general form of Cauchy's integral formula establishes the relationship between the values of a complex analytic function on the Jordan curve and the value of at any interior point of : provided is analytic in the region enclosed by and extends continuously to . Cauchy's integral formula is a special case of the residue theorem, that if is a meromorphic function the region enclosed by and is continuous in a neighbourhood of , then where the sum is of the residues at the poles of . Vector calculus and physics The constant is ubiquitous in vector calculus and potential theory, for example in Coulomb's law, Gauss's law, Maxwell's equations, and even the Einstein field equations. Perhaps the simplest example of this is the two-dimensional Newtonian potential, representing the potential of a point source at the origin, whose associated field has unit outward flux through any smooth and oriented closed surface enclosing the source: The factor of is necessary to ensure that is the fundamental solution of the Poisson equation in : where is the Dirac delta function. In higher dimensions, factors of are present because of a normalization by the n-dimensional volume of the unit n sphere. For example, in three dimensions, the Newtonian potential is: which has the 2-dimensional volume (i.e., the area) of the unit 2-sphere in the denominator. Total curvature The gamma function and Stirling's approximation The factorial function is the product of all of the positive integers through . The gamma function extends the concept of factorial (normally defined only for non-negative integers) to all complex numbers, except the negative real integers, with the identity . When the gamma function is evaluated at half-integers, the result contains . For example, and . The gamma function is defined by its Weierstrass product development: where is the Euler–Mascheroni constant. Evaluated at and squared, the equation reduces to the Wallis product formula. The gamma function is also connected to the Riemann zeta function and identities for the functional determinant, in which the constant plays an important role. The gamma function is used to calculate the volume of the n-dimensional ball of radius r in Euclidean n-dimensional space, and the surface area of its boundary, the (n−1)-dimensional sphere: Further, it follows from the functional equation that The gamma function can be used to create a simple approximation to the factorial function for large : which is known as Stirling's approximation. Equivalently, As a geometrical application of Stirling's approximation, let denote the standard simplex in n-dimensional Euclidean space, and denote the simplex having all of its sides scaled up by a factor of . Then Ehrhart's volume conjecture is that this is the (optimal) upper bound on the volume of a convex body containing only one lattice point. Number theory and Riemann zeta function The Riemann zeta function is used in many areas of mathematics. 
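The factor 2πi in Cauchy's integral formula can be seen numerically by discretizing a circular contour. In the sketch below (NumPy; the test function, interior point, and sample count are arbitrary choices, and the parametrization itself uses the library value of π, since the point is to verify the formula rather than to compute π), the contour integral of f(z)/(z − z0) around the unit circle is compared with 2πi·f(z0):

```python
import numpy as np

def cauchy_contour_integral(f, z0, radius=1.0, samples=5000):
    """Approximate the integral of f(z)/(z - z0) around a circle of the given
    radius centred at the origin, traversed once counter-clockwise."""
    theta = np.linspace(0.0, 2.0 * np.pi, samples, endpoint=False)
    z = radius * np.exp(1j * theta)                                  # points on the contour
    dz = 1j * radius * np.exp(1j * theta) * (2.0 * np.pi / samples)  # z'(t) dt
    return np.sum(f(z) / (z - z0) * dz)

def f(z):
    return np.exp(z) * np.cos(z)    # analytic everywhere, in particular inside the contour

z0 = 0.3 + 0.2j                     # an interior point of the unit circle
print(cauchy_contour_integral(f, z0))   # numerically equal to 2*pi*i * f(z0)
print(2j * np.pi * f(z0))
```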
When evaluated at it can be written as Finding a simple solution for this infinite series was a famous problem in mathematics called the Basel problem. Leonhard Euler solved it in 1735 when he showed it was equal to . Euler's result leads to the number theory result that the probability of two random numbers being relatively prime (that is, having no shared factors) is equal to . This probability is based on the observation that the probability that any number is divisible by a prime is (for example, every 7th integer is divisible by 7.) Hence the probability that two numbers are both divisible by this prime is , and the probability that at least one of them is not is . For distinct primes, these divisibility events are mutually independent; so the probability that two numbers are relatively prime is given by a product over all primes: This probability can be used in conjunction with a random number generator to approximate using a Monte Carlo approach. The solution to the Basel problem implies that the geometrically derived quantity is connected in a deep way to the distribution of prime numbers. This is a special case of Weil's conjecture on Tamagawa numbers, which asserts the equality of similar such infinite products of arithmetic quantities, localized at each prime p, and a geometrical quantity: the reciprocal of the volume of a certain locally symmetric space. In the case of the Basel problem, it is the hyperbolic 3-manifold . The zeta function also satisfies Riemann's functional equation, which involves as well as the gamma function: Furthermore, the derivative of the zeta function satisfies A consequence is that can be obtained from the functional determinant of the harmonic oscillator. This functional determinant can be computed via a product expansion, and is equivalent to the Wallis product formula. The calculation can be recast in quantum mechanics, specifically the variational approach to the spectrum of the hydrogen atom. Fourier series The constant also appears naturally in Fourier series of periodic functions. Periodic functions are functions on the group of fractional parts of real numbers. The Fourier decomposition shows that a complex-valued function on can be written as an infinite linear superposition of unitary characters of . That is, continuous group homomorphisms from to the circle group of unit modulus complex numbers. It is a theorem that every character of is one of the complex exponentials . There is a unique character on , up to complex conjugation, that is a group isomorphism. Using the Haar measure on the circle group, the constant is half the magnitude of the Radon–Nikodym derivative of this character. The other characters have derivatives whose magnitudes are positive integral multiples of 2. As a result, the constant is the unique number such that the group T, equipped with its Haar measure, is Pontrjagin dual to the lattice of integral multiples of 2. This is a version of the one-dimensional Poisson summation formula. Modular forms and theta functions The constant is connected in a deep way with the theory of modular forms and theta functions. For example, the Chudnovsky algorithm involves in an essential way the j-invariant of an elliptic curve. Modular forms are holomorphic functions in the upper half plane characterized by their transformation properties under the modular group (or its various subgroups), a lattice in the group . An example is the Jacobi theta function which is a kind of modular form called a Jacobi form. 
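The Monte Carlo approach mentioned above can be made concrete: draw many random pairs of integers, measure the fraction that are coprime, and invert the relation that this fraction tends to 6/π². In the Python sketch below, the number of trials, the sampling range and the random seed are arbitrary choices.

```python
import math
import random

def estimate_pi_from_coprimality(trials=200_000, limit=10**9, seed=1):
    """Estimate pi from the fact that two random integers are coprime
    with probability 6/pi**2 (a consequence of the Basel problem)."""
    rng = random.Random(seed)
    coprime = sum(
        1
        for _ in range(trials)
        if math.gcd(rng.randint(1, limit), rng.randint(1, limit)) == 1
    )
    p = coprime / trials          # empirical probability of coprimality
    return math.sqrt(6.0 / p)     # invert p ~ 6/pi**2

print(estimate_pi_from_coprimality())  # roughly 3.14; improves with more trials
```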
This is sometimes written in terms of the nome . The constant is the unique constant making the Jacobi theta function an automorphic form, which means that it transforms in a specific way. Certain identities hold for all automorphic forms. An example is which implies that transforms as a representation under the discrete Heisenberg group. General modular forms and other theta functions also involve , once again because of the Stone–von Neumann theorem. Cauchy distribution and potential theory The Cauchy distribution is a probability density function. The total probability is equal to one, owing to the integral: The Shannon entropy of the Cauchy distribution is equal to , which also involves . The Cauchy distribution plays an important role in potential theory because it is the simplest Furstenberg measure, the classical Poisson kernel associated with a Brownian motion in a half-plane. Conjugate harmonic functions and so also the Hilbert transform are associated with the asymptotics of the Poisson kernel. The Hilbert transform H is the integral transform given by the Cauchy principal value of the singular integral The constant is the unique (positive) normalizing factor such that H defines a linear complex structure on the Hilbert space of square-integrable real-valued functions on the real line. The Hilbert transform, like the Fourier transform, can be characterized purely in terms of its transformation properties on the Hilbert space : up to a normalization factor, it is the unique bounded linear operator that commutes with positive dilations and anti-commutes with all reflections of the real line. The constant is the unique normalizing factor that makes this transformation unitary. In the Mandelbrot set An occurrence of in the fractal called the Mandelbrot set was discovered by David Boll in 1991. He examined the behaviour of the Mandelbrot set near the "neck" at . When the number of iterations until divergence for the point is multiplied by , the result approaches as approaches zero. The point at the cusp of the large "valley" on the right side of the Mandelbrot set behaves similarly: the number of iterations until divergence multiplied by the square root of tends to . Projective geometry Let be the set of all twice differentiable real functions that satisfy the ordinary differential equation . Then is a two-dimensional real vector space, with two parameters corresponding to a pair of initial conditions for the differential equation. For any , let be the evaluation functional, which associates to each the value of the function at the real point . Then, for each t, the kernel of is a one-dimensional linear subspace of . Hence defines a function from the real line to the real projective line. This function is periodic, and the quantity can be characterized as the period of this map. This is notable in that the constant , rather than 2, appears naturally in this context. Outside mathematics Describing physical phenomena Although not a physical constant, appears routinely in equations describing fundamental principles of the universe, often because of 's relationship to the circle and to spherical coordinate systems.
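Boll's observation at the neck of the Mandelbrot set, described above, can be reproduced numerically: for a point displaced a small amount ε above −0.75, the number of iterations of z → z² + c needed to escape, multiplied by ε, approaches π as ε shrinks. The sketch below uses a few arbitrarily chosen values of ε.

```python
def escape_iterations(c, max_iter=100_000):
    """Iterate z -> z*z + c from z = 0 and count steps until |z| exceeds 2."""
    z = 0j
    for n in range(1, max_iter + 1):
        z = z * z + c
        if abs(z) > 2.0:
            return n
    return max_iter

# Near the "neck" of the Mandelbrot set at -0.75: for c = -0.75 + eps*i,
# the escape count multiplied by eps tends to pi as eps shrinks.
for eps in (0.1, 0.01, 0.001, 0.0001):
    n = escape_iterations(complex(-0.75, eps))
    print(f"eps={eps:g}  iterations={n}  product={n * eps:.5f}")
```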
A simple formula from the field of classical mechanics gives the approximate period of a simple pendulum of length , swinging with a small amplitude ( is the earth's gravitational acceleration): One of the key formulae of quantum mechanics is Heisenberg's uncertainty principle, which shows that the uncertainty in the measurement of a particle's position (Δ) and momentum (Δ) cannot both be arbitrarily small at the same time (where is the Planck constant): The fact that is approximately equal to 3 plays a role in the relatively long lifetime of orthopositronium. The inverse lifetime to lowest order in the fine-structure constant is where is the mass of the electron. is present in some structural engineering formulae, such as the buckling formula derived by Euler, which gives the maximum axial load that a long, slender column of length , modulus of elasticity , and area moment of inertia can carry without buckling: The field of fluid dynamics contains in Stokes' law, which approximates the frictional force exerted on small, spherical objects of radius , moving with velocity in a fluid with dynamic viscosity : In electromagnetics, the vacuum permeability constant μ0 appears in Maxwell's equations, which describe the properties of electric and magnetic fields and electromagnetic radiation. Before 20 May 2019, it was defined as exactly Memorizing digits Piphilology is the practice of memorizing large numbers of digits of , and world-records are kept by the Guinness World Records. The record for memorizing digits of , certified by Guinness World Records, is 70,000 digits, recited in India by Rajveer Meena in 9 hours and 27 minutes on 21 March 2015. In 2006, Akira Haraguchi, a retired Japanese engineer, claimed to have recited 100,000 decimal places, but the claim was not verified by Guinness World Records. One common technique is to memorize a story or poem in which the word lengths represent the digits of : The first word has three letters, the second word has one, the third has four, the fourth has one, the fifth has five, and so on. Such memorization aids are called mnemonics. An early example of a mnemonic for pi, originally devised by English scientist James Jeans, is "How I want a drink, alcoholic of course, after the heavy lectures involving quantum mechanics." When a poem is used, it is sometimes referred to as a piem. Poems for memorizing have been composed in several languages in addition to English. Record-setting memorizers typically do not rely on poems, but instead use methods such as remembering number patterns and the method of loci. A few authors have used the digits of to establish a new form of constrained writing, where the word lengths are required to represent the digits of . The Cadaeic Cadenza contains the first 3835 digits of in this manner, and the full-length book Not a Wake contains 10,000 words, each representing one digit of . In popular culture Perhaps because of the simplicity of its definition and its ubiquitous presence in formulae, has been represented in popular culture more than other mathematical constructs. In the Palais de la Découverte (a science museum in Paris) there is a circular room known as the pi room. On its wall are inscribed 707 digits of . The digits are large wooden characters attached to the dome-like ceiling. The digits were based on an 1873 calculation by English mathematician William Shanks, which included an error beginning at the 528th digit. The error was detected in 1946 and corrected in 1949. 
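The word-length technique described above can be written as a short routine: each word contributes one digit equal to its number of letters (with ten-letter words standing for the digit 0, a common convention that the sketch below assumes). Applied to the James Jeans mnemonic quoted earlier, it recovers the opening digits of π.

```python
import re

def piem_to_digits(piem):
    """Convert a pi mnemonic ("piem") to digits: each word's letter count
    stands for one digit (a 10-letter word is taken to stand for 0)."""
    words = re.findall(r"[A-Za-z]+", piem)
    return "".join(str(len(word) % 10) for word in words)

# James Jeans's mnemonic quoted above:
jeans = ("How I want a drink, alcoholic of course, "
         "after the heavy lectures involving quantum mechanics.")
print(piem_to_digits(jeans))  # 314159265358979
```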
In Carl Sagan's 1985 novel Contact it is suggested that the creator of the universe buried a message deep within the digits of . This part of the story was omitted from the film adaptation of the novel. The digits of have also been incorporated into the lyrics of the song "Pi" from the 2005 album Aerial by Kate Bush. In the 1967 Star Trek episode "Wolf in the Fold", an out-of-control computer is contained by being instructed to "Compute to the last digit the value of ". In the United States, Pi Day falls on 14 March (written 3/14 in the US style), and is popular among students. and its digital representation are often used by self-described "math geeks" for inside jokes among mathematically and technologically minded groups. A college cheer variously attributed to the Massachusetts Institute of Technology or the Rensselaer Polytechnic Institute includes "3.14159". Pi Day in 2015 was particularly significant because the date and time 3/14/15 9:26:53 reflected many more digits of pi. In parts of the world where dates are commonly noted in day/month/year format, 22 July represents "Pi Approximation Day", as 22/7 = 3.142857. Some have proposed replacing by , arguing that , as the number of radians in one turn or the ratio of a circle's circumference to its radius, is more natural than and simplifies many formulae. This use of has not made its way into mainstream mathematics, but since 2010 this has led to people celebrating Two Pi Day or Tau Day on June 28. In 1897, an amateur mathematician attempted to persuade the Indiana legislature to pass the Indiana Pi Bill, which described a method to square the circle and contained text that implied various incorrect values for , including 3.2. The bill is notorious as an attempt to establish a value of mathematical constant by legislative fiat. The bill was passed by the Indiana House of Representatives, but rejected by the Senate, and thus it did not become a law. In computer culture In contemporary internet culture, individuals and organizations frequently pay homage to the number . For instance, the computer scientist Donald Knuth let the version numbers of his program TeX approach . The versions are 3, 3.1, 3.14, and so forth. Many programming languages include for use in programs. Similarly, has been added to several programming languages as a predefined constant.
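As an example of such a predefined constant, Python's standard library exposes both π and τ (the latter equal to 2π and added in Python 3.6), so either convention can be used directly:

```python
import math

# pi and tau are predefined constants in Python's standard library.
print(math.pi)              # 3.141592653589793
print(math.tau)             # 6.283185307179586
print(math.tau / math.pi)   # 2.0

# A circle's circumference and area from its radius:
r = 1.5
print(2 * math.pi * r, math.pi * r ** 2)
```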
Mathematics
Counting and numbers
null
23604
https://en.wikipedia.org/wiki/Photography
Photography
Photography is the art, application, and practice of creating images by recording light, either electronically by means of an image sensor, or chemically by means of a light-sensitive material such as photographic film. It is employed in many fields of science, manufacturing (e.g., photolithography), and business, as well as its more direct uses for art, film and video production, recreational purposes, hobby, and mass communication. A person who captures or takes photographs is called a photographer. Typically, a lens is used to focus the light reflected or emitted from objects into a real image on the light-sensitive surface inside a camera during a timed exposure. With an electronic image sensor, this produces an electrical charge at each pixel, which is electronically processed and stored in a digital image file for subsequent display or processing. The result with photographic emulsion is an invisible latent image, which is later chemically "developed" into a visible image, either negative or positive, depending on the purpose of the photographic material and the method of processing. A negative image on film is traditionally used to photographically create a positive image on a paper base, known as a print, either by using an enlarger or by contact printing. Before the emergence of digital photography, photographs on film had to be developed to produce negatives or projectable slides, and negatives had to be printed as positive images, usually in enlarged form. This was usually done by photographic laboratories, but many amateurs did their own processing. Etymology The word "photography" was created from the Greek roots (), genitive of (), "light" and () "representation by means of lines" or "drawing", together meaning "drawing with light". Several people may have coined the same new term from these roots independently. Hércules Florence, a French painter and inventor living in Campinas, Brazil, used the French form of the word, , in private notes which a Brazilian historian believes were written in 1834. This claim is widely reported but is not yet largely recognized internationally. The first use of the word by Florence became widely known after the research of Boris Kossoy in 1980. The German newspaper of 25 February 1839 contained an article entitled , discussing several priority claims – especially Henry Fox Talbot's – regarding Daguerre's claim of invention. The article is the earliest known occurrence of the word in public print. It was signed "J.M.", believed to have been Berlin astronomer Johann von Maedler. The astronomer John Herschel is also credited with coining the word, independent of Talbot, in 1839. The inventors Nicéphore Niépce, Talbot, and Louis Daguerre seem not to have known or used the word "photography", but referred to their processes as "Heliography" (Niépce), "Photogenic Drawing"/"Talbotype"/"Calotype" (Talbot), and "Daguerreotype" (Daguerre). History Precursor technologies Photography is the result of combining several technical discoveries, relating to seeing an image and capturing the image. The discovery of the camera obscura ("dark chamber" in Latin) that provides an image of a scene dates back to ancient China. Greek mathematicians Aristotle and Euclid independently described a camera obscura in the 5th and 4th centuries BCE. In the 6th century CE, Byzantine mathematician Anthemius of Tralles used a type of camera obscura in his experiments. 
The Arab physicist Ibn al-Haytham (Alhazen) (965–1040) also invented a camera obscura as well as the first true pinhole camera. The invention of the camera has been traced back to the work of Ibn al-Haytham. While the effects of a single light passing through a pinhole had been described earlier, Ibn al-Haytham gave the first correct analysis of the camera obscura, including the first geometrical and quantitative descriptions of the phenomenon, and was the first to use a screen in a dark room so that an image from one side of a hole in the surface could be projected onto a screen on the other side. He also first understood the relationship between the focal point and the pinhole, and performed early experiments with afterimages, laying the foundations for the invention of photography in the 19th century. Leonardo da Vinci mentions natural camerae obscurae that are formed by dark caves on the edge of a sunlit valley. A hole in the cave wall will act as a pinhole camera and project a laterally reversed, upside down image on a piece of paper. Renaissance painters used the camera obscura which, in fact, gives the optical rendering in color that dominates Western Art. It is a box with a small hole in one side, which allows specific light rays to enter, projecting an inverted image onto a viewing screen or paper. The birth of photography was then concerned with inventing means to capture and keep the image produced by the camera obscura. Albertus Magnus (1193–1280) discovered silver nitrate, and Georg Fabricius (1516–1571) discovered silver chloride, and the techniques described in Ibn al-Haytham's Book of Optics are capable of producing primitive photographs using medieval materials. Daniele Barbaro described a diaphragm in 1566. Wilhelm Homberg described how light darkened some chemicals (photochemical effect) in 1694. Around 1717, Johann Heinrich Schulze used a light-sensitive slurry to capture images of cut-out letters on a bottle and on that basis many German sources and some international ones credit Schulze as the inventor of photography. The fiction book Giphantie, published in 1760, by French author Tiphaigne de la Roche, described what can be interpreted as photography. In June 1802, British inventor Thomas Wedgwood made the first known attempt to capture the image in a camera obscura by means of a light-sensitive substance. He used paper or white leather treated with silver nitrate. Although he succeeded in capturing the shadows of objects placed on the surface in direct sunlight, and even made shadow copies of paintings on glass, it was reported in 1802 that "the images formed by means of a camera obscura have been found too faint to produce, in any moderate time, an effect upon the nitrate of silver." The shadow images eventually darkened all over. Invention The first permanent photoetching was an image produced in 1822 by the French inventor Nicéphore Niépce, but it was destroyed in a later attempt to make prints from it. Niépce was successful again in 1825. In 1826 he made the View from the Window at Le Gras, the earliest surviving photograph from nature (i.e., of the image of a real-world scene, as formed in a camera obscura by a lens). Because Niépce's camera photographs required an extremely long exposure (at least eight hours and probably several days), he sought to greatly improve his bitumen process or replace it with one that was more practical. 
In partnership with Louis Daguerre, he worked out post-exposure processing methods that produced visually superior results and replaced the bitumen with a more light-sensitive resin, but hours of exposure in the camera were still required. With an eye to eventual commercial exploitation, the partners opted for total secrecy. Niépce died in 1833 and Daguerre then redirected the experiments toward the light-sensitive silver halides, which Niépce had abandoned many years earlier because of his inability to make the images he captured with them light-fast and permanent. Daguerre's efforts culminated in what would later be named the daguerreotype process. The essential elements—a silver-plated surface sensitized by iodine vapor, developed by mercury vapor, and "fixed" with hot saturated salt water—were in place in 1837. The required exposure time was measured in minutes instead of hours. Daguerre took the earliest confirmed photograph of a person in 1838 while capturing a view of a Paris street: unlike the other pedestrian and horse-drawn traffic on the busy boulevard, which appears deserted, one man having his boots polished stood sufficiently still throughout the several-minutes-long exposure to be visible. The existence of Daguerre's process was publicly announced, without details, on 7 January 1839. The news created an international sensation. France soon agreed to pay Daguerre a pension in exchange for the right to present his invention to the world as the gift of France, which occurred when complete working instructions were unveiled on 19 August 1839. In that same year, American photographer Robert Cornelius is credited with taking the earliest surviving photographic self-portrait. In Brazil, Hercules Florence had apparently started working out a silver-salt-based paper process in 1832, later naming it Photographie. Meanwhile, a British inventor, William Fox Talbot, had succeeded in making crude but reasonably light-fast silver images on paper as early as 1834 but had kept his work secret. After reading about Daguerre's invention in January 1839, Talbot published his hitherto secret method in a paper to the Royal Society and set about improving on it. At first, like other pre-daguerreotype processes, Talbot's paper-based photography typically required hours-long exposures in the camera, but in 1840 he created the calotype process, which used the chemical development of a latent image to greatly reduce the exposure needed and compete with the daguerreotype. In both its original and calotype forms, Talbot's process, unlike Daguerre's, created a translucent negative which could be used to print multiple positive copies; this is the basis of most modern chemical photography up to the present day, as daguerreotypes could only be replicated by rephotographing them with a camera. Talbot's famous tiny paper negative of the Oriel window in Lacock Abbey, one of a number of camera photographs he made in the summer of 1835, may be the oldest camera negative in existence. In March 1837, Steinheil, along with Franz von Kobell, used silver chloride and a cardboard camera to make pictures in negative of the Frauenkirche and other buildings in Munich, then taking another picture of the negative to get a positive, the actual black and white reproduction of a view on the object. The pictures produced were round with a diameter of 4 cm, the method was later named the "Steinheil method". 
In France, Hippolyte Bayard invented his own process for producing direct positive paper prints and claimed to have invented photography earlier than Daguerre or Talbot. British chemist John Herschel made many contributions to the new field. He invented the cyanotype process, later familiar as the "blueprint". He was the first to use the terms "photography", "negative" and "positive". He had discovered in 1819 that sodium thiosulphate was a solvent of silver halides, and in 1839 he informed Talbot (and, indirectly, Daguerre) that it could be used to "fix" silver-halide-based photographs and make them completely light-fast. He made the first glass negative in late 1839. In the March 1851 issue of The Chemist, Frederick Scott Archer published his wet plate collodion process. It became the most widely used photographic medium until the gelatin dry plate, introduced in the 1870s, eventually replaced it. There are three subsets to the collodion process; the Ambrotype (a positive image on glass), the Ferrotype or Tintype (a positive image on metal) and the glass negative, which was used to make positive prints on albumen or salted paper. Many advances in photographic glass plates and printing were made during the rest of the 19th century. In 1891, Gabriel Lippmann introduced a process for making natural-color photographs based on the optical phenomenon of the interference of light waves. His scientifically elegant and important but ultimately impractical invention earned him the Nobel Prize in Physics in 1908. Glass plates were the medium for most original camera photography from the late 1850s until the general introduction of flexible plastic films during the 1890s. Although the convenience of the film greatly popularized amateur photography, early films were somewhat more expensive and of markedly lower optical quality than their glass plate equivalents, and until the late 1910s they were not available in the large formats preferred by most professional photographers, so the new medium did not immediately or completely replace the old. Because of the superior dimensional stability of glass, the use of plates for some scientific applications, such as astrophotography, continued into the 1990s, and in the niche field of laser holography, it has persisted into the 21st century. Film Hurter and Driffield began pioneering work on the light sensitivity of photographic emulsions in 1876. Their work enabled the first quantitative measure of film speed to be devised. The first flexible photographic roll film was marketed by George Eastman, founder of Kodak in 1885, but this original "film" was actually a coating on a paper base. As part of the processing, the image-bearing layer was stripped from the paper and transferred to a hardened gelatin support. The first transparent plastic roll film followed in 1889. It was made from highly flammable nitrocellulose known as nitrate film. Although cellulose acetate or "safety film" had been introduced by Kodak in 1908, at first it found only a few special applications as an alternative to the hazardous nitrate film, which had the advantages of being considerably tougher, slightly more transparent, and cheaper. The changeover was not completed for X-ray films until 1933, and although safety film was always used for 16 mm and 8 mm home movies, nitrate film remained standard for theatrical 35 mm motion pictures until it was finally discontinued in 1951. 
Films remained the dominant form of photography until the early 21st century when advances in digital photography drew consumers to digital formats. Although modern photography is dominated by digital users, film continues to be used by enthusiasts and professional photographers. The distinctive "look" of film based photographs compared to digital images is likely due to a combination of factors, including (1) differences in spectral and tonal sensitivity (S-shaped density-to-exposure (H&D curve) with film vs. linear response curve for digital CCD sensors), (2) resolution, and (3) continuity of tone. Black-and-white Originally, all photography was monochrome, or black-and-white. Even after color film was readily available, black-and-white photography continued to dominate for decades, due to its lower cost, chemical stability, and its "classic" photographic look. The tones and contrast between light and dark areas define black-and-white photography. Monochromatic pictures are not necessarily composed of pure blacks, whites, and intermediate shades of gray but can involve shades of one particular hue depending on the process. The cyanotype process, for example, produces an image composed of blue tones. The albumen print process, publicly revealed in 1847, produces brownish tones. Many photographers continue to produce some monochrome images, sometimes because of the established archival permanence of well-processed silver-halide-based materials. Some full-color digital images are processed using a variety of techniques to create black-and-white results, and some manufacturers produce digital cameras that exclusively shoot monochrome. Monochrome printing or electronic display can be used to salvage certain photographs taken in color which are unsatisfactory in their original form; sometimes when presented as black-and-white or single-color-toned images they are found to be more effective. Although color photography has long predominated, monochrome images are still produced, mostly for artistic reasons. Almost all digital cameras have an option to shoot in monochrome, and almost all image editing software can combine or selectively discard RGB color channels to produce a monochrome image from one shot in color. Color Color photography was explored beginning in the 1840s. Early experiments in color required extremely long exposures (hours or days for camera images) and could not "fix" the photograph to prevent the color from quickly fading when exposed to white light. The first permanent color photograph was taken in 1861 using the three-color-separation principle first published by Scottish physicist James Clerk Maxwell in 1855. The foundation of virtually all practical color processes, Maxwell's idea was to take three separate black-and-white photographs through red, green and blue filters. This provides the photographer with the three basic channels required to recreate a color image. Transparent prints of the images could be projected through similar color filters and superimposed on the projection screen, an additive method of color reproduction. A color print on paper could be produced by superimposing carbon prints of the three images made in their complementary colors, a subtractive method of color reproduction pioneered by Louis Ducos du Hauron in the late 1860s. 
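The additive three-color method can be sketched in code: three grayscale separation images, one per filter, are recombined so that red and green light together appear yellow, all three together appear white, and so on. The tiny 2×2 arrays below are placeholder values chosen purely for illustration.

```python
def additive_rgb(red, green, blue):
    """Combine three grayscale 'separation' images (values 0-255) into one
    RGB image, mimicking the additive projection of the three-color method."""
    return [
        [(r, g, b) for r, g, b in zip(row_r, row_g, row_b)]
        for row_r, row_g, row_b in zip(red, green, blue)
    ]

# A tiny 2x2 example: red + green with no blue appears yellow, all three white.
red   = [[255, 0], [255, 255]]
green = [[255, 0], [0, 255]]
blue  = [[0, 255], [0, 255]]
print(additive_rgb(red, green, blue))
# [[(255, 255, 0), (0, 0, 255)], [(255, 0, 0), (255, 255, 255)]]
```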
Russian photographer Sergei Mikhailovich Prokudin-Gorskii made extensive use of this color separation technique, employing a special camera which successively exposed the three color-filtered images on different parts of an oblong plate. Because his exposures were not simultaneous, unsteady subjects exhibited color "fringes" or, if rapidly moving through the scene, appeared as brightly colored ghosts in the resulting projected or printed images. Implementation of color photography was hindered by the limited sensitivity of early photographic materials, which were mostly sensitive to blue, only slightly sensitive to green, and virtually insensitive to red. The discovery of dye sensitization by photochemist Hermann Vogel in 1873 suddenly made it possible to add sensitivity to green, yellow and even red. Improved color sensitizers and ongoing improvements in the overall sensitivity of emulsions steadily reduced the once-prohibitive long exposure times required for color, bringing it ever closer to commercial viability. Autochrome, the first commercially successful color process, was introduced by the Lumière brothers in 1907. Autochrome plates incorporated a mosaic color filter layer made of dyed grains of potato starch, which allowed the three color components to be recorded as adjacent microscopic image fragments. After an Autochrome plate was reversal processed to produce a positive transparency, the starch grains served to illuminate each fragment with the correct color and the tiny colored points blended together in the eye, synthesizing the color of the subject by the additive method. Autochrome plates were one of several varieties of additive color screen plates and films marketed between the 1890s and the 1950s. Kodachrome, the first modern "integral tripack" (or "monopack") color film, was introduced by Kodak in 1935. It captured the three color components in a multi-layer emulsion. One layer was sensitized to record the red-dominated part of the spectrum, another layer recorded only the green part and a third recorded only the blue. Without special film processing, the result would simply be three superimposed black-and-white images, but complementary cyan, magenta, and yellow dye images were created in those layers by adding color couplers during a complex processing procedure. Agfa's similarly structured Agfacolor Neu was introduced in 1936. Unlike Kodachrome, the color couplers in Agfacolor Neu were incorporated into the emulsion layers during manufacture, which greatly simplified the processing. Currently, available color films still employ a multi-layer emulsion and the same principles, most closely resembling Agfa's product. Instant color film, used in a special camera which yielded a unique finished color print only a minute or two after the exposure, was introduced by Polaroid in 1963. Color photography may form images as positive transparencies, which can be used in a slide projector, or as color negatives intended for use in creating positive color enlargements on specially coated paper. The latter is now the most common form of film (non-digital) color photography owing to the introduction of automated photo printing equipment. After a transition period centered around 1995–2005, color film was relegated to a niche market by inexpensive multi-megapixel digital cameras. Film continues to be the preference of some photographers because of its distinctive "look". 
Digital In 1981, Sony unveiled the first consumer camera to use a charge-coupled device for imaging, eliminating the need for film: the Sony Mavica. While the Mavica saved images to disk, the images were displayed on television, and the camera was not fully digital. The first digital camera to both record and save images in a digital format was the Fujix DS-1P created by Fujifilm in 1988. In 1991, Kodak unveiled the DCS 100, the first commercially available digital single-lens reflex camera. Although its high cost precluded uses other than photojournalism and professional photography, commercial digital photography was born. Digital imaging uses an electronic image sensor to record the image as a set of electronic data rather than as chemical changes on film. An important difference between digital and chemical photography is that chemical photography resists photo manipulation because it involves film and photographic paper, while digital imaging is a highly manipulative medium. This difference allows for a degree of image post-processing that is comparatively difficult in film-based photography and permits different communicative potentials and applications. Digital photography dominates the 21st century. More than 99% of photographs taken around the world are through digital cameras, increasingly through smartphones. Techniques A large variety of photographic techniques and media are used in the process of capturing images for photography. These include the camera; dualphotography; full-spectrum, ultraviolet and infrared media; light field photography; and other imaging techniques. Cameras The camera is the image-forming device, and a photographic plate, photographic film or a silicon electronic image sensor is the capture medium. The respective recording medium can be the plate or film itself, or a digital magnetic or electronic memory. Photographers control the camera and lens to "expose" the light recording material to the required amount of light to form a "latent image" (on plate or film) or RAW file (in digital cameras) which, after appropriate processing, is converted to a usable image. Digital cameras use an electronic image sensor based on light-sensitive electronics such as charge-coupled device (CCD) or complementary metal–oxide–semiconductor (CMOS) technology. The resulting digital image is stored electronically, but can be reproduced on a paper. The camera (or 'camera obscura') is a dark room or chamber from which, as far as possible, all light is excluded except the light that forms the image. It was discovered and used in the 16th century by painters. The subject being photographed, however, must be illuminated. Cameras can range from small to very large, a whole room that is kept dark while the object to be photographed is in another room where it is properly illuminated. This was common for reproduction photography of flat copy when large film negatives were used (see Process camera). As soon as photographic materials became "fast" (sensitive) enough for taking candid or surreptitious pictures, small "detective" cameras were made, some actually disguised as a book or handbag or pocket watch (the Ticka camera) or even worn hidden behind an Ascot necktie with a tie pin that was really the lens. The movie camera is a type of photographic camera that takes a rapid sequence of photographs on recording medium. In contrast to a still camera, which captures a single snapshot at a time, the movie camera takes a series of images, each called a "frame". 
This is accomplished through an intermittent mechanism. The frames are later played back in a movie projector at a specific speed, called the "frame rate" (number of frames per second). While viewing, a person's eyes and brain merge the separate pictures to create the illusion of motion. Stereoscopic Photographs, both monochrome and color, can be captured and displayed through two side-by-side images that emulate human stereoscopic vision. Stereoscopic photography was the first that captured figures in motion. While known colloquially as "3-D" photography, the more accurate term is stereoscopy. Such cameras have long been realized by using film and more recently in digital electronic methods (including cell phone cameras). Dualphotography Dualphotography consists of photographing a scene from both sides of a photographic device at once (e.g. camera for back-to-back dualphotography, or two networked cameras for portal-plane dualphotography). The dualphoto apparatus can be used to simultaneously capture both the subject and the photographer, or both sides of a geographical place at once, thus adding a supplementary narrative layer to that of a single image. Full-spectrum, ultraviolet and infrared Ultraviolet and infrared films have been available for many decades and employed in a variety of photographic avenues since the 1960s. New technological trends in digital photography have opened a new direction in full spectrum photography, where careful filtering choices across the ultraviolet, visible and infrared lead to new artistic visions. Modified digital cameras can detect some ultraviolet, all of the visible and much of the near infrared spectrum, as most digital imaging sensors are sensitive from about 350 nm to 1000 nm. An off-the-shelf digital camera contains an infrared hot mirror filter that blocks most of the infrared and a bit of the ultraviolet that would otherwise be detected by the sensor, narrowing the accepted range from about 400 nm to 700 nm. Replacing a hot mirror or infrared blocking filter with an infrared pass or a wide spectrally transmitting filter allows the camera to detect the wider spectrum light at greater sensitivity. Without the hot-mirror, the red, green and blue (or cyan, yellow and magenta) colored micro-filters placed over the sensor elements pass varying amounts of ultraviolet (blue window) and infrared (primarily red and somewhat lesser the green and blue micro-filters). Uses of full spectrum photography are for fine art photography, geology, forensics and law enforcement. Layering Layering is a photographic composition technique that manipulates the foreground, subject or middle-ground, and background layers in a way that they all work together to tell a story through the image. Layers may be incorporated by altering the focal length, distorting the perspective by positioning the camera in a certain spot. People, movement, light and a variety of objects can be used in layering. Light field Digital methods of image capture and display processing have enabled the new technology of "light field photography" (also known as synthetic aperture photography). This process allows focusing at various depths of field to be selected after the photograph has been captured. As explained by Michael Faraday in 1846, the "light field" is understood as 5-dimensional, with each point in 3-D space having attributes of two more angles that define the direction of each ray passing through that point. 
These additional vector attributes can be captured optically through the use of microlenses at each pixel point within the 2-dimensional image sensor. Every pixel of the final image is actually a selection from each sub-array located under each microlens, as identified by a post-image capture focus algorithm. Other Besides the camera, other methods of forming images with light are available. For instance, a photocopy or xerography machine forms permanent images but uses the transfer of static electrical charges rather than photographic medium, hence the term electrophotography. Photograms are images produced by the shadows of objects cast on the photographic paper, without the use of a camera. Objects can also be placed directly on the glass of an image scanner to produce digital pictures. Types Amateur Amateur photographers take photos for personal use, as a hobby or out of casual interest, rather than as a business or job. The quality of amateur work can be comparable to that of many professionals. Amateurs can fill a gap in subjects or topics that might not otherwise be photographed if they are not commercially useful or salable. Amateur photography grew during the late 19th century due to the popularization of the hand-held camera. Twenty-first century social media and near-ubiquitous camera phones have made photographic and video recording pervasive in everyday life. In the mid-2010s smartphone cameras added numerous automatic assistance features like color management, autofocus face detection and image stabilization that significantly decreased skill and effort needed to take high quality images. Commercial Commercial photography is probably best defined as any photography for which the photographer is paid for images rather than works of art. In this light, money could be paid for the subject of the photograph or the photograph itself. The commercial photographic world could include: Advertising photography: There are photographs made to illustrate and usually sell a service or product. These images, such as packshots, are generally done with an advertising agency, design firm or with an in-house corporate design team. Architectural photography focuses on capturing photographs of buildings and architectural structures that are aesthetically pleasing and accurate in terms of representations of their subjects. Event photography focuses on photographing guests and occurrences at mostly social events. Fashion and glamour photography usually incorporates models and is a form of advertising photography. Fashion photography, like the work featured in Harper's Bazaar, emphasizes clothes and other products; glamour emphasizes the model and body form while glamour photography is popular in advertising and men's magazines. Models in glamour photography sometimes work nude. 360 product photography displays a series of photos to give the impression of a rotating object. This technique is commonly used by ecommerce websites to help shoppers visualise products. Concert photography focuses on capturing candid images of both the artist or band as well as the atmosphere (including the crowd). Many of these photographers work freelance and are contracted through an artist or their management to cover a specific show. Concert photographs are often used to promote the artist or band in addition to the venue. Crime scene photography consists of photographing scenes of crime such as robberies and murders. A black and white camera or an infrared camera may be used to capture specific details. 
Still life photography usually depicts inanimate subject matter, typically commonplace objects which may be either natural or man-made. Still life is a broader category for food and some natural photography and can be used for advertising purposes. Real estate photography focuses on the production of photographs showcasing a property that is for sale; such photographs require the use of wide-angle lenses and extensive knowledge of high-dynamic-range imaging. Food photography can be used for editorial, packaging or advertising use. Food photography is similar to still life photography but requires some special skills. Photojournalism can be considered a subset of editorial photography. Photographs made in this context are accepted as a documentation of a news story. Paparazzi is a form of photojournalism in which the photographer captures candid images of athletes, celebrities, politicians, and other prominent people. Portrait and wedding photography: photographs made and sold directly to the end user of the images. Landscape photography typically captures the presence of nature but can also focus on human-made features or disturbances of landscapes. Wildlife photography demonstrates the life of wild animals. Art During the 20th century, both fine art photography and documentary photography became accepted by the English-speaking art world and the gallery system. In the United States, a handful of photographers, including Alfred Stieglitz, Edward Steichen, John Szarkowski, F. Holland Day, and Edward Weston, spent their lives advocating for photography as a fine art. At first, fine art photographers tried to imitate painting styles. This movement is called Pictorialism, often using soft focus for a dreamy, 'romantic' look. In reaction to that, Weston, Ansel Adams, and others formed the Group f/64 to advocate 'straight photography', the photograph as a (sharply focused) thing in itself and not an imitation of something else. The aesthetics of photography is a matter that continues to be discussed regularly, especially in artistic circles. Many artists argued that photography was the mechanical reproduction of an image. If photography is authentically art, then photography in the context of art would need redefinition, such as determining what component of a photograph makes it beautiful to the viewer. The controversy began with the earliest images "written with light"; Nicéphore Niépce, Louis Daguerre, and others among the very earliest photographers were met with acclaim, but some questioned if their work met the definitions and purposes of art. Clive Bell in his classic essay Art states that only "significant form" can distinguish art from what is not art. On 7 February 2007, Sotheby's London sold the 2001 photograph 99 Cent II Diptychon for an unprecedented $3,346,456 to an anonymous bidder, making it the most expensive photograph sold at the time. Conceptual photography turns a concept or idea into a photograph. Even though the objects depicted in the photographs are real, the subject is strictly abstract. In parallel to this development, the then largely separate interface between painting and photography was closed in the second half of the 20th century with the chemigram of Pierre Cordier and the chemogram of Josef H. Neumann. In 1974 the chemograms by Josef H. 
Neumann ended the separation of the painterly background and the photographic layer by showing the picture elements in a symbiosis that had not existed before: each work is an unmistakable unique specimen in which painterly and genuinely photographic perspectives, produced with lenses within a single photographic layer, are united in colors and forms. The Neumann chemogram of the 1970s thus differs from the earlier cameraless chemigrams of Pierre Cordier and from the photograms of Man Ray or László Moholy-Nagy of the previous decades. Such cameraless images appeared almost simultaneously with the invention of photography itself: they were made in its early stages by artists such as Hippolyte Bayard, Thomas Wedgwood and William Henry Fox Talbot, later by Man Ray and László Moholy-Nagy in the 1920s, and by the painters Edmund Kesting and Christian Schad in the 1930s, by draping objects directly onto appropriately sensitized photo paper and exposing them to a light source without a camera. Photojournalism Photojournalism is a particular form of photography (the collecting, editing, and presenting of news material for publication or broadcast) that employs images in order to tell a news story. It is now usually understood to refer only to still images, but in some cases the term also refers to video used in broadcast journalism. Photojournalism is distinguished from other close branches of photography (e.g., documentary photography, social documentary photography, street photography or celebrity photography) by complying with a rigid ethical framework which demands that the work be both honest and impartial whilst telling the story in strictly journalistic terms. Photojournalists create pictures that contribute to the news media, and help communities connect with one another. Photojournalists must be well informed and knowledgeable about events happening right outside their door. They deliver news in a creative format that is not only informative, but also entertaining, including sports photography. Science and forensics The camera has a long and distinguished history as a means of recording scientific phenomena from the first use by Daguerre and Fox-Talbot, such as astronomical events (eclipses for example), small creatures and plants when the camera was attached to the eyepiece of microscopes (in photomicroscopy) and for macro photography of larger specimens. The camera also proved useful in recording crime scenes and the scenes of accidents, such as the Wootton bridge collapse in 1861. The methods used in analysing photographs for use in legal cases are collectively known as forensic photography. Crime scene photos are usually taken from three vantage points: overview, mid-range, and close-up. In 1845 Francis Ronalds, the Honorary Director of the Kew Observatory, invented the first successful camera to make continuous recordings of meteorological and geomagnetic parameters. Different machines produced 12- or 24-hour photographic traces of the minute-by-minute variations of atmospheric pressure, temperature, humidity, atmospheric electricity, and the three components of geomagnetic forces. The cameras were supplied to numerous observatories around the world and some remained in use until well into the 20th century. A little later, Charles Brooke developed similar instruments for the Greenwich Observatory. Science regularly uses image technology that has derived from the design of the pinhole camera to avoid distortions that can be caused by lenses. 
X-ray machines are similar in design to pinhole cameras, with high-grade filters and laser radiation. Photography has become universal in recording events and data in science and engineering, and at crime scenes or accident scenes. The method has been much extended by using other wavelengths, such as infrared photography and ultraviolet photography, as well as spectroscopy. Those methods were first used in the Victorian era and improved much further since that time. The first photographed atom was discovered in 2012 by physicists at Griffith University, Australia. They used an electric field to trap an "Ion" of the element, Ytterbium. The image was recorded on a CCD, an electronic photographic film. Wildlife photography Wildlife photography involves capturing images of various forms of wildlife. Unlike other forms of photography such as product or food photography, successful wildlife photography requires a photographer to choose the right place and right time when specific wildlife are present and active. It often requires great patience and considerable skill and command of the right photographic equipment. Social and cultural implications There are many ongoing questions about different aspects of photography. In her On Photography (1977), Susan Sontag dismisses the objectivity of photography. This is a highly debated subject within the photographic community. Sontag argues, "To photograph is to appropriate the thing photographed. It means putting one's self into a certain relation to the world that feels like knowledge, and therefore like power." Photographers decide what to take a photo of, what elements to exclude and what angle to frame the photo, and these factors may reflect a particular socio-historical context. Along these lines, it can be argued that photography is a subjective form of representation. Modern photography has raised a number of concerns on its effect on society. In Alfred Hitchcock's Rear Window (1954), the camera is presented as promoting voyeurism. 'Although the camera is an observation station, the act of photographing is more than passive observing'. The camera doesn't rape or even possess, though it may presume, intrude, trespass, distort, exploit, and, at the farthest reach of metaphor, assassinate – all activities that, unlike the sexual push and shove, can be conducted from a distance, and with some detachment. Digital imaging has raised ethical concerns because of the ease of manipulating digital photographs in post-processing. Many photojournalists have declared they will not crop their pictures or are forbidden from combining elements of multiple photos to make "photomontages", passing them as "real" photographs. Today's technology has made image editing relatively simple for even the novice photographer. However, recent changes of in-camera processing allow digital fingerprinting of photos to detect tampering for purposes of forensic photography. Photography is one of the new media forms that changes perception and changes the structure of society. Further unease has been caused around cameras in regards to desensitization. Fears that disturbing or explicit images are widely accessible to children and society at large have been raised. Particularly, photos of war and pornography are causing a stir. Sontag is concerned that "to photograph is to turn people into objects that can be symbolically possessed". Desensitization discussion goes hand in hand with debates about censored images. 
Sontag writes of her concern that the ability to censor pictures means the photographer has the ability to construct reality. One of the practices through which photography constitutes society is tourism. Tourism and photography combine to create a "tourist gaze" in which local inhabitants are positioned and defined by the camera lens. However, it has also been argued that there exists a "reverse gaze" through which indigenous photographees can position the tourist photographer as a shallow consumer of images. Law Photography is both restricted and protected by the law in many jurisdictions. Protection of photographs is typically achieved through the granting of copyright or moral rights to the photographer. In the United States, photography is protected as a First Amendment right and anyone is free to photograph anything seen in public spaces as long as it is in plain view. In the UK, a recent law (Counter-Terrorism Act 2008) increases the power of the police to prevent people, even press photographers, from taking pictures in public places. In South Africa, any person may photograph any other person, without their permission, in public spaces and the only specific restriction placed on what may not be photographed by government is related to anything classed as national security. Each country has different laws.
Technology
Visual arts
null
23617
https://en.wikipedia.org/wiki/Pump
Pump
A pump is a device that moves fluids (liquids or gases), or sometimes slurries, by mechanical action, typically converted from electrical energy into hydraulic or pneumatic energy. Mechanical pumps serve in a wide range of applications such as pumping water from wells, aquarium filtering, pond filtering and aeration, in the car industry for water-cooling and fuel injection, in the energy industry for pumping oil and natural gas or for operating cooling towers and other components of heating, ventilation and air conditioning systems. In the medical industry, pumps are used for biochemical processes in developing and manufacturing medicine, and as artificial replacements for body parts, in particular the artificial heart and penile prosthesis. When a pump contains two or more pump mechanisms with fluid being directed to flow through them in series, it is called a multi-stage pump. Terms such as two-stage or double-stage may be used to specifically describe the number of stages. A pump that does not fit this description is simply a single-stage pump in contrast. In biology, many different types of chemical and biomechanical pumps have evolved; biomimicry is sometimes used in developing new types of mechanical pumps. Types Mechanical pumps may be submerged in the fluid they are pumping or be placed external to the fluid. Pumps can be classified by their method of displacement into electromagnetic pumps, positive-displacement pumps, impulse pumps, velocity pumps, gravity pumps, steam pumps and valveless pumps. There are three basic types of pumps: positive-displacement, centrifugal and axial-flow pumps. In centrifugal pumps the direction of flow of the fluid changes by ninety degrees as it flows over an impeller, while in axial flow pumps the direction of flow is unchanged. Electromagnetic pump Positive-displacement pumps A positive-displacement pump makes a fluid move by trapping a fixed amount and forcing (displacing) that trapped volume into the discharge pipe. Some positive-displacement pumps use an expanding cavity on the suction side and a decreasing cavity on the discharge side. Liquid flows into the pump as the cavity on the suction side expands and the liquid flows out of the discharge as the cavity collapses. The volume is constant through each cycle of operation. Positive-displacement pump behavior and safety Positive-displacement pumps, unlike centrifugal, can theoretically produce the same flow at a given rotational speed no matter what the discharge pressure. Thus, positive-displacement pumps are constant flow machines. However, a slight increase in internal leakage as the pressure increases prevents a truly constant flow rate. A positive-displacement pump must not operate against a closed valve on the discharge side of the pump, because it has no shutoff head like centrifugal pumps. A positive-displacement pump operating against a closed discharge valve continues to produce flow and the pressure in the discharge line increases until the line bursts, the pump is severely damaged, or both. A relief or safety valve on the discharge side of the positive-displacement pump is therefore necessary. The relief valve can be internal or external. The pump manufacturer normally has the option to supply internal relief or safety valves. The internal valve is usually used only as a safety precaution. An external relief valve in the discharge line, with a return line back to the suction line or supply tank, provides increased safety. 
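The near-constant flow behavior described above, and the reason a relief valve is needed, can be illustrated with a toy model in which delivered flow is displacement times speed minus a small, pressure-proportional slip term representing internal leakage. All numbers in the sketch below (displacement, speed, slip coefficient) are illustrative assumptions rather than data for any real pump.

```python
def pd_pump_flow(displacement_l_per_rev, speed_rpm, pressure_bar,
                 slip_l_per_min_per_bar=0.1):
    """Idealized positive-displacement pump delivery in litres per minute.

    Flow is essentially displacement * speed; a small slip term that grows
    with discharge pressure models internal leakage.  All values are
    illustrative assumptions, not data for any particular pump.
    """
    theoretical = displacement_l_per_rev * speed_rpm
    slip = slip_l_per_min_per_bar * pressure_bar
    return max(theoretical - slip, 0.0)

for pressure in (1, 5, 10, 20, 50):   # discharge pressure in bar
    print(pressure, "bar ->", round(pd_pump_flow(0.05, 1450, pressure), 1), "L/min")
# Delivery barely changes with pressure; against a closed valve the pressure
# simply keeps rising, which is why a relief or safety valve is required.
```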
Positive-displacement types A positive-displacement pump can be further classified according to the mechanism used to move the fluid: Rotary-type positive displacement: internal and external gear pump, screw pump, lobe pump, shuttle block, flexible vane and sliding vane, circumferential piston, flexible impeller, helical twisted roots (e.g. the Wendelkolben pump) and liquid-ring pumps Reciprocating-type positive displacement: piston pumps, plunger pumps and diaphragm pumps Linear-type positive displacement: rope pumps and chain pumps Rotary positive-displacement pumps These pumps move fluid using a rotating mechanism that creates a vacuum that captures and draws in the liquid. Advantages: Rotary pumps are very efficient because they can handle highly viscous fluids with higher flow rates as viscosity increases. Drawbacks: The nature of the pump requires very close clearances between the rotating pump and the outer edge, making it rotate at a slow, steady speed. If rotary pumps are operated at high speeds, the fluids cause erosion, which eventually causes enlarged clearances that liquid can pass through, which reduces efficiency. Rotary positive-displacement pumps fall into five main types: Gear pumps – a simple type of rotary pump where the liquid is pushed around a pair of gears. Screw pumps – the shape of the internals of this pump is usually two screws turning against each other to pump the liquid Rotary vane pumps Hollow disc pumps (also known as eccentric disc pumps or hollow rotary disc pumps), similar to scroll compressors, these have an eccentric cylindrical rotor encased in a circular housing. As the rotor orbits, it traps fluid between the rotor and the casing, drawing the fluid through the pump. It is used for highly viscous fluids like petroleum-derived products, and it can also support high pressures of up to 290 psi. Peristaltic pumps have rollers which pinch a section of flexible tubing, forcing the liquid ahead as the rollers advance. Because they are very easy to keep clean, these are popular for dispensing food, medicine, and concrete. Reciprocating positive-displacement pumps Reciprocating pumps move the fluid using one or more oscillating pistons, plungers, or membranes (diaphragms), while valves restrict fluid motion to the desired direction. In order for suction to take place, the pump must first pull the plunger in an outward motion to decrease pressure in the chamber. Once the plunger pushes back, it will increase the chamber pressure and the inward pressure of the plunger will then open the discharge valve and release the fluid into the delivery pipe at constant flow rate and increased pressure. Pumps in this category range from simplex, with one cylinder, to in some cases quad (four) cylinders, or more. Many reciprocating-type pumps are duplex (two) or triplex (three) cylinder. They can be either single-acting with suction during one direction of piston motion and discharge on the other, or double-acting with suction and discharge in both directions. The pumps can be powered manually, by air or steam, or by a belt driven by an engine. This type of pump was used extensively in the 19th century—in the early days of steam propulsion—as boiler feed water pumps. Now reciprocating pumps typically pump highly viscous fluids like concrete and heavy oils, and serve in special applications that demand low flow rates against high resistance. Reciprocating hand pumps were widely used to pump water from wells. 
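For the reciprocating pumps just described, the delivered flow follows directly from the swept volume of the cylinders and the crank speed. The sketch below uses assumed (not article-sourced) dimensions and a single-acting triplex arrangement to show the usual estimate.

```python
import math

# Rough theoretical delivery of a single-acting reciprocating (plunger/piston) pump.
# Bore, stroke, speed and volumetric efficiency are assumed illustrative values.

bore = 0.05            # m, plunger diameter
stroke = 0.08          # m
cylinders = 3          # triplex
speed_rpm = 300        # crankshaft revolutions per minute
vol_efficiency = 0.95  # accounts for valve leakage and incomplete filling

swept_volume = math.pi / 4 * bore**2 * stroke                        # m^3 per cylinder per rev
flow = swept_volume * cylinders * speed_rpm / 60 * vol_efficiency    # m^3/s

print(f"Swept volume per cylinder: {swept_volume * 1e6:.1f} cm^3")
print(f"Theoretical delivery: {flow * 1000:.2f} L/s")
```

A double-acting pump of the same size would deliver roughly twice this amount, since it discharges on both directions of piston travel.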
Common bicycle pumps and foot pumps for inflation use reciprocating action. These positive-displacement pumps have an expanding cavity on the suction side and a decreasing cavity on the discharge side. Liquid flows into the pump as the cavity on the suction side expands and the liquid flows out of the discharge as the cavity collapses. The volume is constant for each cycle of operation, and the pump's volumetric efficiency is maintained through routine maintenance and inspection of its valves. Typical reciprocating pumps are: Plunger pump – a reciprocating plunger pushes the fluid through one or two open valves, closed by suction on the way back. Diaphragm pump – similar to plunger pumps, where the plunger pressurizes hydraulic oil which is used to flex a diaphragm in the pumping cylinder. Diaphragm valves are used to pump hazardous and toxic fluids. Piston displacement pumps – usually simple devices for pumping small amounts of liquid or gel manually. The common hand soap dispenser is such a pump. Radial piston pump – a form of hydraulic pump where pistons extend in a radial direction. Vibratory pump or vibration pump – a particularly low-cost form of plunger pump, popular in low-cost espresso machines. The only moving part is a spring-loaded piston, the armature of a solenoid. Driven by half-wave rectified alternating current, the piston is forced forward while energized, and is retracted by the spring during the other half cycle. Due to their inefficiency, vibratory pumps typically cannot be operated for more than one minute without overheating, so are limited to intermittent duty. Various positive-displacement pumps The positive-displacement principle applies in these pumps: rotary lobe pumps, progressing cavity pumps, rotary gear pumps, piston pumps, diaphragm pumps, screw pumps, gear pumps, hydraulic pumps, rotary vane pumps, peristaltic pumps, rope pumps and flexible impeller pumps. Gear pump This is the simplest form of rotary positive-displacement pump. It consists of two meshed gears that rotate in a closely fitted casing. The tooth spaces trap fluid and force it around the outer periphery. The fluid does not travel back on the meshed part, because the teeth mesh closely in the center. Gear pumps see wide use in car engine oil pumps and in various hydraulic power packs. Screw pump A screw pump is a more complicated type of rotary pump that uses two or three screws with opposing thread, e.g., one screw turns clockwise and the other counterclockwise. The screws are mounted on parallel shafts that often have gears that mesh so the shafts turn together and everything stays in place. In some cases the driven screw drives the secondary screw, without gears, often using the fluid to limit abrasion. The screws turn on the shafts and drive fluid through the pump. As with other forms of rotary pumps, the clearance between moving parts and the pump's casing is minimal. Progressing cavity pump Widely used for pumping difficult materials, such as sewage sludge contaminated with large particles, a progressing cavity pump consists of a helical rotor, about ten times as long as its width. This can be visualized as a central core of diameter x with, typically, a curved spiral of thickness half x wound around it, though in reality it is manufactured in a single casting. This shaft fits inside a heavy-duty rubber sleeve, of wall thickness also typically x. As the shaft rotates, the rotor gradually forces fluid up the rubber sleeve. Such pumps can develop very high pressure at low volumes.
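For the progressing cavity pump described above, a commonly cited geometric approximation relates delivery to rotor eccentricity, rotor diameter and stator pitch. The sketch below applies it with assumed dimensions; both the formula's applicability to any particular pump and the numbers are illustrative assumptions rather than article data.

```python
# Rough delivery estimate for a single-lobe progressing cavity pump, using the
# common geometric approximation Q ~= 4 * e * d * Ps * n, where e is the rotor
# eccentricity, d the rotor diameter and Ps the stator pitch (the cavity advances
# one pitch per rotor revolution). Dimensions are illustrative assumptions.

eccentricity = 0.004    # m
rotor_diameter = 0.040  # m
stator_pitch = 0.120    # m
speed_rpm = 300

displacement_per_rev = 4 * eccentricity * rotor_diameter * stator_pitch  # m^3
flow = displacement_per_rev * speed_rpm / 60                             # m^3/s

print(f"Displacement per revolution: {displacement_per_rev * 1e6:.1f} cm^3")
print(f"Theoretical flow: {flow * 3600:.2f} m^3/h")
```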
Roots-type pump Named after the Roots brothers who invented it, this lobe pump displaces the fluid trapped between two long helical rotors, each fitted into the other at 90°, rotating inside a triangular-shaped sealing-line configuration, both at the point of suction and at the point of discharge. This design produces a continuous flow with equal volume and no vortex. It can work at low pulsation rates, and offers gentle performance that some applications require. Applications include: High capacity industrial air compressors. Roots superchargers on internal combustion engines. A brand of civil defense siren, the Federal Signal Corporation's Thunderbolt. Peristaltic pump A peristaltic pump is a type of positive-displacement pump. It contains fluid within a flexible tube fitted inside a circular pump casing (though linear peristaltic pumps have been made). A number of rollers, shoes, or wipers attached to a rotor compress the flexible tube. As the rotor turns, the part of the tube under compression closes (or occludes), forcing the fluid through the tube. Additionally, when the tube opens to its natural state after the passing of the cam it draws (restitution) fluid into the pump. This process is called peristalsis and is used in many biological systems such as the gastrointestinal tract. Plunger pumps Plunger pumps are reciprocating positive-displacement pumps. These consist of a cylinder with a reciprocating plunger. The suction and discharge valves are mounted in the head of the cylinder. In the suction stroke, the plunger retracts and the suction valves open, causing suction of fluid into the cylinder. In the forward stroke, the plunger pushes the liquid out of the discharge valve. Efficiency and common problems: With only one cylinder in plunger pumps, the fluid flow varies between maximum flow when the plunger moves through the middle positions, and zero flow when the plunger is at the end positions. A lot of energy is wasted when the fluid is accelerated in the piping system. Vibration and water hammer may be a serious problem. In general, the problems are compensated for by using two or more cylinders not working in phase with each other. Centrifugal pumps are also susceptible to water hammer. Surge analysis, a specialized study, helps evaluate this risk in such systems. Triplex-style plunger pump Triplex plunger pumps use three plungers, which reduces the pulsation relative to single reciprocating plunger pumps. Adding a pulsation dampener on the pump outlet can further smooth the flow ripple seen on the trace of a pump transducer. The dynamic relationship of the high-pressure fluid and plunger generally requires high-quality plunger seals. Plunger pumps with a larger number of plungers have the benefit of increased flow, or smoother flow without a pulsation dampener. The increase in moving parts and crankshaft load is one drawback. Car washes often use these triplex-style plunger pumps (perhaps without pulsation dampeners). In 1968, William Bruggeman reduced the size of the triplex pump and increased the lifespan so that car washes could use equipment with smaller footprints. Durable high-pressure seals, low-pressure seals and oil seals, hardened crankshafts, hardened connecting rods, thick ceramic plungers and heavier-duty ball and roller bearings improve reliability in triplex pumps. Triplex pumps are now used in a myriad of markets across the world. Triplex pumps with shorter lifetimes are commonplace in home-user equipment.
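The pulsation-smoothing effect of using three plungers, noted above, can be illustrated with a small model in which each single-acting plunger delivers a half-sine of flow on its discharge stroke and nothing on its suction stroke. This is an idealization chosen for illustration, not a description of any specific pump.

```python
import math

# Sketch of why a triplex pump pulses less than a simplex pump. Each single-acting
# plunger is modelled as delivering a half-sine of flow on its discharge stroke.

def plunger_flow(angle_deg: float, phase_deg: float = 0.0) -> float:
    """Instantaneous flow of one plunger, normalized so its peak is 1."""
    x = math.radians(angle_deg - phase_deg)
    return max(0.0, math.sin(x))

angles = [i * 0.5 for i in range(720)]  # one crank revolution, 0.5 degree steps

simplex = [plunger_flow(a) for a in angles]
triplex = [sum(plunger_flow(a, p) for p in (0, 120, 240)) for a in angles]

def ripple(series):
    """Peak-to-peak flow variation divided by mean flow."""
    mean = sum(series) / len(series)
    return (max(series) - min(series)) / mean

print(f"Simplex ripple (peak-to-peak / mean): {ripple(simplex):.2f}")
print(f"Triplex ripple (peak-to-peak / mean): {ripple(triplex):.2f}")
```

With the plungers phased 120° apart, the combined delivery never falls to zero, so the relative ripple drops from roughly 3 to roughly 0.14 in this idealized model; a pulsation dampener smooths the remainder.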
A person who uses a home pressure washer for 10 hours a year may be satisfied with a pump that lasts 100 hours between rebuilds. Industrial-grade or continuous-duty triplex pumps on the other end of the quality spectrum may run for as much as 2,080 hours a year. The oil and gas drilling industry uses massive semi-trailer-transported triplex pumps called mud pumps to pump drilling mud, which cools the drill bit and carries the cuttings back to the surface. Drillers use triplex or even quintuplex pumps to inject water and solvents deep into shale in the extraction process called fracking. Diaphragm pump Typically run on electricity or compressed air, diaphragm pumps are relatively inexpensive and can perform a wide variety of duties, from pumping air into an aquarium to pumping liquids through a filter press. Double-diaphragm pumps can handle viscous fluids and abrasive materials with a gentle pumping process ideal for transporting shear-sensitive media. Rope pump Devised in China as chain pumps over 1000 years ago, these pumps can be made from very simple materials: a rope, a wheel and a pipe are sufficient to make a simple rope pump. Rope pump efficiency has been studied by grassroots organizations and the techniques for making and running them have been continuously improved. Impulse pump Impulse pumps use pressure created by gas (usually air). In some impulse pumps the gas trapped in the liquid (usually water) is released and accumulated somewhere in the pump, creating a pressure that can push part of the liquid upwards. Conventional impulse pumps include: Hydraulic ram pumps – kinetic energy of a low-head water supply is stored temporarily in an air-bubble hydraulic accumulator, then used to drive water to a higher head. Pulser pumps – run with natural resources, by kinetic energy only. Airlift pumps – run on air injected into a pipe, which pushes the water up as bubbles move upward. Instead of a gas accumulation and releasing cycle, the pressure can be created by burning hydrocarbons. Such combustion-driven pumps directly transmit the impulse from a combustion event through the actuation membrane to the pump fluid. In order to allow this direct transmission, the pump needs to be almost entirely made of an elastomer (e.g. silicone rubber). Hence, the combustion causes the membrane to expand and thereby pumps the fluid out of the adjacent pumping chamber. The first combustion-driven soft pump was developed by ETH Zurich. Hydraulic ram pump A hydraulic ram is a water pump powered by hydropower. It takes in water at relatively low pressure and high flow-rate and outputs water at a higher hydraulic-head and lower flow-rate. The device uses the water hammer effect to develop pressure that lifts a portion of the input water that powers the pump to a point higher than where the water started. The hydraulic ram is sometimes used in remote areas, where there is both a source of low-head hydropower and a need for pumping water to a destination higher in elevation than the source. In this situation, the ram is often useful, since it requires no outside source of power other than the kinetic energy of flowing water. Velocity pumps Rotodynamic pumps (or dynamic pumps) are a type of velocity pump in which kinetic energy is added to the fluid by increasing the flow velocity. This increase in energy is converted to a gain in potential energy (pressure) when the velocity is reduced prior to or as the flow exits the pump into the discharge pipe.
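A minimal numerical sketch of that velocity-to-pressure conversion, assuming an idealized lossless slowdown of the flow (Bernoulli's relation, discussed next); the velocities and fluid are illustrative assumptions, not values from this article.

```python
# Sketch of the kinetic-to-pressure energy conversion in a rotodynamic pump:
# when the flow is slowed in the volute/diffuser, dynamic pressure is exchanged
# for static pressure (idealized and lossless). Values are assumed for illustration.

rho = 1000.0             # kg/m^3, water
g = 9.81                 # m/s^2
v_impeller_exit = 18.0   # m/s, assumed velocity leaving the impeller
v_discharge = 4.0        # m/s, assumed velocity in the discharge pipe

delta_p = 0.5 * rho * (v_impeller_exit**2 - v_discharge**2)  # Pa
head_gain = delta_p / (rho * g)                              # metres of water

print(f"Ideal static pressure gain: {delta_p / 1e5:.2f} bar")
print(f"Equivalent head gain: {head_gain:.1f} m of water")
```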
This conversion of kinetic energy to pressure is explained by the first law of thermodynamics, or more specifically by Bernoulli's principle. Dynamic pumps can be further subdivided according to the means by which the velocity gain is achieved. These types of pumps have a number of characteristics: continuous addition of energy; conversion of the added energy to an increase in kinetic energy (an increase in velocity); and conversion of the increased velocity (kinetic energy) to an increase in pressure head. A practical difference between dynamic and positive-displacement pumps is how they operate under closed valve conditions. Positive-displacement pumps physically displace fluid, so closing a valve downstream of a positive-displacement pump produces a continual pressure build-up that can cause mechanical failure of the pipeline or pump. Dynamic pumps differ in that they can be safely operated under closed valve conditions (for short periods of time). Radial-flow pump Such a pump is also referred to as a centrifugal pump. The fluid enters along the axis or center, is accelerated by the impeller and exits at right angles to the shaft (radially); an example is the centrifugal fan, which is commonly used to implement a vacuum cleaner. Another type of radial-flow pump is a vortex pump. The liquid in them moves in a tangential direction around the working wheel. The conversion from the mechanical energy of the motor into the potential energy of the flow comes by means of multiple whirls, which are excited by the impeller in the working channel of the pump. Generally, a radial-flow pump operates at higher pressures and lower flow rates than an axial- or a mixed-flow pump. Axial-flow pump These are also referred to as all-fluid pumps. The fluid is pushed in a direction parallel to the pump shaft, so it moves axially. They operate at much lower pressures and higher flow rates than radial-flow (centrifugal) pumps. Axial-flow pumps cannot be run up to speed without special precaution. At low flow rates, the high total head rise and torque mean that the starting torque would have to accelerate the whole mass of liquid in the pipe system. Mixed-flow pumps function as a compromise between radial and axial-flow pumps. The fluid experiences both radial acceleration and lift and exits the impeller somewhere between 0 and 90 degrees from the axial direction. As a consequence, mixed-flow pumps operate at higher pressures than axial-flow pumps while delivering higher discharges than radial-flow pumps. The exit angle of the flow dictates the pressure head-discharge characteristic in relation to radial and mixed-flow. Regenerative turbine pump Also known as drag, friction, liquid-ring, peripheral, traction, turbulence, or vortex pumps, regenerative turbine pumps are a class of rotodynamic pump that operates at high head pressures, typically . The pump has an impeller with a number of vanes or paddles which spins in a cavity. The suction port and pressure ports are located at the perimeter of the cavity and are isolated by a barrier called a stripper, which allows only the tip channel (fluid between the blades) to recirculate, and forces any fluid in the side channel (fluid in the cavity outside of the blades) through the pressure port. In a regenerative turbine pump, as fluid spirals repeatedly from a vane into the side channel and back to the next vane, kinetic energy is imparted to the fluid at the periphery, thus pressure builds with each spiral, in a manner similar to a regenerative blower.
As regenerative turbine pumps cannot become vapor locked, they are commonly applied to volatile, hot, or cryogenic fluid transport. However, as tolerances are typically tight, they are vulnerable to solids or particles causing jamming or rapid wear. Efficiency is typically low, and pressure and power consumption typically decrease with flow. Additionally, pumping direction can be reversed by reversing the direction of spin. Side-channel pump A side-channel pump has a suction disk, an impeller, and a discharge disk. Eductor-jet pump This uses a jet, often of steam, to create a low pressure. This low pressure sucks in fluid and propels it into a higher-pressure region. Gravity pumps Gravity pumps include the syphon and Heron's fountain. The hydraulic ram is also sometimes called a gravity pump. In a gravity pump the fluid is lifted by gravitational force. Steam pump Steam pumps have long been mainly of historical interest. They include any type of pump powered by a steam engine and also pistonless pumps such as Thomas Savery's or the Pulsometer steam pump. Recently there has been a resurgence of interest in low-power solar steam pumps for use in smallholder irrigation in developing countries. Previously, small steam engines were not viable because of escalating inefficiencies as vapour engines decrease in size. However, the use of modern engineering materials coupled with alternative engine configurations has meant that these types of system are now a cost-effective opportunity. Valveless pumps Valveless pumping assists in fluid transport in various biomedical and engineering systems. In a valveless pumping system, no valves (or physical occlusions) are present to regulate the flow direction. The fluid pumping efficiency of a valveless system, however, is not necessarily lower than that of a system having valves. In fact, many fluid-dynamical systems in nature and engineering more or less rely upon valveless pumping to transport the working fluids therein. For instance, blood circulation in the cardiovascular system is maintained to some extent even when the heart's valves fail. Meanwhile, the embryonic vertebrate heart begins pumping blood long before the development of discernible chambers and valves. Similarly, bird respiratory systems pump air in one direction through rigid lungs, but without any physiological valve. In microfluidics, valveless impedance pumps have been fabricated, and are expected to be particularly suitable for handling sensitive biofluids. Ink jet printers operating on the piezoelectric transducer principle also use valveless pumping. The pump chamber is emptied through the printing jet due to reduced flow impedance in that direction and refilled by capillary action. Pump repairs Examining pump repair records and mean time between failures (MTBF) is of great importance to responsible and conscientious pump users. In view of that fact, the preface to the 2006 Pump User's Handbook alludes to "pump failure" statistics. For the sake of convenience, these failure statistics often are translated into MTBF (in this case, installed life before failure). In early 2005, Gordon Buck, John Crane Inc.'s chief engineer for field operations in Baton Rouge, Louisiana, examined the repair records for a number of refinery and chemical plants to obtain meaningful reliability data for centrifugal pumps. A total of 15 operating plants having nearly 15,000 pumps were included in the survey.
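As a rough illustration of how repair records of this kind are turned into an MTBF figure, the sketch below simply divides installed pump-years by the number of repairs. The plant size, observation window and repair count are invented for illustration and are not results from the survey described here.

```python
# Sketch of deriving an MTBF ("installed life before failure") figure from plant
# repair records. All numbers are made-up illustrative values, not survey data.

installed_pumps = 1200    # pumps in the plant
observation_years = 3.0   # length of the records examined
repairs_recorded = 900    # pump repairs logged over that window

pump_years = installed_pumps * observation_years
mtbf_years = pump_years / repairs_recorded
failures_per_pump_per_year = repairs_recorded / pump_years

print(f"MTBF: {mtbf_years:.1f} years ({mtbf_years * 12:.0f} months)")
print(f"Average failure rate: {failures_per_pump_per_year:.2f} failures per pump per year")
```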
The smallest of these plants had about 100 pumps; several plants had over 2000. All facilities were located in the United States. In addition, some plants were considered as "new", others as "renewed" and still others as "established". Many of these plants, but not all, had an alliance arrangement with John Crane. In some cases, the alliance contract included having a John Crane Inc. technician or engineer on-site to coordinate various aspects of the program. Not all plants are refineries, however, and different results occur elsewhere. In chemical plants, pumps have historically been "throw-away" items as chemical attack limits life. Things have improved in recent years, but the somewhat restricted space available in "old" DIN and ASME-standardized stuffing boxes places limits on the type of seal that fits. Unless the pump user upgrades the seal chamber, the pump only accommodates more compact and simple versions. Without this upgrading, lifetimes in chemical installations are generally around 50 to 60 percent of the refinery values. Unscheduled maintenance is often one of the most significant costs of ownership, and failures of mechanical seals and bearings are among the major causes. Keep in mind the potential value of selecting pumps that cost more initially, but last much longer between repairs. The MTBF of a better pump may be one to four years longer than that of its non-upgraded counterpart. Consider that published average values of the cost of an avoided pump failure range from US$2,600 to US$12,000. This does not include lost opportunity costs. One pump fire occurs per 1000 failures. Having fewer pump failures means having fewer destructive pump fires. As has been noted, a typical pump failure, based on actual year 2002 reports, costs US$5,000 on average. This includes costs for material, parts, labor and overhead. Extending a pump's MTBF from 12 to 18 months would save US$1,667 per year, which might be greater than the cost to upgrade the centrifugal pump's reliability. (Submersible slurry pumps in high demand. Engineeringnews.co.za. Retrieved on 2011-05-25.) Applications Pumps are used throughout society for a variety of purposes. Early applications include the use of the windmill or watermill to pump water. Today, the pump is used for irrigation, water supply, gasoline supply, air conditioning systems, refrigeration (usually called a compressor), chemical movement, sewage movement, flood control, marine services, etc. Because of the wide variety of applications, pumps have a plethora of shapes and sizes: from very large to very small, from handling gas to handling liquid, from high pressure to low pressure, and from high volume to low volume. Priming a pump Typically, a liquid pump cannot simply draw air. The feed line of the pump and the internal body surrounding the pumping mechanism must first be filled with the liquid that requires pumping: an operator must introduce liquid into the system to initiate the pumping; this is known as priming the pump. Loss of prime is usually due to ingestion of air into the pump, or evaporation of the working fluid if the pump is used infrequently. Clearances and displacement ratios in pumps for liquids are insufficient for pumping compressible gas, so air or other gases in the pump cannot be evacuated by the pump's action alone. This is the case with most velocity (rotodynamic) pumps, for example centrifugal pumps.
For such pumps, the pump and its intake tubing should be positioned below the level of the liquid being drawn, so that the pump is primed by gravity; otherwise the pump should be manually filled with liquid, or a secondary pump should be used, until all air is removed from the suction line and the pump casing. Liquid ring pumps have a dedicated intake for the priming liquid separate from the intake of the fluid being pumped, as the fluid being pumped may be a gas or a mix of gas, liquid, and solids. For these pumps the priming liquid intake must be supplied continuously (either by gravity or pressure); however, the intake for the fluid being pumped can draw a vacuum limited only by the vapour pressure (boiling point) of the priming liquid. Positive-displacement pumps, however, tend to have sufficiently tight sealing between the moving parts and the casing or housing of the pump that they can be described as self-priming. Such pumps can also serve as priming pumps, so called when they are used to fulfill that need for other pumps in lieu of action taken by a human operator. Pumps as public water supplies One sort of pump once common worldwide was a hand-powered water pump, or 'pitcher pump'. It was commonly installed over community water wells in the days before piped water supplies. In parts of the British Isles, it was often called the parish pump. Though such community pumps are no longer common, people still use the expression parish pump to describe a place or forum where matters of local interest are discussed. Because water from pitcher pumps is drawn directly from the soil, it is more prone to contamination. If such water is not filtered and purified, consumption of it might lead to gastrointestinal or other water-borne diseases. A notorious case is the 1854 Broad Street cholera outbreak. At the time it was not known how cholera was transmitted, but physician John Snow suspected contaminated water and had the handle of the public pump he suspected removed; the outbreak then subsided. Modern hand-operated community pumps are considered the most sustainable low-cost option for safe water supply in resource-poor settings, often in rural areas in developing countries. A hand pump opens access to deeper groundwater that is often not polluted and also improves the safety of a well by protecting the water source from contaminated buckets. Pumps such as the Afridev pump are designed to be cheap to build and install, and easy to maintain with simple parts. However, scarcity of spare parts for these types of pumps in some regions of Africa has diminished their utility for these areas. Sealing multiphase pumping applications Multiphase pumping applications, also referred to as tri-phase, have grown due to increased oil drilling activity. In addition, the economics of multiphase production is attractive to upstream operations as it leads to simpler, smaller in-field installations, reduced equipment costs and improved production rates. In essence, the multiphase pump can accommodate all fluid stream properties with one piece of equipment, which has a smaller footprint. Often, two smaller multiphase pumps are installed in series rather than having just one massive pump. Types and features of multiphase pumps Helico-axial (centrifugal) A rotodynamic pump with one single shaft that requires two mechanical seals, this pump uses an open-type axial impeller. It is often called a Poseidon pump, and can be described as a cross between an axial compressor and a centrifugal pump.
Twin-screw (positive-displacement) The twin-screw pump is constructed of two inter-meshing screws that move the pumped fluid. Twin-screw pumps are often used when pumping conditions contain high gas volume fractions and fluctuating inlet conditions. Four mechanical seals are required to seal the two shafts. Progressive cavity (positive-displacement) When the pumping application is not suited to a centrifugal pump, a progressive cavity pump is used instead. Progressive cavity pumps are single-screw types typically used in shallow wells or at the surface. This pump is mainly used on surface applications where the pumped fluid may contain a considerable amount of solids such as sand and dirt. The volumetric efficiency and mechanical efficiency of a progressive cavity pump increase as the viscosity of the liquid does. Electric submersible (centrifugal) These pumps are basically multistage centrifugal pumps and are widely used in oil well applications as a method for artificial lift. These pumps are usually specified when the pumped fluid is mainly liquid. Buffer tank A buffer tank is often installed upstream of the pump suction nozzle in case of a slug flow. The buffer tank breaks the energy of the liquid slug, smooths any fluctuations in the incoming flow and acts as a sand trap. As the name indicates, multiphase pumps and their mechanical seals can encounter a large variation in service conditions such as changing process fluid composition, temperature variations, high and low operating pressures and exposure to abrasive/erosive media. The challenge is selecting the appropriate mechanical seal arrangement and support system to ensure maximized seal life and its overall effectiveness. (John Crane Seal Sentinel – John Crane Increases Production Capabilities with Machine that Streamlines Four Machining Functions into One. Sealsentinel.com. Retrieved on 2011-05-25.) Specifications Pumps are commonly rated by horsepower, volumetric flow rate, outlet pressure in metres (or feet) of head, and inlet suction in feet (or metres) of head. The head can be simplified as the number of feet or metres the pump can raise or lower a column of water at atmospheric pressure. From an initial design point of view, engineers often use a quantity termed the specific speed to identify the most suitable pump type for a particular combination of flow rate and head. Net Positive Suction Head (NPSH) is crucial for pump performance. It has two key aspects: 1) NPSHr (required): the head required for the pump to operate without cavitation issues. 2) NPSHa (available): the actual pressure provided by the system (e.g., from an overhead tank). For optimal pump operation, NPSHa must always exceed NPSHr. This ensures the pump has enough pressure to prevent cavitation, a damaging condition. Pumping power The power imparted into a fluid increases the energy of the fluid per unit volume. Thus the power relationship is between the conversion of the mechanical energy of the pump mechanism and the fluid elements within the pump. In general, this is governed by a series of simultaneous differential equations, known as the Navier–Stokes equations. However, a simpler equation relating only the different energies in the fluid, known as Bernoulli's equation, can be used. Hence the power, P, required by the pump is P = Δp Q / η, where Δp is the change in total pressure between the inlet and outlet (in Pa), and Q, the volume flow-rate of the fluid, is given in m³/s.
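As a rough worked example of the relation above, the sketch below evaluates the pressure rise, the power imparted to the fluid, and the power required by the pump for assumed values of head, flow and efficiency (the efficiency η is discussed in the following paragraph); none of the numbers come from this article.

```python
# Worked sketch of the pumping-power relation P = delta_p * Q / eta given above.
# Flow, head, fluid and efficiency are assumed example values, not article data.

rho = 1000.0   # kg/m^3, water
g = 9.81       # m/s^2
flow = 0.02    # m^3/s (72 m^3/h), assumed volume flow rate Q
head = 50.0    # m of water, assumed total head rise
eta = 0.70     # assumed pump efficiency

delta_p = rho * g * head              # Pa, total pressure rise
hydraulic_power = delta_p * flow      # W, power imparted to the fluid
shaft_power = hydraulic_power / eta   # W, power required to drive the pump

print(f"Pressure rise: {delta_p / 1e5:.2f} bar")
print(f"Power imparted to the fluid: {hydraulic_power / 1000:.2f} kW")
print(f"Power required to drive the pump: {shaft_power / 1000:.2f} kW")
```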
The total pressure may have gravitational, static pressure and kinetic energy components; i.e. energy is distributed between change in the fluid's gravitational potential energy (going up or down hill), change in velocity, or change in static pressure. η is the pump efficiency, and may be given by the manufacturer's information, such as in the form of a pump curve, and is typically derived from either fluid dynamics simulation (i.e. solutions to the Navier–Stokes equations for the particular pump geometry), or by testing. The efficiency of the pump depends upon the pump's configuration and operating conditions (such as rotational speed, fluid density and viscosity, etc.). For a typical "pumping" configuration, the work is imparted on the fluid, and is thus positive. For the fluid imparting the work on the pump (i.e. a turbine), the work is negative. The power required to drive the pump is determined by dividing the output power by the pump efficiency. Furthermore, this definition encompasses pumps with no moving parts, such as a siphon. Efficiency Pump efficiency is defined as the ratio of the power imparted on the fluid by the pump in relation to the power supplied to drive the pump. Its value is not fixed for a given pump; efficiency is a function of the discharge and therefore also of the operating head. For centrifugal pumps, the efficiency tends to increase with flow rate up to a point midway through the operating range (peak efficiency or Best Efficiency Point (BEP)) and then declines as flow rates rise further. Pump performance data such as this is usually supplied by the manufacturer before pump selection. Pump efficiencies tend to decline over time due to wear (e.g. increasing clearances as impellers reduce in size). When a system includes a centrifugal pump, an important design issue is matching the system's head loss–flow characteristic with the pump so that it operates at or close to the point of its maximum efficiency. Pump efficiency is an important aspect and pumps should be regularly tested. Thermodynamic pump testing is one method. Minimum flow protection Most large pumps have a minimum flow requirement below which the pump may be damaged by overheating, impeller wear, vibration, seal failure, drive shaft damage or poor performance. A minimum flow protection system ensures that the pump is not operated below the minimum flow rate. The system protects the pump even if it is shut-in or dead-headed, that is, if the discharge line is completely closed. The simplest minimum flow system is a pipe running from the pump discharge line back to the suction line. This line is fitted with an orifice plate sized to allow the pump minimum flow to pass. The arrangement ensures that the minimum flow is maintained, although it is wasteful as it recycles fluid even when the flow through the pump exceeds the minimum flow. A more sophisticated, but more costly, system comprises a flow measuring device (FE) in the pump discharge which provides a signal to a flow controller (FIC), which in turn actuates a flow control valve (FCV) in the recycle line. If the measured flow exceeds the minimum flow, the FCV is closed. If the measured flow falls below the minimum flow, the FCV opens to maintain the minimum flow rate. As the fluid is recycled, the energy imparted by the pump increases the temperature of the fluid. For many pumps this added heat energy is dissipated through the pipework.
However, for large industrial pumps, such as oil pipeline pumps, a recycle cooler is provided in the recycle line to cool the fluid back to the normal suction temperature. Alternatively, the recycled fluid may be returned upstream of the export cooler in an oil refinery, oil terminal, or offshore installation.
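The heating that motivates such a recycle cooler can be estimated with a simple energy balance: when flow is recirculated back to suction, essentially all of the shaft work degrades to heat in the fluid. The sketch below uses assumed fluid properties, head and efficiency; none of the values are taken from this article.

```python
# Rough estimate of fluid temperature rise during minimum-flow recirculation,
# illustrating why large pumps need a recycle cooler. On full recirculation the
# delivered head is throttled away again, so the whole shaft work per kilogram
# ends up as heat. All figures are assumed for illustration.

g = 9.81           # m/s^2
rho = 850.0        # kg/m^3, a light oil (assumed)
cp = 2000.0        # J/(kg*K), specific heat capacity (assumed)
head = 200.0       # m, assumed pump head
efficiency = 0.75  # assumed pump efficiency

work_per_kg = g * head / efficiency    # J/kg of shaft work per pass through the pump
temp_rise_per_pass = work_per_kg / cp  # K per recirculation pass

print(f"Shaft work per kg of fluid: {work_per_kg / 1000:.2f} kJ/kg")
print(f"Temperature rise per recirculation pass: {temp_rise_per_pass:.2f} K")
```

A rise of roughly a degree per pass accumulates quickly when the same fluid keeps circulating, which is why large installations cool the recycle stream rather than rely on the pipework to shed the heat.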
Technology
Hydraulics and pneumatics
null