| id | url | title | text | topic | section | sublist |
|---|---|---|---|---|---|---|
309620 | https://en.wikipedia.org/wiki/Trisodium%20phosphate | Trisodium phosphate | Trisodium phosphate (TSP) is an inorganic compound with the chemical formula Na3PO4. It is a white, granular or crystalline solid, highly soluble in water, producing an alkaline solution. TSP is used as a cleaning agent, builder, lubricant, food additive, stain remover, and degreaser.
As an item of commerce, TSP is often partially hydrated and may range from the anhydrous form to the dodecahydrate, Na3PO4·12H2O. Most often it is found in white powder form. It can also be called trisodium orthophosphate or simply sodium phosphate.
Production
Trisodium phosphate is produced by neutralization of phosphoric acid using sodium carbonate, which produces disodium hydrogen phosphate. The disodium hydrogen phosphate is reacted with sodium hydroxide to form trisodium phosphate and water.
Na2CO3 + H3PO4 -> Na2HPO4 + CO2 + H2O
Na2HPO4 + NaOH -> Na3PO4 + H2O
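Since the two steps consume one mole of phosphoric acid per mole of trisodium phosphate overall, the theoretical yield follows directly from the molar masses. The short sketch below illustrates this; the molar masses are standard values and the function name is purely illustrative, not taken from any source.

```python
# Theoretical yield of anhydrous trisodium phosphate from phosphoric acid,
# assuming the two-step route above goes to completion:
#   Na2CO3 + H3PO4 -> Na2HPO4 + CO2 + H2O
#   Na2HPO4 + NaOH -> Na3PO4  + H2O
# Overall, 1 mol H3PO4 -> 1 mol Na3PO4.

M_H3PO4 = 97.99    # g/mol
M_NA3PO4 = 163.94  # g/mol

def theoretical_tsp_yield(mass_h3po4_g: float) -> float:
    """Grams of anhydrous Na3PO4 obtainable from a given mass of H3PO4."""
    moles_acid = mass_h3po4_g / M_H3PO4
    return moles_acid * M_NA3PO4   # 1:1 molar ratio overall

print(f"{theoretical_tsp_yield(100.0):.1f} g")   # ~167.3 g from 100 g of H3PO4
```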
Uses
Cleaning
Trisodium phosphate was at one time extensively used in formulations for a variety of consumer-grade soaps and detergents, and the most common use for trisodium phosphate has been in cleaning agents. The pH of a 1% solution is 12 (i.e., very basic), and the solution is sufficiently alkaline to saponify grease and oils. In combination with surfactants, TSP is an excellent agent for cleaning everything from laundry to concrete driveways. This versatility and low manufacturing price made TSP the basis for a plethora of cleaning products sold in the mid-20th century.
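That pH figure can be sanity-checked by treating the phosphate ion as a simple weak base in water. The sketch below does this for a 1% solution; the pKa3 value (~12.3) is an assumed literature constant rather than something stated here, so the result is only an order-of-magnitude check.

```python
import math

# Rough pH estimate for a 1% (10 g/L) solution of anhydrous Na3PO4,
# considering only the first hydrolysis step:
#   PO4^3- + H2O <=> HPO4^2- + OH-     Kb = Kw / Ka3
M_NA3PO4 = 163.94   # g/mol
KW = 1.0e-14
PKA3 = 12.32        # assumed literature value for the third dissociation of H3PO4

c = 10.0 / M_NA3PO4          # mol/L, about 0.061 M
kb = KW / 10 ** (-PKA3)

# Solve x^2 / (c - x) = Kb for x = [OH-] with the quadratic formula.
x = (-kb + math.sqrt(kb ** 2 + 4 * kb * c)) / 2
ph = 14 + math.log10(x)
print(f"pH ~ {ph:.1f}")      # roughly 12, consistent with the text
```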
TSP is still sold and used as a cleaning agent, but since the late 1960s, its use has diminished in the United States and many other parts of the world because, like many phosphate-based cleaners, it is known to cause extensive eutrophication of lakes and rivers once it enters a water system.
Although it is still the active ingredient in some toilet bowl-cleaning tablets, TSP is generally not recommended for cleaning bathrooms because it can stain metal fixtures and can damage grout.
Chlorination
The material called chlorinated trisodium phosphate is used as a disinfectant and bleach, like sodium hypochlorite. It is prepared using NaOCl in place of some of the base used to neutralize phosphoric acid.
Flux
In the U.S., trisodium phosphate is an approved flux for use in hard soldering joints in medical-grade copper plumbing. The flux is applied as a concentrated water solution and dissolves copper oxides at the temperature used in copper brazing. Residues are water-soluble and can be rinsed out before plumbing is put into service.
TSP is used as an ingredient in fluxes designed to deoxygenate nonferrous metals for casting. It can be used in ceramic production to lower the flow point of glazes.
Painting enhancement
TSP is still in common use for the cleaning, degreasing, and deglossing of walls prior to painting. TSP breaks the gloss of oil-based paints and opens the pores of latex-based paint, providing a surface better suited for the adhesion of the subsequent layer.
Food additive
Sodium phosphates including monosodium phosphate, disodium phosphate, and trisodium phosphate are approved as food additives in the EU. They are commonly used as acidity regulators and have the collective E number E339. The United States Food and Drug Administration lists sodium phosphates as generally recognized as safe.
Exercise performance enhancement
Trisodium phosphate has gained a following as a nutritional supplement that can improve certain parameters of exercise performance. The basis of this belief is the fact that phosphate is required for the energy-producing Krebs cycle central to aerobic metabolism. Phosphates are available from a number of other sources that are much milder than TSP.
Regulation
In the Western world, phosphate usage has declined because of damage it causes to lakes and rivers through eutrophication.
Substitutes
By the end of the 20th century, many products that formerly contained TSP were manufactured with TSP substitutes, which consist mainly of sodium carbonate along with various admixtures of nonionic surfactants and a limited percentage of sodium phosphates.
Products sold as TSP substitutes, containing soda ash and zeolites, are promoted as direct substitutes. However, sodium carbonate is not as strongly basic as trisodium phosphate, making it less effective in demanding applications. Zeolites, which are clay based, are added to laundry detergents as water softening agents and are essentially non-polluting; however, zeolites do not dissolve and can deposit a fine, powdery residue in the wash tub. Cleaning products labeled as TSP may contain other ingredients, with perhaps less than 50% trisodium phosphate.
| Physical sciences | Phosphoric oxyanions | Chemistry |
309884 | https://en.wikipedia.org/wiki/Bone%20tumor | Bone tumor | A bone tumor is an abnormal growth of tissue in bone, traditionally classified as noncancerous (benign) or cancerous (malignant). Cancerous bone tumors usually originate from a cancer in another part of the body such as from lung, breast, thyroid, kidney and prostate. There may be a lump, pain, or neurological signs from pressure. A bone tumor might present with a pathologic fracture. Other symptoms may include fatigue, fever, weight loss, anemia and nausea. Sometimes there are no symptoms and the tumour is found when investigating another problem.
Diagnosis is generally by X-ray and other radiological tests such as CT scan, MRI, PET scan and bone scintigraphy. Blood tests might include a complete blood count, inflammatory markers, serum electrophoresis, PSA, kidney function and liver function. Urine may be tested for Bence Jones protein. For confirmation of diagnosis, a biopsy for histological evaluation might be required.
The most common bone tumor is a non-ossifying fibroma. Average five-year survival in the United States after being diagnosed with bone and joint cancer is 67%. The earliest known bone tumor was an osteosarcoma in a foot bone discovered in South Africa, between 1.6 and 1.8 million years ago.
Classification
Bone tumors are traditionally classified as noncancerous (benign) or cancerous (malignant). Several features of bone tumors and soft tissue tumors overlap. Their classification was revised by the World Health Organization (WHO) in 2020. This newer classification categorises bone tumors into cartilage tumors, osteogenic tumors, fibrogenic tumors, vascular tumors of bone, osteoclastic giant cell-rich tumors, notochordal tumors, other mesenchymal tumors of bone, and hematopoietic neoplasms of bone.
Bone tumors may be classified as "primary tumors", which originate in bone or from bone-derived cells and tissues, and "secondary tumors" which originate in other sites and spread (metastasize) to the skeleton. Carcinomas of the prostate, breasts, lungs, thyroid, and kidneys are the carcinomas that most commonly metastasize to bone. Secondary malignant bone tumors are estimated to be 50 to 100 times as common as primary bone cancers.
Primary bone tumors
Primary tumors of bone can be divided into benign tumors and cancers. Common benign bone tumors may be neoplastic, developmental, traumatic, infectious, or inflammatory in etiology. Some benign tumors are not true neoplasms, but rather, represent hamartomas, namely the osteochondroma. The most common locations for many primary tumors, both benign and malignant include the distal femur and proximal tibia (around the knee joint). Examples of benign bone tumors include osteoma, osteoid osteoma, osteochondroma, osteoblastoma, enchondroma, giant cell tumor of bone and aneurysmal bone cyst.
Malignant primary bone tumors, known as bone sarcomas, include osteosarcoma, chondrosarcoma, Ewing sarcoma, fibrosarcoma, and other types. While malignant fibrous histiocytoma (MFH) - now generally called "pleomorphic undifferentiated sarcoma" - primary in bone is known to occur occasionally, current paradigms tend to consider MFH a wastebasket diagnosis, and the current trend is toward using specialized studies (i.e. genetic and immunohistochemical tests) to classify these undifferentiated tumors into other tumor classes. Multiple myeloma is a hematologic cancer, originating in the bone marrow, which also frequently presents as one or more bone lesions.
Germ cell tumors, including teratoma, often present and originate in the midline of the sacrum, coccyx, or both. These sacrococcygeal teratomas are often relatively amenable to treatment.
Secondary bone tumors
Secondary bone tumors are metastatic lesions which have spread from other organs, most commonly carcinomas of the breast, lung, and prostate. Rarely, primary bone malignancies such as osteosarcoma may also spread to other bones.
Reliable and valid statistics on the incidence, prevalence, and mortality of malignant bone tumours are difficult to come by, particularly in older adults (those over 75 years of age), because carcinomas that are widely metastatic to bone are rarely curable. Biopsies to determine the origin of the tumour in such cases are rarely done.
Signs and symptoms
Clinical features of a bone tumor depend on the type of tumor and which part of which bone is affected. Symptoms and signs usually result from the pressure effect of the tumor.
There may be a lump, with or without pain. Pain may increase with the growth of the tumor and may be worse at night and at rest. A bone tumor might present with an unexplained broken bone after little or no trauma. Additional symptoms may include fatigue, fever, weight loss, anemia and nausea. If the tumor presses on a nerve, neurological signs may be present. Sometimes there are no symptoms and the tumour is found when investigating another problem.
Diagnosis
A bone tumour may be felt on examination, following which a plain X-ray is usually carried out. Blood tests might include a complete blood count, inflammatory markers, serum electrophoresis, PSA, kidney function and liver function. Urine may be sent for Bence Jones protein. Other tests that might be requested include a CT scan, MRI, PET scan and bone scintigraphy. For confirmation of diagnosis, a biopsy for histological evaluation might be required, using either a needle or by incision (open biopsy).
Staging
Treatment
Treatment of bone tumors is dependent on the type of tumor. Where available, people with bone tumors are treated at a specialist centre which has surgeons, radiologists, pathologists, oncologists and other support staff. Generally, noncancerous bone tumors may be observed for changes, with surgery offered if there is pain or a pressure effect on neighbouring body parts. Surgical resection with or without cytotoxic drugs may be considered.
Chemotherapy and radiotherapy
Chemotherapy and radiotherapy are effective in some tumors (such as Ewing's sarcoma) but less so in others (such as chondrosarcoma).
There are a variety of chemotherapy treatment protocols for bone tumors. The protocol with the best-reported survival in children and adults is an intra-arterial protocol in which tumor response is tracked by serial arteriogram. When tumor response has reached >90% necrosis, surgical intervention is planned.
Medication
One of the major concerns is bone density and bone loss. Non-hormonal bisphosphonates increase bone strength and are available as once-a-week prescription pills. Strontium-89 chloride is an intravenous medication given to help with the pain and can be given at three-month intervals.
Surgical treatment
Treatment for some bone cancers may involve surgery, such as limb amputation, or limb sparing surgery (often in combination with chemotherapy and radiation therapy). Limb sparing surgery, or limb salvage surgery, means the limb is spared from amputation. Instead of amputation, the affected bone is removed and replaced in one of two ways: (a) a bone graft, in which bone is taken from elsewhere in the body, or (b) an artificial bone implant. In upper leg surgeries, limb salvage prostheses are available.
There are other joint preservation surgical reconstruction options, including allograft, tumor-devitalized autograft, vascularized fibula graft, distraction osteogenesis, and custom-made implants. An analysis of massive knee replacements after resection of primary bone tumours showed patients did not score as highly on the Musculoskeletal Tumour Society Score and Knee Society Score as patients who had undergone intra-articular resection.
Thermal ablation techniques
Over the past two decades, CT-guided radiofrequency ablation has emerged as a less invasive alternative to surgical resection in the care of benign bone tumors, most notably osteoid osteomas. In this technique, which can be performed under conscious sedation, an RF probe is introduced into the tumor nidus through a cannulated needle under CT guidance and heat is applied locally to destroy tumor cells. Since the procedure was first introduced for the treatment of osteoid osteomas in the early 1990s, it has been shown in numerous studies to be less invasive and less expensive, to result in less bone destruction and to have safety and efficacy equivalent to surgical techniques, with 66 to 96% of patients reporting freedom from symptoms. While initial success rates with RFA are high, symptom recurrence after RFA treatment has been reported, with some studies demonstrating a recurrence rate similar to that of surgical treatment.
Thermal ablation techniques are also increasingly being used in the palliative treatment of painful metastatic bone disease. Currently, external beam radiation therapy is the standard of care for patients with localized bone pain due to metastatic disease. Although the majority of patients experience complete or partial relief of pain following radiation therapy, the effect is not immediate and has been shown in some studies to be transient in more than half of patients. For patients who are not eligible or do not respond to traditional therapies (i.e. radiation therapy, chemotherapy, palliative surgery, bisphosphonates or analgesic medications), thermal ablation techniques have been explored as alternatives for pain reduction. Several multi-center clinical trials studying the efficacy of RFA in the treatment of moderate to severe pain in patients with metastatic bone disease have shown significant decreases in patient-reported pain after treatment. These studies are limited, however, to patients with one or two metastatic sites; pain from multiple tumors can be difficult to localize for directed therapy. More recently, cryoablation has also been explored as a potentially effective alternative, as the area of destruction created by this technique can be monitored more effectively by CT than RFA, a potential advantage when treating tumors adjacent to critical structures.
Prognosis
The outlook depends on the type of tumor. The outcome is expected to be good for people with noncancerous (benign) tumors, although some types of benign tumors may eventually become cancerous (malignant). With malignant bone tumors that have not spread, most patients achieve a cure, but the cure rate depends on the type of cancer, location, size, and other factors.
Epidemiology
Bone tumors that originate from bone are very rare and account for around 0.2% of all tumors. Average five-year survival in the United States after being diagnosed with bone and joint cancer is 67%.
History
The earliest known bone tumor was an osteosarcoma in a foot bone belonging to a person who died in Swartkrans Cave, South Africa, between 1.6 and 1.8 million years ago.
Other animals
Bones are a common site for tumors in cats and dogs.
| Biology and health sciences | Cancer | Health |
309891 | https://en.wikipedia.org/wiki/Patella | Patella | The patella (plural: patellae or patellas), also known as the kneecap, is a flat, rounded triangular bone which articulates with the femur (thigh bone) and covers and protects the anterior articular surface of the knee joint. The patella is found in many tetrapods, such as mice, cats, birds and dogs, but not in whales, or most reptiles.
In humans, the patella is the largest sesamoid bone (i.e., embedded within a tendon or a muscle) in the body. Babies are born with a patella of soft cartilage which begins to ossify into bone at about four years of age.
Structure
The patella is a sesamoid bone roughly triangular in shape, with the apex of the patella facing downwards. The apex is the most inferior (lowest) part of the patella. It is pointed in shape, and gives attachment to the patellar ligament.
The front and back surfaces are joined by a thin margin, and towards the centre by a thicker margin. The tendon of the quadriceps femoris muscle attaches to the base of the patella, with the vastus intermedius muscle attaching to the base itself, and the vastus lateralis and vastus medialis attaching to the outer lateral and medial borders of the patella respectively.
The upper third of the front of the patella is coarse, flattened, and rough, and serves for the attachment of the tendon of the quadriceps and often has exostoses. The middle third has numerous vascular canaliculi. The lower third culminates in the apex which serves as the origin of the patellar ligament. The posterior surface is divided into two parts.
The upper three-quarters of the patella articulates with the femur and is subdivided into a medial and a lateral facet by a vertical ledge which varies in shape.
In the adult the articular surface is covered by cartilage, which can reach its maximal thickness in the centre at about 30 years of age. Owing to the great stress on the patellofemoral joint during resisted knee flexion, the articular cartilage of the patella is among the thickest in the human body.
The lower part of the posterior surface has vascular canaliculi and is filled by fatty tissue, the infrapatellar fat pad.
Variation
Emarginations (i.e. patella emarginata, a "missing piece") are common laterally on the proximal edge. Bipartite patellas are the result of an ossification of a second cartilaginous layer at the location of an emargination. Previously, bipartite patellas were explained as the failure of several ossification centres to fuse, but this idea has been rejected. Partite patellas occur almost exclusively in men. Tripartite and even multipartite patellas occur.
The shape of the medial and lateral articular facets varies, and four main types of articular surface can be distinguished:
Most commonly the medial articular surface is smaller than the lateral.
Sometimes both articular surfaces are virtually equal in size.
Occasionally, the medial surface is hypoplastic or
the central ledge is only indicated.
Development
In the patella an ossification centre develops at the age of 3–6 years. The patella originates from two centres of ossification which unite when fully formed.
Function
The primary functional role of the patella is knee extension. The patella increases the leverage that the quadriceps tendon can exert on the femur by increasing the angle at which it acts.
The patella is attached to the tendon of the quadriceps femoris muscle, which contracts to extend/straighten the knee. The patella is stabilized by the insertion of the horizontal fibres of vastus medialis and by the prominence of the lateral femoral condyle, which discourages lateral dislocation during flexion. The retinacular fibres of the patella also stabilize it during exercise.
Clinical significance
Dislocation
Patellar dislocations occur with significant regularity, particularly in young female athletes. It involves the patella sliding out of its position on the knee, most often laterally, and may be associated with extremely intense pain and swelling. The patella can be tracked back into the groove with an extension of the knee, and therefore sometimes returns into the proper position on its own.
Vertical alignment
A patella alta is a high-riding (superiorly aligned) patella. An attenuated patella alta is an unusually small patella that develops out of and above the joint.
A patella baja is a low-riding patella. A long-standing patella baja may result in extensor dysfunction.
The Insall-Salvati ratio helps to indicate patella baja on lateral X-rays, and is calculated as the patellar tendon length divided by the patellar bone length. An Insall-Salvati ratio of < 0.8 indicates patella baja.
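Because the ratio is simply one length divided by another on the same lateral radiograph, it is easy to express programmatically. The sketch below uses the < 0.8 threshold for patella baja given above; the measurement values in the example are hypothetical and for illustration only.

```python
def insall_salvati_ratio(tendon_length_mm: float, patella_length_mm: float) -> float:
    """Insall-Salvati ratio: patellar tendon length divided by patellar bone length,
    both measured on a lateral knee X-ray."""
    return tendon_length_mm / patella_length_mm

def suggests_patella_baja(tendon_length_mm: float, patella_length_mm: float) -> bool:
    """A ratio below 0.8 indicates patella baja (a low-riding patella)."""
    return insall_salvati_ratio(tendon_length_mm, patella_length_mm) < 0.8

# Hypothetical measurements in millimetres, purely for illustration:
print(round(insall_salvati_ratio(38.0, 52.0), 2))   # 0.73
print(suggests_patella_baja(38.0, 52.0))            # True
```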
Fracture
The kneecap is prone to injury because of its particularly exposed location, and fractures of the patella commonly occur as a consequence of direct trauma onto the knee. These fractures usually cause swelling and pain in the region, bleeding into the joint (hemarthrosis), and an inability to extend the knee. Patella fractures are usually treated with surgery, unless the damage is minimal and the extensor mechanism is intact.
Exostosis
An exostosis is the formation of new bone onto a bone, as a result of excess calcium formation. This can be the cause of chronic pain when formed on the patella.
Other animals
The patella is found in placental mammals and birds; most marsupials have only rudimentary, non-ossified patellae although a few species possess a bony patella. A patella is also present in the living monotremes, the platypus and the echidna. In other tetrapods, including living amphibians and most reptiles (except some lepidosaurs), the muscle tendons from the upper leg are attached directly to the tibia, and a patella is not present. In 2017 it was discovered that frogs have kneecaps, contrary to what was thought. This raises the possibility that the kneecap arose 350 million years ago when tetrapods first appeared, but that it disappeared in some animals.
Etymology
The word patella originated in the late 17th century from the diminutive form of the Latin patina or patena (paten), meaning shallow dish.
| Biology and health sciences | Skeletal system | Biology |
310008 | https://en.wikipedia.org/wiki/Katabatic%20wind | Katabatic wind | A katabatic wind (from the Greek katabatikos, meaning "descending") is a downslope wind caused by the flow of an elevated, high-density air mass into a lower-density air mass below under the force of gravity. The spelling catabatic is also used. Since air density is strongly dependent on temperature, the high-density air mass is usually cooler, and the katabatic winds are relatively cool or cold.
Not all downslope winds are katabatic. For instance, winds such as the föhn and chinook are rain shadow winds where air driven upslope on the windward side of a mountain range drops its moisture and descends leeward drier and warmer. Examples of katabatic winds include the downslope valley and mountain breezes, the piteraq winds of Greenland, the Bora in the Adriatic, the Bohemian Wind or Böhmwind in the Ore Mountains, the Santa Ana winds in southern California, the oroshi in Japan, or "the Barber" in New Zealand.
Mechanism
A katabatic wind originates from the difference in density of two air masses located above a slope. This density difference usually comes from a temperature difference, although humidity may also play a role. Schematically, katabatic winds can be divided into two types with slightly different mechanisms: katabatic winds due to radiative cooling (the most common) and fall winds.
In the first case, the slope surface cools radiatively after sunset, which cools the air near the slope. This cooler air layer then flows down into the valley. This type of katabatic wind is very often observed during the night in the mountains, and the term katabatic often refers specifically to it.
In contrast, fall winds do not arise from radiative cooling of the air, but rather from the advection of a relatively cold air mass to the top of a slope. This cold air mass can come from the arrival of a cold front (see Bora), or from the advection of cool marine air by a sea breeze.
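The temperature (density) contrast that drives both types of katabatic flow can be turned into a rough speed scale with the standard reduced-gravity (gravity-current) approximation, U ≈ √(g′h) with g′ = g·ΔT/T. This scaling and the example numbers below are textbook simplifications, not values taken from this article, so the sketch is purely illustrative.

```python
import math

G = 9.81  # m/s^2

def reduced_gravity(delta_t_kelvin: float, ambient_t_kelvin: float = 288.0) -> float:
    """g' = g * (dT / T): buoyancy deficit of air colder than its surroundings."""
    return G * delta_t_kelvin / ambient_t_kelvin

def katabatic_speed_scale(delta_t_kelvin: float, layer_depth_m: float) -> float:
    """Gravity-current speed scale U ~ sqrt(g' * h) for a cold drainage layer."""
    return math.sqrt(reduced_gravity(delta_t_kelvin) * layer_depth_m)

# A drainage layer 10 K colder than its surroundings and 100 m deep
# (illustrative values only):
print(f"{katabatic_speed_scale(10.0, 100.0):.1f} m/s")   # about 5.8 m/s
```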
Impacts
Katabatic winds are found, for example, blowing out from the large and elevated ice sheets of Antarctica and Greenland. The buildup of high-density cold air over the ice sheets and the elevation of the ice sheets bring into play enormous gravitational energy. Where these winds are concentrated into restricted areas in the coastal valleys, they can blow well over hurricane force. In Greenland these winds are called piteraq and are most intense whenever a low pressure area approaches the coast.
In a few regions of continental Antarctica the snow is scoured away by the force of the katabatic winds, leading to "dry valleys" (or "Antarctic oases") such as the McMurdo Dry Valleys. Since the katabatic winds are descending, they tend to have a low relative humidity, which desiccates the region. Other regions may have a similar but lesser effect, leading to "blue ice" areas where the snow is removed and the surface ice sublimates, but is replenished by glacier flow from upstream.
In the Fuegian Archipelago (Tierra del Fuego) in South America, as well as in Alaska in North America, a wind known as a williwaw is a particular danger to harboring vessels. Williwaws originate in the snow and ice fields of the coastal mountains and can reach violent speeds.
In California, strong katabatic wind events have been responsible for the explosive growth of many wildfires, including the 2018 Camp Fire and the 2020 North Complex.
In Catalonia, the Marinada is a fall wind that relieves the inhabitants of the Urgell region from the heat during summer.
| Physical sciences | Winds | Earth science |
310094 | https://en.wikipedia.org/wiki/Sore%20throat | Sore throat | Sore throat, also known as throat pain, is pain or irritation of the throat. The majority of sore throats are caused by a virus, for which antibiotics are not helpful.
For sore throat caused by bacteria (GAS), treatment with antibiotics may help the person get better faster, reduce the risk that the bacterial infection spreads, prevent retropharyngeal abscesses and quinsy, and reduce the risk of other complications such as rheumatic fever and rheumatic heart disease. In most developed countries, post-streptococcal diseases have become far less common. For this reason, awareness and public health initiatives to promote minimizing the use of antibiotics for viral infections have become the focus.
Approximately 35% of childhood sore throats and 5–25% of cases in adults are caused by a bacterial infection from group A streptococcus. Sore throats that are "non-group A streptococcus" are assumed to be caused by a viral infection. Sore throat is a common reason for people to visit their primary care doctors and the top reason for antibiotic prescriptions by primary care practitioners such as family doctors. In the United States, about 1% of all visits to the hospital emergency department, physician office and medical clinics, and outpatient clinics are for sore throat (over 7 million visits for adults and 7 million visits for children per year).
Causes
Causes of sore throat include:
viral infections
group A streptococcal infection (GAS) bacterial infection
pharyngitis (inflammation of the throat)
tonsillitis (inflammation of the tonsils), or dehydration, which leads to the throat drying up.
Definition
A sore throat is pain felt anywhere in the throat.
Symptoms
Symptoms of sore throat include:
a scratchy sensation
pain during swallowing
discomfort while speaking
burning sensation
swelling in the neck
Diagnosis
The most common cause (80%) is acute viral pharyngitis, a viral infection of the throat. Other causes include other bacterial infections (such as group A streptococcus or streptococcal pharyngitis), trauma, and tumors. Gastroesophageal (acid) reflux disease can cause stomach acid to back up into the throat and also cause the throat to become sore. In children, streptococcal pharyngitis is the cause of 35–37% of sore throats.
The symptoms of a viral infection and a bacterial infection may be very similar. Some clinical guidelines suggest that the cause of a sore throat is confirmed prior to prescribing antibiotic therapy and only recommend antibiotics for children who are at high risk of non-suppurative complications. A group A streptococcus infection can be diagnosed by throat culture or a rapid test:
In order to perform a throat culture, a sample from the throat (obtained by swabbing) is cultured (grown) on a blood agar plate to confirm the presence of group A streptococcus. Throat cultures are effective even for people who have a low bacterial count (high sensitivity); however, throat cultures usually take about 48 hours to return results.
Rapid tests to detect GAS (bacteria) give a positive or negative result that is usually based on a colour change on a test strip that contains a throat swab (sample). Test strips detect a cell wall carbohydrate that is specific to GAS by using an immunologic reaction. Rapid testing can be performed in the doctor's office and usually takes 5–10 minutes for the test strip to indicate the result. Specificity for most rapid tests is approximately 95%; however, sensitivity is about 85%. Although the use of rapid testing has been linked with an overall reduction in antibiotic prescriptions, further research is necessary to understand other outcomes such as safety, and when the person starts to feel better.
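How useful a positive or negative rapid-test result is depends on how common GAS is in the group being tested, which Bayes' rule makes explicit. The sketch below converts the sensitivity and specificity quoted above into predictive values, reusing the article's approximate prevalence figures for children and adults; it is an illustration, not a clinical tool.

```python
def predictive_values(sensitivity: float, specificity: float, prevalence: float):
    """Return (positive predictive value, negative predictive value) via Bayes' rule."""
    true_pos = sensitivity * prevalence
    false_pos = (1 - specificity) * (1 - prevalence)
    false_neg = (1 - sensitivity) * prevalence
    true_neg = specificity * (1 - prevalence)
    return true_pos / (true_pos + false_pos), true_neg / (true_neg + false_neg)

# Rapid GAS test: sensitivity ~85%, specificity ~95% (figures from the text).
for label, prevalence in [("children (~35% GAS)", 0.35), ("adults (~10% GAS)", 0.10)]:
    ppv, npv = predictive_values(0.85, 0.95, prevalence)
    print(f"{label}: PPV ~ {ppv:.0%}, NPV ~ {npv:.0%}")
# children: PPV ~ 90%, NPV ~ 92%;  adults: PPV ~ 65%, NPV ~ 98%
```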
Clinicians often also make treatment decisions based on the person's signs and symptoms alone. In the US, approximately two-thirds of adults and half of children with sore throat are diagnosed based on symptoms and do not have testing for the presence of GAS to confirm a bacterial infection.
Numerous clinical scoring systems (decision tools) have also been developed to support clinical decisions. Proposed scoring systems include the Centor, McIsaac, and FeverPAIN scores. A clinical scoring system is often used along with a rapid test. The scoring systems use observed signs and symptoms in order to determine the likelihood of a bacterial infection.
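As an example of how such a decision tool works, the sketch below encodes the classic Centor criteria (one point each for fever, tonsillar exudate, tender anterior cervical nodes, and absence of cough). Those specific criteria are well known in the literature but are not spelled out in this article, so treat the details as an assumption rather than a statement of any particular guideline.

```python
def centor_score(fever_over_38c: bool, tonsillar_exudate: bool,
                 tender_anterior_cervical_nodes: bool, cough_absent: bool) -> int:
    """Classic Centor score: one point per criterion present, 0-4 in total.
    Higher scores make group A streptococcal pharyngitis more likely and are
    often used to decide whether a rapid test or culture is worthwhile."""
    return sum([fever_over_38c, tonsillar_exudate,
                tender_anterior_cervical_nodes, cough_absent])

# Example: fever and exudate, but a cough and no tender nodes -> score of 2.
print(centor_score(True, True, False, False))   # 2
```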
Management
Sore or scratchy throat can temporarily be relieved by gargling a solution of 1/4 to 1/2 teaspoon (1.3 to 2.5 milliliters) of salt dissolved in a glass of water.
Pain medications such as non-steroidal anti-inflammatory drugs (NSAIDs) and paracetamol (acetaminophen) help in the management of pain. The use of corticosteroids seems to slightly increase the likelihood of resolution and reduction of pain, but more analysis is necessary to ensure that this minimal benefit outweighs the risks. Antibiotics probably reduce pain, diminish headaches and could prevent some sore throat complications, but as these effects are small they must be balanced against the threat of antimicrobial resistance. It is not known whether antibiotics are effective for preventing recurrent sore throat.
There is only limited evidence that a hot drink can help alleviate a sore throat, and other common cold and influenza symptoms. If the sore throat is unrelated to a cold and is caused by, for example, tonsillitis, a cold drink may be helpful.
There are also other medications such as lozenges which can help soothe irritated tissues of the throat.
Without active treatment, symptoms usually last two to seven days.
Statistics
In the United States, there are about 2.4 million emergency department visits with throat-related complaints per year.
| Biology and health sciences | Symptoms and signs | Health |
310678 | https://en.wikipedia.org/wiki/Venus%20flytrap | Venus flytrap | The Venus flytrap (Dionaea muscipula) is a carnivorous plant native to the temperate and subtropical wetlands of North Carolina and South Carolina, on the East Coast of the United States. Although various modern hybrids have been created in cultivation, D. muscipula is the only species of the monotypic genus Dionaea. It is closely related to the waterwheel plant (Aldrovanda vesiculosa) and the cosmopolitan sundews (Drosera), all of which belong to the family Droseraceae. Dionaea catches its prey—chiefly insects and arachnids—with a "jaw"-like clamping structure, which is formed by the terminal portion of each of the plant's leaves; when an insect makes contact with the open leaves, vibrations from the prey's movements ultimately trigger the "jaws" to shut via tiny hairs (called "trigger hairs" or "sensitive hairs") on their inner surfaces. Additionally, when an insect or spider touches one of these hairs, the trap prepares to close, only fully enclosing the prey if a second hair is contacted within (approximately) twenty seconds of the first contact. Triggers may occur as quickly as one-tenth of a second from initial contact.
The requirement of repeated, seemingly redundant triggering in this mechanism serves as a safeguard against energy loss and to avoid trapping objects with no nutritional value; the plant will only begin digestion after five more stimuli are activated, ensuring that it has caught a live prey animal worthy of consumption. These hairs also possess a heat sensor. A forest fire, for example, causes them to snap shut, making the plant more resilient to periods of summer fires.
Although widely cultivated for sale, the population of the Venus flytrap has been rapidly declining in its native range. As of 2017, the species was under Endangered Species Act review by the U.S. Fish & Wildlife Service.
Etymology
The plant's common name (originally "Venus's flytrap") refers to Venus, the Roman goddess of love. The genus name, Dionaea ("daughter of Dione"), refers to the Greek goddess Aphrodite, while the species name, muscipula, is Latin for both "mousetrap" and "flytrap". The Latin word muscipula ("mousetrap") is derived from mus ("mouse") and decipula ("trap"), while the homonym muscipula ("flytrap") is derived from musca ("fly") and decipula ("trap").
Historically, the plant was also known by the slang term "tipitiwitchet" or "tippity twitchet", possibly an oblique reference to the plant's resemblance to human female genitalia. The term is similar to the term tippet-de-witchet which derives from tippet and witchet (archaic term for vagina). In contrast, the English botanist John Ellis, who gave the plant its scientific name in 1768, wrote that the plant name tippitywichit was an indigenous word from either Cherokee or Catawba. The plant name according to the Handbook of American Indians derives from the Renape word titipiwitshik ("they (leaves) which wind around (or involve)").
Discovery by Europeans
On 2 April 1759, the North Carolina colonial governor, Arthur Dobbs, penned the first written description of the plant in a letter to English botanist Peter Collinson. In the letter he wrote: "We have a kind of Catch Fly Sensitive which closes upon anything that touches it. It grows in Latitude 34 but not in 35. I will try to save the seed here." A year later, Dobbs went into greater detail about the plant in a letter to Collinson dated Brunswick, 24 January 1760.
This was the first detailed recorded notice of the plant by Europeans. The description was before John Ellis' letter to The London Magazine on 1 September 1768, and his letter to Carl Linnaeus on 23 September 1768, in which he described the plant and proposed its English name Venus's Flytrap and scientific name Dionaea muscipula.
Description
The Venus flytrap is a small plant whose structure can be described as a rosette of four to seven leaves, which arise from a short subterranean stem that is actually a bulb-like object. Each stem reaches a maximum size of about three to ten centimeters, depending on the time of year; longer leaves with robust traps are usually formed after flowering. Flytraps that have more than seven leaves are colonies formed by rosettes that have divided beneath the ground.
Fly trap leaves
The leaf blade is divided into two regions: a flat, heart-shaped photosynthesis-capable petiole, and a pair of terminal lobes hinged at the midrib, forming the trap which is the true leaf. The upper surface of these lobes contains red anthocyanin pigments and its edges secrete mucilage. The lobes exhibit rapid plant movements, snapping shut when stimulated by prey. The trapping mechanism is tripped when prey contacts one of the three hair-like trichomes that are found on the upper surface of each of the lobes. The mechanism is so highly specialized that it can distinguish between living prey and non-prey stimuli, such as falling raindrops; two trigger hairs must be touched in succession within 20 seconds of each other or one hair touched twice in rapid succession, whereupon the lobes of the trap will snap shut, typically in about one-tenth of a second. The edges of the lobes are fringed by stiff hair-like protrusions or cilia, which mesh together and prevent large prey from escaping. These protrusions, and the trigger hairs (also known as sensitive hairs) are likely homologous with the tentacles found in this plant's close relatives, the sundews. Scientists have concluded that the snap trap evolved from a fly-paper trap similar to that of Drosera.
The holes in the meshwork allow small prey to escape, presumably because the benefit that would be obtained from them would be less than the cost of digesting them. If the prey is too small and escapes, the trap will usually reopen within 12 hours. If the prey moves around in the trap, it tightens and digestion begins more quickly.
Speed of closing can vary depending on the amount of humidity, light, size of prey, and general growing conditions. The speed with which traps close can be used as an indicator of a plant's general health. Venus flytraps are not as humidity-dependent as are some other carnivorous plants, such as Nepenthes, Cephalotus, most Heliamphora, and some Drosera.
The Venus flytrap exhibits variations in petiole shape and length and whether the leaf lies flat on the ground or extends up at an angle of about 40–60 degrees. The four major forms are: 'typica', the most common, with broad decumbent petioles; 'erecta', with leaves at a 45-degree angle; 'linearis', with narrow petioles and leaves at 45 degrees; and 'filiformis', with extremely narrow or linear petioles. Except for 'filiformis', all of these can be stages in leaf production of any plant depending on season (decumbent in summer versus short versus semi-erect in spring), length of photoperiod (long petioles in spring versus short in summer), and intensity of light (wide petioles in low light intensity versus narrow in brighter light).
Other parts
The plant also has a flower on top of a long stem. The flower is pollinated by various flying insects such as sweat bees, longhorn beetles and checkered beetles.
Habitat and distribution
Habitat
The Venus flytrap is found in nitrogen- and phosphorus-poor environments, such as bogs, wet savannahs, and canebrakes. Small in stature and slow-growing, the Venus flytrap tolerates fire well and depends on periodic burning to suppress its competition. Fire suppression threatens its future in the wild. It survives in wet sandy and peaty soils. Although it has been successfully transplanted and grown in many locales around the world, it is native only to the coastal bogs of North and South Carolina in the United States, specifically within a 100-kilometer (60 mi) radius of Wilmington, North Carolina. One such place is North Carolina's Green Swamp. There also appears to be a naturalized population of Venus flytraps in northern Florida as well as an introduced population in western Washington. The nutritional poverty of the soil is the reason it relies on such elaborate traps: insect prey provide the nitrogen for protein formation that the soil cannot. They tolerate mild winters, and require a period of winter dormancy to survive freezing temperatures and low photoperiods. It is a common misconception that Venus flytraps require dormancy if kept indoors under sufficient artificial light. However, most professional carnivorous plant growers recommend dormancy, and Venus fly traps grown without dormancy may require more light, water, and food to remain healthy.
They are full sun plants, usually found only in areas with less than 10% canopy cover. The habitats where it thrives are typically either too nutrient-poor for many noncarnivorous plants to survive, or frequently disturbed by fires which regularly clear vegetation and prevent a shady overstory from developing. It can be found living alongside herbaceous plants, grasses, sphagnum, and fire-dependent Arundinaria bamboos. Regular fire disturbance is an important part of its habitat, required every 3–5 years in most places for D. muscipula to thrive. After fire, D. muscipula seeds germinate well in ash and sandy soil, with seedlings growing well in the open post-fire conditions. The seeds germinate immediately without a dormant period.
Distribution
Dionaea muscipula occurs naturally only along the coastal plain of North and South Carolina in the U.S., with all known current sites within about 100 kilometers (60 mi) of Wilmington, North Carolina. A 1958 survey of herbaria specimens and old documents found 259 sites where the historical record documented the presence of D. muscipula, within 21 counties in North and South Carolina. As of 2019, it was considered extirpated in North Carolina in the inland counties of Moore, Robeson, and Lenoir, as well as the South Carolina coastal counties of Charleston and Georgetown. Remaining extant populations exist in North Carolina in Beaufort, Craven, Pamlico, Carteret, Jones, Onslow, Duplin, Pender, New Hanover, Brunswick, Columbus, Bladen, Sampson, Cumberland, and Hoke counties, and in South Carolina in Horry County.
Population
A large-scale survey in 2019, conducted by the North Carolina Natural Heritage Program, counted a total of 163,951 individual Venus flytraps in North Carolina and 4,876 in South Carolina, estimating a total of 302,000 individuals remaining in the wild in its native range. This represents a reduction of more than 93% from a 1979 estimate of approximately 4,500,000 individuals. A 1958 study found 259 confirmed extant or historic sites. As of 2016, there were 71 known sites where the plant could be found in the wild. Of these 71 sites, only 20 were classified as having excellent or good long-term viability.
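The percentage decline follows directly from the two headline estimates; a one-line check, taking the 1979 figure of roughly 4.5 million and the 2019 estimate of about 302,000:

```python
estimate_1979 = 4_500_000   # approximate wild individuals in 1979
estimate_2019 = 302_000     # estimated wild individuals in 2019

decline = 1 - estimate_2019 / estimate_1979
print(f"decline ~ {decline:.1%}")   # ~93.3%, matching the "more than 93%" above
```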
Carnivory
Prey selectivity
Most carnivorous plants selectively feed on specific prey. This selection is due to the available prey and the type of trap used by the organism. With the Venus flytrap, prey is limited to beetles, spiders and other crawling arthropods. The Dionaea diet is 33% ants, 30% spiders, 10% beetles, and 10% grasshoppers, with fewer than 5% flying insects.
Given that Dionaea evolved from an ancestral form of Drosera (carnivorous plants that use a sticky trap instead of a snap trap), the reason for this evolutionary branching becomes clear. Drosera consume smaller, aerial insects, whereas Dionaea consume larger terrestrial bugs. Dionaea are able to extract more nutrients from these larger bugs. This gives Dionaea an evolutionary advantage over their ancestral sticky trap form.
Mechanism of trapping
The Venus flytrap is one of a very small group of plants capable of rapid movement, such as Mimosa pudica, the telegraph plant, starfruit, sundews and bladderworts.
The mechanism by which the trap snaps shut involves a complex interaction between elasticity, turgor and growth. The trap only shuts when there have been two stimulations of the trigger hairs; this is to avoid inadvertent triggering of the mechanism by dust and other wind-borne debris. In the open, untripped state, the lobes are convex (bent outwards), but in the closed state, the lobes are concave (forming a cavity). It is the rapid flipping of this bistable state that closes the trap, but the mechanism by which this occurs is still poorly understood. When the trigger hairs are stimulated, an action potential (mostly involving calcium ions—see calcium in biology) is generated, which propagates across the lobes and stimulates cells in the lobes and in the midrib between them.
It is hypothesized that there is a threshold of ion buildup for the Venus flytrap to react to stimulation. The acid growth theory states that individual cells in the outer layers of the lobes and midrib rapidly move hydrogen ions (H+) into their cell walls, lowering the pH and loosening the extracellular components, which allows them to swell rapidly by osmosis, thus elongating and changing the shape of the trap lobe. Alternatively, cells in the inner layers of the lobes and midrib may rapidly secrete other ions, allowing water to follow by osmosis, and the cells to collapse. Both of these mechanisms may play a role and have some experimental evidence to support them.
Flytraps show an example of memory in plants; the plant knows if one of its trigger hairs has been touched, and remembers this for a few seconds. If a second touch occurs during that time frame, the flytrap closes. After closing, the flytrap counts additional stimulations of the trigger hairs, to five total, to start the production of digestive enzymes.
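The counting behaviour described above can be pictured as a small state machine: two hair deflections within a short memory window close the trap, and a cumulative total of about five triggers starts digestion. The toy model below encodes that logic with the 20-second window and five-touch threshold described in this article (the exact count is given slightly differently in different places in the text); it is an illustration of the logic, not a physiological simulation.

```python
class FlytrapModel:
    """Toy model of Venus flytrap trigger logic: two touches within a memory
    window close the trap; about five cumulative touches start digestion."""

    MEMORY_WINDOW_S = 20.0    # seconds a first touch is "remembered"
    DIGESTION_THRESHOLD = 5   # cumulative touches needed to start digestion

    def __init__(self):
        self.closed = False
        self.digesting = False
        self.touch_count = 0
        self.last_touch_time = None

    def touch(self, t: float) -> None:
        """Register a trigger-hair deflection at time t (in seconds)."""
        self.touch_count += 1
        if not self.closed:
            if (self.last_touch_time is not None
                    and t - self.last_touch_time <= self.MEMORY_WINDOW_S):
                self.closed = True            # second touch within the window
            self.last_touch_time = t
        if self.closed and self.touch_count >= self.DIGESTION_THRESHOLD:
            self.digesting = True             # struggling prey keeps triggering hairs

trap = FlytrapModel()
for t in [0.0, 3.0, 4.0, 6.0, 8.0]:   # a struggling insect
    trap.touch(t)
print(trap.closed, trap.digesting)     # True True
```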
Digestion
If the prey is unable to escape, it will continue to stimulate the inner surface of the lobes, and this causes a further growth response that forces the edges of the lobes together, eventually sealing the trap hermetically and forming a "stomach" in which digestion occurs. Release of the digestive enzymes is controlled by the hormone jasmonic acid, the same hormone that triggers the release of toxins as an anti-herbivore defense mechanism in non-carnivorous plants. (See Evolution below.) Once the digestive glands in the leaf lobes have been activated, digestion is catalysed by hydrolase enzymes secreted by the glands. One of these enzymes is GH18 chitinase, which breaks down the chitin-containing exoskeletons of trapped insects. Synthesis of this enzyme begins with at least five action potentials, which stimulate transcription of chitinase.
Oxidative protein modification is likely to be a pre-digestive mechanism used by Dionaea muscipula. Aqueous leaf extracts have been found to contain quinones such as the naphthoquinone plumbagin that couples to different NADH-dependent diaphorases to produce superoxide and hydrogen peroxide upon autoxidation. Such oxidative modification could rupture animal cell membranes. Plumbagin is known to induce apoptosis, associated with the regulation of the Bcl-2 family of proteins. When the Dionaea extracts were pre-incubated with diaphorases and NADH in the presence of serum albumin (SA), subsequent tryptic digestion of SA was facilitated. Since the secretory glands of Droseraceae contain proteases and possibly other degradative enzymes, it may be that the presence of oxygen-activating redox cofactors function as extracellular pre-digestive oxidants to render membrane-bound proteins of the prey (insects) more susceptible to proteolytic attacks.
Digestion takes about ten days, after which the prey is reduced to a husk of chitin. The trap then reopens, and is ready for reuse.
Evolution
Carnivory in plants is a very specialized form of foliar feeding, and is an adaptation found in several plants that grow in nutrient-poor soil. Carnivorous traps were naturally selected to allow these organisms to compensate for the nutrient deficiencies of their harsh environments and compensate for the reduced photosynthetic benefit. Phylogenetic studies have shown that carnivory in plants is a common adaptation in habitats with abundant sunlight and water but scarce nutrients. Carnivory has evolved independently six times in the angiosperms based on extant species, with likely many more carnivorous plant lineages now extinct.
The "snap trap" mechanism characteristic of Dionaea is shared with only one other carnivorous plant genus, Aldrovanda. For most of the 20th century, this relationship was thought to be coincidental, more precisely an example of convergent evolution. Some phylogenetic studies even suggested that the closest living relatives of Aldrovanda were the sundews. It was not until 2002 that a molecular evolutionary study, by analyzing combined nuclear and chloroplast DNA sequences, indicated that Dionaea and Aldrovanda were closely related and that the snap trap mechanism evolved only once in a common ancestor of the two genera.
A 2009 study presented evidence for the evolution of snap traps of Dionaea and Aldrovanda from a flypaper trap like Drosera regia, based on molecular data. The molecular and physiological data imply that Dionaea and Aldrovanda snap traps evolved from the flypaper traps of a common ancestor with Drosera. Pre-adaptations to the evolution of snap traps were identified in several species of Drosera, such as rapid leaf and tentacle movement. The model proposes that plant carnivory by snap trap evolved from the flypaper traps, driven by increasing prey size. Bigger prey provides greater nutritional value, but large insects can easily escape the sticky mucilage of flypaper traps; the evolution of snap traps would therefore prevent escape and kleptoparasitism (theft of prey captured by the plant before it can derive benefit from it), and would also permit a more complete digestion.
In 2016, a study of the expression of genes in the plant's leaves as they captured and digested prey was published in the journal Genome Research. The gene activation observed in the leaves of the plants gives support to the hypothesis that the carnivorous mechanisms present in the flytrap are a specially adapted version of mechanisms used by non-carnivorous plants to defend against herbivorous insects. In many non-carnivorous plants, jasmonic acid serves as a signaling molecule for the activation of defense mechanisms, such as the production of hydrolases, which can destroy chitin and other molecular components of insect and microbial pests. In the Venus flytrap, this same molecule has been found to be responsible for the activation of the plant's digestive glands. A few hours after the capture of prey, another set of genes is activated inside the glands, the same set of genes that is active in the roots of other plants, allowing them to absorb nutrients. The use of similar biological pathways in the traps as non-carnivorous plants use for other purposes indicates that somewhere in its evolutionary history, the Venus flytrap repurposed these genes to facilitate carnivory.
Proposed evolutionary history
Carnivorous plants are generally herbaceous, and their traps the result of primary growth. They generally do not form readily fossilizable structures such as thick bark or wood. As a result, there is no fossil evidence of the steps that might link Dionaea and Aldrovanda, or either genus with their common ancestor, Drosera. Nevertheless, it is possible to infer an evolutionary history based on phylogenetic studies of both genera. Researchers have proposed a series of steps that would ultimately result in the complex snap-trap mechanism:
Larger insects usually walk over the plant, instead of flying to it, and are more likely to break free from sticky glands alone. Therefore, a plant with wider leaves, like Drosera falconeri, must have adapted to move the trap and its stalks in directions that maximized its chance of capturing and retaining such prey—in this particular case, longitudinally. Once adequately "wrapped", escape would be more difficult.
Evolutionary pressure then selected for plants with shorter response time, in a manner similar to Drosera burmanni or Drosera glanduligera. The faster the closing, the less reliant on the flypaper model the plant would be.
As the trap became more and more active, the energy required to "wrap" the prey increased. Plants that could somehow differentiate between actual insects and random detritus/rain droplets would have an advantage, thus explaining the specialization of inner tentacles into trigger hairs.
Ultimately, as the plant relied more on closing around the insect rather than gluing them to the leaf surface, the tentacles so evident in Drosera would lose their original function altogether, becoming the "teeth" and trigger hairs—an example of natural selection utilizing pre-existing structures for new functions.
Completing the transition, the plant eventually developed the depressed digestive glands found inside the trap, rather than using the dews in the stalks, further differentiating it from genus Drosera.
Phylogenetic studies using molecular characters place the emergence of carnivory in the ancestors of Dionaea muscipula to 85.6 million years ago, and the development of the snap-trap in the ancestors of Dionaea and its sister genus Aldrovanda to approximately 48 million years ago.
Cultivation
Plants can be propagated by seed, taking around four to five years to reach maturity. More commonly, they are propagated by clonal division in spring or summer. Venus flytraps can also be propagated in vitro using plant tissue culture. Most Venus flytraps found for sale in nurseries and garden centers have been produced using this method, as it is the most cost-effective way to propagate them on a large scale. Regardless of the propagation method used, the plants will live for 20 to 30 years if cultivated in the right conditions.
Cultivars
Venus flytraps are by far the most commonly recognized and cultivated carnivorous plant, and they are frequently sold as houseplants. Various cultivars (cultivated varieties) have come into the market through tissue culture of selected genetic mutations, and these plants are raised in large quantities for commercial markets. The cultivars 'Akai Ryu' and 'South West Giant' have gained the Royal Horticultural Society's Award of Garden Merit.
Conservation
Although widely cultivated for sale as a houseplant, D. muscipula has suffered a significant decline in its population in the wild. The population in its native range is estimated to have decreased 93% since 1979.
Status
The species is under Endangered Species Act review by the U.S. Fish & Wildlife Service. The current review commenced in 2018, after an initial "90-day" review found that action may be warranted. A previous review in 1993 resulted in a determination that the plant was a "Potential candidate without sufficient data on vulnerability". The IUCN Red List classifies the species as "vulnerable". The State of North Carolina lists Dionaea muscipula as a species of "Special Concern–Vulnerable". The species is protected under Appendix II of the Convention on International Trade in Endangered Species (CITES) meaning international trade (including in parts and derivatives) is regulated by the CITES permitting system. NatureServe classified it as "Imperiled" (G2) in a 2018 review.
The U.S. Fish and Wildlife Service has not indicated a timeline to conclude its current review of Dionaea muscipula. The Endangered Species Act specifies a two-year timeline for a species review. However, the species listing process takes 12.1 years on average.
Threats
The Venus flytrap is only found in the wild in a very particular set of conditions, requiring flat land with moist, acidic, nutrient-poor soils that receive full sun and burn frequently in forest fires, and is therefore sensitive to many types of disturbance. A 2011 review identified five categories of threats for the species: agriculture, road-building, biological resource use (poaching and lumber activities), natural systems modifications (drainage and fire suppression), and pollution (fertilizer).
Habitat loss is a major threat to the species. The human population of the coastal Carolinas is rapidly expanding. For example, Brunswick County, North Carolina, which has the largest number of Venus flytrap populations, has seen a 27% increase in its human population from 2010 to 2018. As the population grows, residential and commercial development and road building directly eliminate flytrap habitat, while site preparation that entails ditching and draining can dry out soil in surrounding areas, destroying the viability of the species. Additionally, increased recreational use of natural areas in populated areas directly destroys the plants by crushing or uprooting them.
Fire suppression is another threat to the Venus flytrap. In the absence of regular fires, shrubs and trees encroach, outcompeting the species and leading to local extirpations. D. muscipula requires fire every 3–5 years, and thrives best with annual brush fires. Although flytraps and their seeds are typically killed alongside their competition in fires, seeds from flytraps adjacent to the burnt zone propagate quickly in the ash and full sun conditions that occur after a fire disturbance. Because the mature plants and new seedlings are typically destroyed in the regular fires that are necessary to maintain their habitat, D. muscipula's survival relies upon adequate seed production and dispersal from outside the burnt patches back into the burnt habitat, requiring a critical mass of populations, and exposing the success of any one population to metapopulation dynamics. These dynamics make small, isolated populations particularly vulnerable to extirpation, for if there are no mature plants adjacent to the fire zone, there is no source of seeds post-fire.
Poaching has been another cause of population decline. Harvesting Venus flytraps on public land became illegal in North Carolina in 1958, and since then a legal cultivation industry has formed, growing tens of thousands of flytraps in commercial greenhouses for sale as houseplants. Yet in 2016, The New York Times reported that demand for wild plants still exists, which "has led to a 'Venus flytrap crime ring'". In 2014, the state of North Carolina made Venus flytrap poaching a felony. Since then, several poachers have been charged, with one man receiving 17 months in prison for poaching 970 Venus flytraps, and another man charged with 73 felony counts in 2019. Poachers may do greater harm to the wild populations than a simple count of individuals taken would indicate, as they may selectively harvest the largest plants at a site, which have more flowers and fruit and therefore generate more seeds than smaller plants.
Additionally, the species is particularly vulnerable to catastrophic climate events. Most Venus flytrap sites are only 2–4 meters (6.5–13 feet) above sea level and are located in a region prone to hurricanes, making storm surges and rising sea levels a long-term threat.
Designations
In 2005, the Venus flytrap was designated as the state carnivorous plant of North Carolina.
In alternative medicine
Venus flytrap extract is available on the market as an herbal remedy, sometimes as the prime ingredient of a patent medicine named "Carnivora". According to the American Cancer Society, these products are promoted in alternative medicine as a treatment for a variety of human ailments including HIV, Crohn's disease and skin cancer, even though available scientific evidence does not support these health claims.
| Biology and health sciences | Caryophyllales | Plants |
310782 | https://en.wikipedia.org/wiki/Genetic%20testing | Genetic testing | Genetic testing, also known as DNA testing, is used to identify changes in DNA sequence or chromosome structure. Genetic testing can also include measuring the results of genetic changes, such as RNA analysis as an output of gene expression, or through biochemical analysis to measure specific protein output. In a medical setting, genetic testing can be used to diagnose or rule out suspected genetic disorders, predict risks for specific conditions, or gain information that can be used to customize medical treatments based on an individual's genetic makeup. Genetic testing can also be used to determine biological relatives, such as a child's biological parentage (genetic mother and father) through DNA paternity testing, or be used to broadly predict an individual's ancestry. Genetic testing of plants and animals can be used for similar reasons as in humans (e.g. to assess relatedness/ancestry or predict/diagnose genetic disorders), to gain information used for selective breeding, or for efforts to boost genetic diversity in endangered populations.
The variety of genetic tests has expanded throughout the years. Early forms of genetic testing which began in the 1950s involved counting the number of chromosomes per cell. Deviations from the expected number of chromosomes (46 in humans) could lead to a diagnosis of certain genetic conditions such as trisomy 21 (Down syndrome) or monosomy X (Turner syndrome). In the 1970s, a method to stain specific regions of chromosomes, called chromosome banding, was developed that allowed more detailed analysis of chromosome structure and diagnosis of genetic disorders that involved large structural rearrangements. In addition to analyzing whole chromosomes (cytogenetics), genetic testing has expanded to include the fields of molecular genetics and genomics which can identify changes at the level of individual genes, parts of genes, or even single nucleotide "letters" of DNA sequence. According to the National Institutes of Health, there are tests available for more than 2,000 genetic conditions, and one study estimated that as of 2018 there were more than 68,000 genetic tests on the market.
Types
Genetic testing is "the analysis of chromosomes (DNA), proteins, and certain metabolites in order to detect heritable disease-related genotypes, mutations, phenotypes, or karyotypes for clinical purposes." It can provide information about a person's genes and chromosomes throughout life.
Diagnostic testing
Cell-free fetal DNA (cffDNA) testing is a non-invasive (for the fetus) test. It is performed on a sample of venous blood from the mother, and can provide information about the fetus early in pregnancy. It is the most sensitive and specific screening test for Down syndrome.
Newborn screening is used just after birth to identify genetic disorders that can be treated early in life. A blood sample is collected with a heel prick from the newborn 24–48 hours after birth and sent to the lab for analysis. In the United States, newborn screening procedure varies state by state, but all states by law test for at least 21 disorders. If abnormal results are obtained, it does not necessarily mean the child has the disorder. Diagnostic tests must follow the initial screening to confirm the disease. The routine testing of infants for certain disorders is the most widespread use of genetic testing: millions of babies are tested each year in the United States. All states currently test infants for phenylketonuria (PKU, a genetic disorder that causes intellectual disability if left untreated) and congenital hypothyroidism (a disorder of the thyroid gland). People with PKU do not have an enzyme needed to process the amino acid phenylalanine, which is needed for normal growth in children and normal protein use throughout their lifetime. If there is a buildup of too much phenylalanine, brain tissue can be damaged, causing developmental delay. Newborn screening can detect the presence of PKU, allowing children to be placed on special diets to avoid the effects of the disorder.
Diagnostic testing is used to diagnose or rule out a specific genetic or chromosomal condition. In many cases, genetic testing is used to confirm a diagnosis when a particular condition is suspected based on physical signs and symptoms. Diagnostic testing can be performed at any time during a person's life, but is not available for all genes or all genetic conditions. The results of a diagnostic test can influence a person's choices about health care and the management of the disease. For example, people with a family history of polycystic kidney disease (PKD) who experience pain or tenderness in their abdomen, blood in their urine, frequent urination, pain in the sides, a urinary tract infection or kidney stones may decide to have their genes tested and the result could confirm the diagnosis of PKD. Despite the many applications of genetic testing in conditions such as epilepsy or neurodevelopmental disorders, many patients (especially adults) do not have access to these modern diagnostic approaches, leaving a significant diagnostic gap.
Carrier testing is used to identify people who carry one copy of a gene mutation that, when present in two copies, causes a genetic disorder. This type of testing is offered to individuals who have a family history of a genetic disorder and to people in ethnic groups with an increased risk of specific genetic conditions. If both parents are tested, the test can provide information about a couple's risk of having a child with a genetic condition like cystic fibrosis (a simple worked risk calculation is sketched after this list).
Preimplantation genetic diagnosis is performed on human embryos prior to implantation as part of an in vitro fertilization procedure. Pre-implantation testing is used when individuals try to conceive a child through in vitro fertilization. Eggs from the woman and sperm from the man are removed and fertilized outside the body to create multiple embryos. The embryos are individually screened for abnormalities, and the ones without abnormalities are implanted in the uterus.
Prenatal diagnosis is used to detect changes in a fetus's genes or chromosomes before birth. This type of testing is offered to couples with an increased risk of having a baby with a genetic or chromosomal disorder. In some cases, prenatal testing can lessen a couple's uncertainty or help them decide whether to abort the pregnancy. It cannot identify all possible inherited disorders and birth defects, however. One method of performing a prenatal genetic test involves an amniocentesis, which removes a sample of fluid from the mother's amniotic sac 15 to 20 or more weeks into pregnancy. The fluid is then tested for chromosomal abnormalities such as Down syndrome (trisomy 21) and trisomy 18, which can result in neonatal or fetal death. Test results can be retrieved within 7–14 days after the test is done. This method is 99.4% accurate at detecting and diagnosing fetal chromosome abnormalities. There is a slight risk of miscarriage with this test, about 1:400. Another method of prenatal testing is chorionic villus sampling (CVS). Chorionic villi are projections from the placenta that carry the same genetic makeup as the baby. During this method of prenatal testing, a sample of chorionic villi is removed from the placenta to be tested. This test is performed 10–13 weeks into pregnancy and results are ready 7–14 days after the test was done. Another test using blood taken from the fetal umbilical cord is percutaneous umbilical cord blood sampling.
Predictive and presymptomatic testing is used to detect gene mutations associated with disorders that appear after birth, often later in life. These tests can be helpful to people who have a family member with a genetic disorder, but who have no features of the disorder themselves at the time of testing. Predictive testing can identify mutations that increase a person's chances of developing disorders with a genetic basis, such as certain types of cancer. For example, an individual with a mutation in BRCA1 has a 65% cumulative risk of breast cancer. Hereditary breast and ovarian cancer syndrome is caused by alterations in the genes BRCA1 and BRCA2. Major cancer types related to mutations in these genes are female breast cancer, ovarian, prostate, pancreatic, and male breast cancer. Li-Fraumeni syndrome is caused by a gene alteration on the gene TP53. Cancer types associated with a mutation on this gene include breast cancer, soft tissue sarcoma, osteosarcoma (bone cancer), leukemia and brain tumors. In the Cowden syndrome there is a mutation on the PTEN gene, causing potential breast, thyroid or endometrial cancer. Presymptomatic testing can determine whether a person will develop a genetic disorder, such as hemochromatosis (an iron overload disorder), before any signs or symptoms appear. The results of predictive and presymptomatic testing can provide information about a person's risk of developing a specific disorder, help with making decisions about medical care and provide a better prognosis.
Pharmacogenomics determines the influence of genetic variation on drug response. When a person has a disease or health condition, pharmacogenomics can examine an individual's genetic makeup to determine what medicine and what dosage would be the safest and most beneficial to the patient. In the human population, there are approximately 11 million single nucleotide polymorphisms (SNPs) in people's genomes, making them the most common variations in the human genome. SNPs reveal information about an individual's response to certain drugs. This type of genetic testing can be used for cancer patients undergoing chemotherapy. A sample of the cancer tissue can be sent in for genetic analysis by a specialized lab. After analysis, information retrieved can identify mutations in the tumor which can be used to determine the best treatment option.
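As a concrete illustration of the kind of risk information carrier testing can provide (see the carrier testing entry above), the sketch below computes the chance that two tested parents have an affected child for a simple autosomal recessive condition such as cystic fibrosis. The Mendelian figure of a 1 in 4 risk when both parents are carriers is a standard textbook value; the function name and the simplifying assumptions (a single fully penetrant gene, a test that identifies carriers perfectly) are illustrative rather than taken from the source.

# Minimal sketch: offspring risk for an autosomal recessive condition,
# given each parent's carrier status from a carrier test.
# Assumes a single-gene, fully penetrant recessive model and a perfect test.

def affected_child_risk(parent1_carrier: bool, parent2_carrier: bool) -> float:
    """Probability that a child inherits two copies of the recessive allele."""
    # A carrier passes the mutated allele on with probability 1/2;
    # a non-carrier (two normal copies) passes it on with probability 0.
    p1 = 0.5 if parent1_carrier else 0.0
    p2 = 0.5 if parent2_carrier else 0.0
    return p1 * p2  # both copies must be inherited for the child to be affected

if __name__ == "__main__":
    print(affected_child_risk(True, True))   # 0.25 - both parents are carriers
    print(affected_child_risk(True, False))  # 0.0  - only one parent is a carrier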
Non-diagnostic testing
Forensic testing uses DNA sequences to identify an individual for legal purposes. Unlike the tests described above, forensic testing is not used to detect gene mutations associated with disease. This type of testing can identify crime or catastrophe victims, rule out or implicate a crime suspect, or establish biological relationships between people (for example, paternity).
Paternity testing uses special DNA markers to identify the same or similar inheritance patterns between related individuals. Because each person inherits half of their DNA from the father and half from the mother, scientists compare the DNA sequences of the tested individuals at a set of highly variable markers and use the pattern of shared alleles to draw a conclusion about relatedness (a simplified matching sketch follows after this list).
A genealogical DNA test is used to determine ancestry or ethnic heritage for genetic genealogy.
Research testing includes finding unknown genes, learning how genes work and advancing understanding of genetic conditions. The results of testing done as part of a research study are usually not available to patients or their healthcare providers.
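To make the marker-matching idea in the paternity testing entry above more concrete, here is a deliberately simplified sketch: at each tested marker (locus) a child carries two alleles, one inherited from each biological parent, so the alleged father must be able to account for one of the child's alleles at every locus (barring mutation). The locus names and allele values below are purely illustrative; real tests use validated panels of short tandem repeat loci and report statistical likelihood ratios rather than a simple yes/no check.

# Simplified paternity consistency check over hypothetical STR-style markers.
# Each genotype maps a locus name to the pair of allele values observed.

def consistent_with_paternity(child: dict, mother: dict, alleged_father: dict) -> bool:
    """True if, at every locus, the child's two alleles can be split into
    one allele present in the mother and one present in the alleged father."""
    for locus, (a, b) in child.items():
        mom = mother[locus]
        dad = alleged_father[locus]
        ok = (a in mom and b in dad) or (b in mom and a in dad)
        if not ok:
            return False  # an exclusion at any locus rules out simple consistency
    return True

# Toy example with made-up genotypes:
child  = {"locus_A": (13, 15), "locus_B": (6, 9)}
mother = {"locus_A": (13, 14), "locus_B": (9, 10)}
father = {"locus_A": (15, 16), "locus_B": (6, 7)}
print(consistent_with_paternity(child, mother, father))  # True in this toy example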
Medical procedure
Genetic testing is often done as part of a genetic consultation and as of mid-2008 there were more than 1,200 clinically applicable genetic tests available. Once a person decides to proceed with genetic testing, a medical geneticist, genetic counselor, primary care doctor, or specialist can order the test after obtaining informed consent.
Genetic tests are performed on a sample of blood, hair, skin, amniotic fluid (the fluid that surrounds a fetus during pregnancy), or other tissue. For example, a medical procedure called a buccal smear uses a small brush or cotton swab to collect a sample of cells from the inside surface of the cheek. Alternatively, a small amount of saline mouthwash may be swished in the mouth to collect the cells. The sample is sent to a laboratory where technicians look for specific changes in chromosomes, DNA, or proteins, depending on the suspected disorders, often using DNA sequencing. The laboratory reports the test results in writing to a person's doctor or genetic counselor.
Routine newborn screening tests are done on a small blood sample obtained by pricking the baby's heel with a lancet.
Risks and limitations
The physical risks associated with most genetic tests are very small, particularly for those tests that require only a blood sample or buccal smear (a procedure that samples cells from the inside surface of the cheek). The procedures used for prenatal testing carry a small but non-negligible risk of losing the pregnancy (miscarriage) because they require a sample of amniotic fluid or tissue from around the fetus.
Many of the risks associated with genetic testing involve the emotional, social, or financial consequences of the test results. People may feel angry, depressed, anxious, or guilty about their results. The potential negative impact of genetic testing has led to an increasing recognition of a "right not to know". In some cases, genetic testing creates tension within a family because the results can reveal information about other family members in addition to the person who is tested. The possibility of genetic discrimination in employment or insurance is also a concern. Some individuals avoid genetic testing out of fear it will affect their ability to purchase insurance or find a job. Health insurers do not currently require applicants for coverage to undergo genetic testing, and when insurers encounter genetic information, it is subject to the same confidentiality protections as any other sensitive health information. In the United States, the use of genetic information is governed by the Genetic Information Nondiscrimination Act (GINA) (see discussion below in the section on government regulation).
Genetic testing can provide only limited information about an inherited condition. The test often can't determine if a person will show symptoms of a disorder, how severe the symptoms will be, or whether the disorder will progress over time. Another major limitation is the lack of treatment strategies for many genetic disorders once they are diagnosed.
Another limitation of genetic testing for hereditary cancers is the occurrence of variants of unknown clinical significance. Because the human genome contains over 22,000 genes, the average person's genome carries about 3.5 million variants. A variant of unknown clinical significance is a change in the DNA sequence whose effect on cancer risk is unclear, because it is not known whether the change affects the gene's function.
A genetics professional can explain in detail the benefits, risks, and limitations of a particular test. It is important that any person who is considering genetic testing understand and weigh these factors before making a decision.
Other risks include incidental findings—a discovery of some possible problem found while looking for something else. In 2013 the American College of Medical Genetics and Genomics (ACMG) recommended that certain genes always be included any time a genomic sequencing was done, and that labs should report the results.
DNA studies have been criticised for a range of methodological problems and for providing misleading interpretations of racial classification.
Direct-to-consumer genetic testing
Direct-to-consumer (DTC) genetic testing (also called at-home genetic testing) is a type of genetic test that is accessible directly to the consumer without having to go through a health care professional. Usually, to obtain a genetic test, health care professionals such as physicians, nurse practitioners, or genetic counselors acquire their patient's permission and then order the desired test, which may or may not be covered by health insurance. DTC genetic tests, however, allow consumers to bypass this process and purchase DNA tests themselves. DTC genetic testing can entail primarily genealogical/ancestry-related information, health and trait-related information, or both. Genetic testing has been taken on by private companies, such as 23andMe, Ancestry.com, and Family Tree DNA. These companies will send the consumer a kit at their home address, with which they will provide a saliva sample for their lab to analyze. The company will then send back the consumer's results in a few weeks, which is a breakdown of their ancestral heritage and possible health risks that accompany it.
There are a variety of DTC genetic tests, ranging from tests for breast cancer alleles to mutations linked to cystic fibrosis. Possible benefits of DTC genetic testing are the accessibility of tests to consumers, promotion of proactive healthcare, and the privacy of genetic information. Possible additional risks of DTC genetic testing are the lack of governmental regulation, the potential misinterpretation of genetic information, issues related to testing minors, privacy of data, and downstream expenses for the public health care system. In the United States, most DTC genetic test kits are not reviewed by the Food and Drug Administration (FDA), with the exception of a few tests offered by the company 23andMe. As of 2019, the tests that have received marketing authorization by the FDA include 23andMe's genetic health risk reports for select variants of BRCA1/BRCA2, pharmacogenetic reports that test for selected variants associated with metabolism of certain pharmaceutical compounds, a carrier screening test for Bloom syndrome, and genetic health risk reports for a handful of other medical conditions, such as celiac disease and late-onset Alzheimer's.
Controversy
DTC genetic testing has been controversial due to outspoken opposition within the medical community. Critics of DTC genetic testing argue against the risks involved in several steps of the testing process, such as the unregulated advertising and marketing claims, the potential reselling of genetic data to third parties, and the overall lack of governmental oversight.
DTC genetic testing involves many of the same risks associated with any genetic test. One of the more obvious and dangerous of these is the possibility of misreading of test results. Without professional guidance, consumers can potentially misinterpret genetic information, causing them to be deluded about their personal health.
Some advertising for DTC genetic testing has been criticized as conveying an exaggerated and inaccurate message about the connection between genetic information and disease risk, utilizing emotions as a selling factor. An advertisement for a BRCA-predictive genetic test for breast cancer stated: "There is no stronger antidote for fear than information." Apart from rare diseases that are directly caused by specific, single-gene mutation, diseases "have complicated, multiple genetic links that interact strongly with personal environment, lifestyle, and behavior."
Ancestry.com, a company providing DTC DNA tests for genealogy purposes, has reportedly allowed the warrantless search of their database by police investigating a murder. The warrantless search led to a search warrant to force the gathering of a DNA sample from a New Orleans filmmaker; however he turned out not to be a match for the suspected killer.
Governmental genetic testing
In Estonia
As part of its healthcare system, Estonia is offering all of its residents genome-wide genotyping. This will be translated into personalized reports for use in everyday medical practice via the national e-health portal.
The aim is to minimise health problems by warning participants most at risk of conditions such as cardiovascular disease and diabetes. It is also hoped that participants who are given early warnings will adopt healthier lifestyles or take preventative drugs.
The Genographic Project
In 2005, National Geographic launched the "Genographic Project", which was a fifteen-year project that was discontinued in 2020. Over one million people participated in the DNA sampling from more than 140 countries, which made the project the largest of its kind ever conducted. The project asked for DNA samples from indigenous people as well as the general public, which spurred political controversy among some indigenous groups, leading to the coining of the term "biocolonialism".
Government regulation
In the United States
With regard to genetic testing and information in general, legislation in the United States called the Genetic Information Nondiscrimination Act prohibits group health plans and health insurers from denying coverage to a healthy person or charging that person higher premiums based solely on a genetic predisposition to developing a disease in the future. The legislation also bars employers from using genetic information when making hiring, firing, job placement, or promotion decisions.
Although GINA protects against genetic discrimination, Section 210 of the law states that once the disease has manifested, employers can use the medical information and not be in violation of the law, even if the condition has a genetic basis. The legislation, the first of its kind in the United States, was passed by the United States Senate on April 24, 2008, on a vote of 95–0, and was signed into law by President George W. Bush on May 21, 2008. It went into effect on November 21, 2009.
In June 2013 the US Supreme Court issued two rulings on human genetics. The Court struck down patents on human genes, opening up competition in the field of genetic testing. The Supreme Court also ruled that police were allowed to collect DNA from people arrested for serious offenses.
In the European Union
Effective as of 25 May 2018, companies that process genetic data must abide by the General Data Protection Regulation (GDPR). The GDPR is a set of rules/regulations that helps an individual take control of their data that is collected, used, and stored digitally or in a structured filing system on paper, and restricts a company's use of personal data. The regulation also applies to companies that offer products/services outside the EU.
In Germany
Genetic testing in Germany is governed by the Genetic Diagnostics Act (GenDG), which mandates that health-related genetic tests can only be carried out under medical supervision to ensure the proper interpretation of results and informed decision-making. The law emphasizes genetic counseling and informed consent, protecting individuals from potential misuse or misunderstanding of their genetic data.
In France
The legal status of genetic testing in France is regulated under strict privacy and data protection laws, including the Bioethics Law. Direct-to-consumer (DTC) genetic tests, especially those for health-related purposes, are prohibited unless conducted with medical oversight to ensure informed consent and appropriate counseling. This is due to concerns about the potential misuse of genetic data and privacy violations. While health-related genetic testing is allowed within a medical context, tests for non-medical purposes, such as ancestry or personal traits, also face legal restrictions, particularly regarding consumer access.
In Russia
Russian law provides that the processing of special categories of personal data relating to race, nationality, political views, religious or philosophical beliefs, health status, or intimate life is allowed if it is necessary in connection with the implementation of international agreements of the Russian Federation on readmission and is carried out in accordance with the legislation of the Russian Federation on citizenship. Information characterizing the physiological and biological features of a person, on the basis of which his or her identity can be established (biometric personal data), may be processed without the consent of the data subject in connection with the implementation of international agreements of the Russian Federation on readmission, the administration of justice and the execution of judicial acts, and compulsory state fingerprint registration, as well as in cases stipulated by the legislation of the Russian Federation on defense, security, anti-terrorism, transport security, anti-corruption, operational investigative activities, and public service, and in cases stipulated by the penal legislation of Russia, the legislation of Russia on the procedure for leaving and entering the Russian Federation, citizenship of the Russian Federation, and notaries.
Within the framework of this program, it is also planned to include the peoples of neighboring countries, which are the main source of migration, into the genogeographic study on the basis of existing collections.
In UAE
By the end of 2021, the UAE Genome Project will be in full swing, as part of the National Innovation Strategy, establishing strategic partnerships with top medical research centers, and making sustainable investments in healthcare services. The project aims to prevent genetic diseases through the use of genetic sciences and innovative modern techniques related to profiling and genetic sequencing, in order to identify the genetic footprint and prevent the most prevalent diseases in the country, such as obesity, diabetes, hypertension, cancer, and asthma. It aims to achieve personalized treatment for each patient based on genetic factors. Additionally, a study by Khalifa University has identified, for the first time, four genetic markers associated with type 2 diabetes among UAE citizens.
In Israel
The Israeli Knesset passed the Genetic Information Law in 2000, making Israel one of the first countries to establish a regulatory framework for the conduct of genetic testing and genetic counseling and for the handling and use of identified genetic information. Under the law, genetic tests must be done in labs accredited by the Ministry of Health; however, genetic tests may be conducted outside Israel. The law also forbids discrimination for employment or insurance purposes based on genetic test results. Finally, the law takes a strict approach to genetic testing on minors, which is permitted only for the purpose of finding a genetic match with someone ill for the sake of medical treatment, or to see whether the minor carries a gene related to an illness that can be prevented or postponed.
Under the Genetic Information Law as of 2019, commercial DNA tests are not permitted to be sold directly to the public, but can be obtained with a court order, due to data privacy, reliability, and misinterpretation concerns.
Children and religion
Three to five percent of the funding available for the Human Genome Project was set aside to study the many social, ethical, and legal implications that would result from the better understanding of human heredity and from the rapid expansion of genetic risk assessment by genetic testing which would be facilitated by this project.
Pediatric genetic testing
The American Academy of Pediatrics (AAP) and the American College of Medical Genetics (ACMG) have provided new guidelines for the ethical issue of pediatric genetic testing and screening of children in the United States. Their guidelines state that performing pediatric genetic testing should be in the best interest of the child. AAP and ACMG recommend holding off on genetic testing for late-onset conditions until adulthood, unless diagnosing genetic disorders during childhood can reduce morbidity or mortality (e.g., to start early intervention). Testing asymptomatic children who are at risk of childhood onset conditions can also be warranted.
Both AAP and ACMG discourage the use of direct-to-consumer and home kit genetic tests because of concerns regarding the accuracy, interpretation and oversight of test content.
Guidelines also state that parents or guardians should be encouraged to inform their child of the results from the genetic test if the minor is of appropriate age. For ethical and legal reasons, health care providers should be cautious in providing minors with predictive genetic testing without the involvement of parents or guardians. Within the guidelines set by AAP and ACMG, health care providers have an obligation to inform parents or guardians on the implication of test results.
AAP and ACMG state that any type of predictive genetic testing should be offered with genetic counseling by clinical genetics, genetic counselors or health care providers.
Israel
In Israel, DNA testing is used to determine if people are eligible for immigration. The policy where "many Jews from the former Soviet Union (FSU) are asked to provide DNA confirmation of their Jewish heritage in the form of paternity tests in order to immigrate as Jews and become citizens under Israel's Law of Return" has generated controversy.
Costs and time
From the date that a sample is taken, results may take weeks to months, depending upon the complexity and extent of the tests being performed. Results for prenatal testing are usually available more quickly because time is an important consideration in making decisions about a pregnancy. Prior to the testing, the doctor or genetic counselor who is requesting a particular test can provide specific information about the cost and time frame associated with that test.
| Technology | Biotechnology | null |
310889 | https://en.wikipedia.org/wiki/Coproduct | Coproduct | In category theory, the coproduct, or categorical sum, is a construction which includes as examples the disjoint union of sets and of topological spaces, the free product of groups, and the direct sum of modules and vector spaces. The coproduct of a family of objects is essentially the "least specific" object to which each object in the family admits a morphism. It is the category-theoretic dual notion to the categorical product, which means the definition is the same as the product but with all arrows reversed. Despite this seemingly innocuous change in the name and notation, coproducts can be and typically are dramatically different from products within a given category.
Definition
Let C be a category and let X_1 and X_2 be objects of C. An object is called the coproduct of X_1 and X_2, written X_1 ∐ X_2 or X_1 ⊕ X_2 or sometimes simply X_1 + X_2, if there exist morphisms i_1 : X_1 → X_1 ∐ X_2 and i_2 : X_2 → X_1 ∐ X_2 satisfying the following universal property: for any object Y and any morphisms f_1 : X_1 → Y and f_2 : X_2 → Y, there exists a unique morphism f : X_1 ∐ X_2 → Y such that f_1 = f ∘ i_1 and f_2 = f ∘ i_2. That is, the following diagram commutes:
The unique arrow f making this diagram commute may be denoted f_1 ∐ f_2 or [f_1, f_2]. The morphisms i_1 and i_2 are called canonical injections, although they need not be injections or even monic.
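The commuting diagram referred to above is not reproduced in this text; written out as equations, the universal property reads as follows (standard notation, not quoted from the source):

% Universal property of the binary coproduct X_1 \sqcup X_2 with injections i_1, i_2
\[
\forall\, Y,\ \forall\, f_1 : X_1 \to Y,\ f_2 : X_2 \to Y:\quad
\exists!\, f : X_1 \sqcup X_2 \to Y \ \text{ such that }\ f \circ i_1 = f_1 \ \text{ and }\ f \circ i_2 = f_2 .
\]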
The definition of a coproduct can be extended to an arbitrary family of objects indexed by a set J. The coproduct of the family {X_j : j ∈ J} is an object X together with a collection of morphisms i_j : X_j → X such that, for any object Y and any collection of morphisms f_j : X_j → Y, there exists a unique morphism f : X → Y such that f_j = f ∘ i_j. That is, the following diagram commutes for each j ∈ J:
The coproduct of the family {X_j} is often denoted ∐_{j∈J} X_j or ⊕_{j∈J} X_j.
Sometimes the morphism f may be denoted ∐_{j∈J} f_j to indicate its dependence on the individual f_j.
Examples
The coproduct in the category of sets is simply the disjoint union with the maps ij being the inclusion maps. Unlike direct products, coproducts in other categories are not all obviously based on the notion for sets, because unions don't behave well with respect to preserving operations (e.g. the union of two groups need not be a group), and so coproducts in different categories can be dramatically different from each other. For example, the coproduct in the category of groups, called the free product, is quite complicated. On the other hand, in the category of abelian groups (and equally for vector spaces), the coproduct, called the direct sum, consists of the elements of the direct product which have only finitely many nonzero terms. (It therefore coincides exactly with the direct product in the case of finitely many factors.)
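To make the Set example concrete, the disjoint union and the induced map out of it can be written out explicitly (a standard construction; tagging elements with an index is one common choice, not the only one):

% Disjoint union of sets as a coproduct, with canonical injections
\[
X_1 \sqcup X_2 = (X_1 \times \{1\}) \cup (X_2 \times \{2\}), \qquad i_k(x) = (x, k) \ \ (k = 1, 2),
\]
% and the unique induced map for given f_1 : X_1 \to Y and f_2 : X_2 \to Y
\[
[f_1, f_2](x, k) = f_k(x), \qquad \text{so that } [f_1, f_2] \circ i_k = f_k .
\]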
Given a commutative ring R, the coproduct in the category of commutative R-algebras is the tensor product. In the category of (noncommutative) R-algebras, the coproduct is a quotient of the tensor algebra (see free product of associative algebras).
In the case of topological spaces, coproducts are disjoint unions with their disjoint union topologies. That is, it is a disjoint union of the underlying sets, and the open sets are sets open in each of the spaces, in a rather evident sense. In the category of pointed spaces, fundamental in homotopy theory, the coproduct is the wedge sum (which amounts to joining a collection of spaces with base points at a common base point).
The concept of disjoint union secretly underlies the above examples: the direct sum of abelian groups is the group generated by the "almost" disjoint union (disjoint union of all nonzero elements, together with a common zero), similarly for vector spaces: the space spanned by the "almost" disjoint union; the free product for groups is generated by the set of all letters from a similar "almost disjoint" union where no two elements from different sets are allowed to commute. This pattern holds for any variety in the sense of universal algebra.
The coproduct in the category of Banach spaces with short maps is the l1 sum, which cannot be so easily conceptualized as an "almost disjoint" sum, but does have a unit ball almost-disjointly generated by the unit balls of the cofactors.
The coproduct of a poset category is the join operation.
Discussion
The coproduct construction given above is actually a special case of a colimit in category theory. The coproduct in a category C can be defined as the colimit of any functor from a discrete category J into C. Not every family will have a coproduct in general, but if it does, then the coproduct is unique in a strong sense: if X and X′ are two coproducts of the family {X_j}, with canonical injections i_j and i′_j respectively, then (by the definition of coproducts) there exists a unique isomorphism f : X → X′ such that f ∘ i_j = i′_j for each j ∈ J.
As with any universal property, the coproduct can be understood as a universal morphism. Let Δ : C → C × C be the diagonal functor which assigns to each object X the ordered pair (X, X) and to each morphism f the pair (f, f). Then the coproduct X_1 ∐ X_2 in C is given by a universal morphism to the functor Δ from the object (X_1, X_2) in C × C.
The coproduct indexed by the empty set (that is, an empty coproduct) is the same as an initial object in C.
If J is a set such that all coproducts for families indexed by J exist, then it is possible to choose the coproducts in a compatible fashion so that the coproduct turns into a functor C^J → C. The coproduct of the family {X_j} is then often denoted by ∐_j X_j,
and the maps i_j are known as the natural injections.
Letting Hom(U, V) denote the set of all morphisms from U to V in C (that is, a hom-set in C), we have a natural isomorphism
Hom(∐_j X_j, Y) ≅ ∏_j Hom(X_j, Y)
given by the bijection which maps every tuple of morphisms (f_j : X_j → Y)_{j∈J}
(a product in Set, the category of sets, which is the Cartesian product, so it is a tuple of morphisms) to the morphism
[f_j] : ∐_j X_j → Y.
That this map is a surjection follows from the commutativity of the diagram: any morphism f : ∐_j X_j → Y is the coproduct of the tuple (f ∘ i_j).
That it is an injection follows from the universal construction which stipulates the uniqueness of such maps. The naturality of the isomorphism is also a consequence of the diagram. Thus the contravariant hom-functor changes coproducts into products. Stated another way, the hom-functor, viewed as a functor from the opposite category C^op to Set is continuous; it preserves limits (a coproduct in C is a product in C^op).
If J is a finite set, say J = {1, ..., n}, then the coproduct of objects X_1, ..., X_n is often denoted by X_1 ⊕ ... ⊕ X_n. Suppose all finite coproducts exist in C, coproduct functors have been chosen as above, and 0 denotes the initial object of C corresponding to the empty coproduct. We then have natural isomorphisms
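The natural isomorphisms referred to in the preceding sentence do not survive in this copy of the text; the standard ones, stated here from general category theory rather than quoted from the source, are:

% Unit, commutativity, and associativity isomorphisms for finite coproducts
\[
X \oplus 0 \;\cong\; X \;\cong\; 0 \oplus X, \qquad
X \oplus Y \;\cong\; Y \oplus X, \qquad
(X \oplus Y) \oplus Z \;\cong\; X \oplus (Y \oplus Z).
\]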
These properties are formally similar to those of a commutative monoid; a category with finite coproducts is an example of a symmetric monoidal category.
If the category C has a zero object Z, then we have a unique morphism X → Z (since Z is terminal) and thus a morphism X ⊕ Y → Z ⊕ Y. Since Z is also initial, we have a canonical isomorphism Z ⊕ Y ≅ Y as in the preceding paragraph. We thus have morphisms X ⊕ Y → X and X ⊕ Y → Y, by which we infer a canonical morphism X ⊕ Y → X × Y. This may be extended by induction to a canonical morphism from any finite coproduct to the corresponding product. This morphism need not in general be an isomorphism; in Grp it is a proper epimorphism while in Set* (the category of pointed sets) it is a proper monomorphism. In any preadditive category, this morphism is an isomorphism and the corresponding object is known as the biproduct. A category with all finite biproducts is known as a semiadditive category.
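One standard way to describe this canonical morphism from a finite coproduct to the corresponding product (the usual componentwise description in a category with a zero object, not quoted from the source) is by what it does after composing with the injections and projections:

% Canonical morphism \varphi from finite coproduct to product, described componentwise
\[
\pi_j \circ \varphi \circ i_k =
\begin{cases}
\mathrm{id}_{X_k} & j = k, \\
0 & j \neq k,
\end{cases}
\qquad
\varphi : X_1 \oplus \cdots \oplus X_n \longrightarrow X_1 \times \cdots \times X_n .
\]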
If all families of objects indexed by J have coproducts in C, then the coproduct comprises a functor C^J → C. Note that, like the product, this functor is covariant.
| Mathematics | Category theory | null |
310898 | https://en.wikipedia.org/wiki/Croup | Croup | Croup ( ), also known as croupy cough, is a type of respiratory infection that is usually caused by a virus. The infection leads to swelling inside the trachea, which interferes with normal breathing and produces the classic symptoms of "barking/brassy" cough, inspiratory stridor and a hoarse voice. Fever and runny nose may also be present. These symptoms may be mild, moderate, or severe. Often it starts or is worse at night and normally lasts one to two days.
Croup can be caused by a number of viruses including parainfluenza and influenza virus. Rarely is it due to a bacterial infection. Croup is typically diagnosed based on signs and symptoms after potentially more severe causes, such as epiglottitis or an airway foreign body, have been ruled out. Further investigations, such as blood tests, X-rays and cultures, are usually not needed.
Many cases of croup are preventable by immunization for influenza and diphtheria. Most cases of croup are mild and the patient can be treated at home with supportive care. Croup is usually treated with a single dose of steroids by mouth. In more severe cases inhaled epinephrine may also be used. Hospitalization is required in one to five percent of cases.
Croup is a relatively common condition that affects about 15% of children at some point. It most commonly occurs between six months and five years of age but may rarely be seen in children as old as fifteen. It is slightly more common in males than females. It occurs most often in autumn. Before vaccination, croup was frequently caused by diphtheria and was often fatal. This cause is now very rare in the Western world due to the success of the diphtheria vaccine.
Signs and symptoms
Croup is characterized by a "barking" cough, stridor, hoarseness, and difficult breathing which usually worsens at night. The "barking" cough is often described as resembling the call of a sea lion. The stridor is worsened by agitation or crying, and if it can be heard at rest, it may indicate critical narrowing of the airways. As croup worsens, stridor may decrease considerably.
Other symptoms include fever, coryza (symptoms typical of the common cold), and indrawing of the chest wall–known as Hoover's sign. Drooling or a very sick appearance can indicate other medical conditions, such as epiglottitis or tracheitis.
Causes
Croup is usually deemed to be due to a viral infection, and the term most often refers to acute laryngotracheitis. Others use the term more broadly, to include acute laryngotracheitis (laryngitis and tracheitis together), spasmodic croup, laryngeal diphtheria, bacterial tracheitis, laryngotracheobronchitis, and laryngotracheobronchopneumonitis. The first two conditions involve a viral infection and are generally milder with respect to symptomatology; the last four are due to bacterial infection and are usually of greater severity.
Viral
Viral croup or acute laryngotracheitis is most commonly caused by parainfluenza virus (a member of the paramyxovirus family), primarily types 1 and 2, in 75% of cases. Other viral causes include influenza A and B, measles, adenovirus and respiratory syncytial virus (RSV). Spasmodic croup is caused by the same group of viruses as acute laryngotracheitis, but lacks the usual signs of infection (such as fever, sore throat, and increased white blood cell count). Treatment, and response to treatment, are also similar.
Bacteria and cocci
Croup caused by a bacterial infection is rare. Bacterial croup may be divided into laryngeal diphtheria, bacterial tracheitis, laryngotracheobronchitis, and laryngotracheobronchopneumonitis. Laryngeal diphtheria is due to Corynebacterium diphtheriae while bacterial tracheitis, laryngotracheobronchitis, and laryngotracheobronchopneumonitis are usually due to a primary viral infection with secondary bacterial growth. The most common cocci implicated are Staphylococcus aureus and Streptococcus pneumoniae, while the most common bacteria are Haemophilus influenzae, and Moraxella catarrhalis.
Pathophysiology
The viral infection that causes croup leads to swelling of the larynx, trachea, and large bronchi due to infiltration of white blood cells (especially histiocytes, lymphocytes, plasma cells, and neutrophils). Swelling produces airway obstruction which, when significant, leads to dramatically increased work of breathing and the characteristic turbulent, noisy airflow known as stridor.
Diagnosis
Croup is typically diagnosed based on signs and symptoms. The first step is to exclude other obstructive conditions of the upper airway, especially epiglottitis, an airway foreign body, subglottic stenosis, angioedema, retropharyngeal abscess, and bacterial tracheitis.
A frontal X-ray of the neck is not routinely performed, but if it is done, it may show a characteristic narrowing of the trachea, called the steeple sign, because of the subglottic stenosis, which resembles a steeple in shape. The steeple sign is suggestive of the diagnosis, but is absent in half of cases.
Other investigations (such as blood tests and viral culture) are discouraged, as they may cause unnecessary agitation and thus worsen the stress on the compromised airway. While viral cultures, obtained via nasopharyngeal aspiration, can be used to confirm the exact cause, these are usually restricted to research settings. Bacterial infection should be considered if a person does not improve with standard treatment, at which point further investigations may be indicated.
Severity
The most commonly used system for classifying the severity of croup is the Westley score. It is primarily used for research purposes rather than in clinical practice. It is the sum of points assigned for five factors: level of consciousness, cyanosis, stridor, air entry, and retractions. The points assigned for each factor are summed to give a final score ranging from 0 to 17 (a small scoring sketch appears at the end of this section).
A total score of ≤ 2 indicates mild croup. The characteristic barking cough and hoarseness may be present, but there is no stridor at rest.
A total score of 3–5 is classified as moderate croup. It presents with easily heard stridor, but with few other signs.
A total score of 6–11 is severe croup. It also presents with obvious stridor, but also features marked chest wall indrawing.
A total score of ≥ 12 indicates impending respiratory failure. The barking cough and stridor may no longer be prominent at this stage.
85% of children presenting to the emergency department have mild disease; severe croup is rare (<1%).
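A minimal sketch of how the Westley score described above is applied, assuming the commonly cited point ranges for each factor (consciousness 0 or 5, cyanosis 0, 4, or 5, stridor 0 to 2, air entry 0 to 2, retractions 0 to 3); these per-factor values are an assumption here, since the original scoring table is not reproduced in this text, while the severity bands match the thresholds listed above.

# Westley croup score: sum of five clinical factors, then classify severity.
# Per-factor maximum points below are assumed standard values; verify against a clinical source.
# All five factors should be supplied for a meaningful total (range 0-17).

FACTOR_MAX = {
    "consciousness": 5,
    "cyanosis": 5,
    "stridor": 2,
    "air_entry": 2,
    "retractions": 3,
}

def westley_severity(points: dict) -> str:
    """Sum the factor points and map the total to the severity bands given in the text."""
    for factor, value in points.items():
        if factor not in FACTOR_MAX or not 0 <= value <= FACTOR_MAX[factor]:
            raise ValueError(f"invalid points for {factor}: {value}")
    total = sum(points.values())
    if total <= 2:
        return "mild"
    if total <= 5:
        return "moderate"
    if total <= 11:
        return "severe"
    return "impending respiratory failure"

print(westley_severity({"consciousness": 0, "cyanosis": 0, "stridor": 1,
                        "air_entry": 1, "retractions": 1}))  # "moderate" (total 3)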
Prevention
Croup is contagious during the first few days of the infection. Basic hygiene including hand washing can prevent transmission. There are no vaccines that have been developed to prevent croup, however, many cases of croup have been prevented by immunization for influenza and diphtheria. At one time, croup referred to a diphtherial disease, but with vaccination, diphtheria is now rare in the developed world.
Treatment
Most children with croup have mild symptoms and supportive care at home is effective. For children with moderate to severe croup, treatment with corticosteroids and nebulized epinephrine may be suggested. Steroids are given routinely, with epinephrine used in severe cases. Children with oxygen saturation less than 92% should receive oxygen, and those with severe croup may be hospitalized for observation. In very rare severe cases of croup that result in respiratory failure, emergency intubation and ventilation may be required. With treatment, less than 0.2% of children require endotracheal intubation. Since croup is usually a viral disease, antibiotics are not used unless secondary bacterial infection is suspected. The use of cough medicines, which usually contain dextromethorphan or guaifenesin, is also discouraged.
Supportive care
Supportive care for children with croup includes resting and keeping the child hydrated. Infections that are mild are suggested to be treated at home. Croup is contagious so washing hands is important. Children with croup should generally be kept as calm as possible. Over the counter medications for pain and fever may be helpful to keep the child comfortable. There is some evidence that cool or warm mist may be helpful, however, the effectiveness of this approach is not clear. If the child is showing signs of distress while breathing (inspiratory stridor, working hard to breathe, blue (or blue-ish) coloured lips, or decrease in the level of alertness), immediate medical evaluation by a doctor is required.
Steroids
Corticosteroids, such as dexamethasone and budesonide, have been shown to improve outcomes in children with all severities of croup, however, the benefits may be delayed. Significant relief may be obtained as early as two hours after administration. While effective when given by injection, or by inhalation, giving the medication by mouth is preferred. A single dose is usually all that is required, and is generally considered to be quite safe. Dexamethasone at doses of 0.15, 0.3 and 0.6 mg/kg appear to be all equally effective.
Epinephrine
Moderate to severe croup (for example, in the case of severe stridor) may be improved temporarily with nebulized epinephrine. While epinephrine typically produces a reduction in croup severity within 10–30 minutes, the benefits are short-lived and last for only about 2 hours. If the condition remains improved for 2–4 hours after treatment and no other complications arise, the child is typically discharged from the hospital. Epinephrine treatment is associated with potential adverse effects (usually related to the dose of epinephrine) including tachycardia, arrhythmias, and hypertension.
Oxygen
More severe cases of croup may require treatment with oxygen. If oxygen is needed, "blow-by" administration (holding an oxygen source near the child's face) is recommended, as it causes less agitation than use of a mask.
Other
While other treatments for croup have been studied, none has sufficient evidence to support its use. There is tentative evidence that breathing heliox (a mixture of helium and oxygen) to decrease the work of breathing is useful in those with severe disease, however, there is uncertainty in the effectiveness and the potential adverse effects and/or side effects are not well known. In cases of possible secondary bacterial infection, the antibiotics vancomycin and cefotaxime are recommended. In severe cases associated with influenza A or B infections, the antiviral neuraminidase inhibitors may be administered.
Prognosis
Viral croup is usually a self-limiting disease, with half of cases resolving in a day and 80% of cases in two days. It can very rarely result in death from respiratory failure and/or cardiac arrest. Symptoms usually improve within two days, but may last for up to seven days. Other uncommon complications include bacterial tracheitis, pneumonia, and pulmonary edema.
Epidemiology
Croup affects about 15% of children, and usually presents between the ages of 6 months and 5–6 years. It accounts for about 5% of hospital admissions in this population. In rare cases, it may occur in children as young as 3 months and as old as 15 years. Males are affected 50% more frequently than are females, and there is an increased prevalence in autumn.
History
The word croup comes from the Early Modern English verb croup, meaning "to cry hoarsely." The noun describing the disease originated in southeastern Scotland and became widespread after Edinburgh physician Francis Home published the 1765 treatise An Inquiry into the Nature, Cause, and Cure of the Croup.
Diphtheritic croup has been known since the time of Homer's ancient Greece, and it was not until 1826 that viral croup was differentiated from croup due to diphtheria by Bretonneau. Viral croup was then called "faux-croup" by the French and often called "false croup" in English, as "croup" or "true croup" then most often referred to the disease caused by the diphtheria bacterium. False croup has also been known as pseudo croup or spasmodic croup. Croup due to diphtheria has become nearly unknown in affluent countries in modern times due to the advent of effective immunization.
One famous fatality of croup was Napoleon's designated heir, Napoléon Charles Bonaparte. His death in 1807 left Napoleon without an heir and contributed to his decision to divorce his wife, the Empress Josephine de Beauharnais.
| Biology and health sciences | Viral diseases | Health |
310959 | https://en.wikipedia.org/wiki/Subcategory | Subcategory | In mathematics, specifically category theory, a subcategory of a category C is a category S whose objects are objects in C and whose morphisms are morphisms in C with the same identities and composition of morphisms. Intuitively, a subcategory of C is a category obtained from C by "removing" some of its objects and arrows.
Formal definition
Let C be a category. A subcategory S of C is given by
a subcollection of objects of C, denoted ob(S),
a subcollection of morphisms of C, denoted hom(S).
such that
for every X in ob(S), the identity morphism idX is in hom(S),
for every morphism f : X → Y in hom(S), both the source X and the target Y are in ob(S),
for every pair of morphisms f and g in hom(S) the composite f ∘ g is in hom(S) whenever it is defined.
These conditions ensure that S is a category in its own right: its collection of objects is ob(S), its collection of morphisms is hom(S), and its identities and composition are as in C. There is an obvious faithful functor I : S → C, called the inclusion functor which takes objects and morphisms to themselves.
Let S be a subcategory of a category C. We say that S is a full subcategory of C if for each pair of objects X and Y of S, Hom_S(X, Y) = Hom_C(X, Y).
A full subcategory is one that includes all morphisms in C between objects of S. For any collection of objects A in C, there is a unique full subcategory of C whose objects are those in A.
Examples
The category of finite sets forms a full subcategory of the category of sets.
The category whose objects are sets and whose morphisms are bijections forms a non-full subcategory of the category of sets.
The category of abelian groups forms a full subcategory of the category of groups.
The category of rings (whose morphisms are unit-preserving ring homomorphisms) forms a non-full subcategory of the category of rngs.
For a field K, the category of K-vector spaces forms a full subcategory of the category of (left or right) K-modules.
Embeddings
Given a subcategory S of C, the inclusion functor I : S → C is both a faithful functor and injective on objects. It is full if and only if S is a full subcategory.
Some authors define an embedding to be a full and faithful functor. Such a functor is necessarily injective on objects up to isomorphism. For instance, the Yoneda embedding is an embedding in this sense.
Some authors define an embedding to be a full and faithful functor that is injective on objects.
Other authors define a functor to be an embedding if it is
faithful and
injective on objects.
Equivalently, F is an embedding if it is injective on morphisms. A functor F is then called a full embedding if it is a full functor and an embedding.
With the definitions of the previous paragraph, for any (full) embedding F : B → C the image of F is a (full) subcategory S of C, and F induces an isomorphism of categories between B and S. If F is not injective on objects then the image of F is equivalent to B.
In some categories, one can also speak of morphisms of the category being embeddings.
Types of subcategories
A subcategory S of C is said to be isomorphism-closed or replete if every isomorphism k : X → Y in C such that Y is in S also belongs to S. An isomorphism-closed full subcategory is said to be strictly full.
A subcategory of C is wide or lluf (a term first posed by Peter Freyd) if it contains all the objects of C. A wide subcategory is typically not full: the only wide full subcategory of a category is that category itself.
A Serre subcategory is a non-empty full subcategory S of an abelian category C such that for all short exact sequences
0 → M′ → M → M″ → 0
in C, M belongs to S if and only if both M′ and M″ do. This notion arises from Serre's C-theory.
| Mathematics | Category theory | null |
311282 | https://en.wikipedia.org/wiki/Gunship | Gunship | A gunship is a military aircraft armed with heavy aircraft guns, primarily intended for attacking ground targets either as airstrike or as close air support.
In modern usage the term "gunship" refers to fixed-wing aircraft having laterally-mounted heavy armaments (i.e. firing to the side) to attack ground or sea targets. These gunships are configured to circle the target instead of performing strafing runs. Such aircraft have their armament on one side harmonized to fire at the apex of an imaginary cone formed by the aircraft and the ground when performing a pylon turn (banking turn). The term "gunship" originated in the mid-19th century as a synonym for gunboat and also referred to the heavily armed ironclad steamships used during the American Civil War.
The term helicopter gunship is commonly used to describe armed helicopters.
World War II aviation
Bomber escort
During 1942 and 1943, the lack of a usable escort fighter for the United States Army Air Forces in the European Theatre of Operations led to experiments in dramatically increasing the armament of a standard Boeing B-17F Flying Fortress, and later a single Consolidated B-24D Liberator, to each have 14 to 16 Browning AN/M2 .50 cal machine guns as the Boeing YB-40 Flying Fortress and Consolidated XB-41 Liberator respectively. These were to accompany regular heavy bomber formations over occupied Europe on strategic bombing raids for long-range escort duties as "flying destroyer gunships". The YB-40 was sometimes described as a gunship, and a small 25-aircraft batch of the B-17-derived gunships were built, with a dozen of these deployed to Europe; the XB-41 had problems with stability and did not progress.
Attack aircraft
During World War II, the urgent need for hard-hitting attack aircraft led to the development of the heavily armed gunship versions of the North American B-25 Mitchell. For use against shipping in the Pacific 405 B-25Gs were armed with a 75 mm (2.95 in) M4 cannon and a thousand B-25Hs followed. The H models, delivered from August 1943, moved the dorsal turret forward to just behind the cockpit and were armed with the lighter 75 mm T13E1 cannon. The B-25J variant removed the 75 mm gun but carried a total of eighteen 0.50 cal (12.7 mm) AN/M2 Browning machine guns, more than any other contemporary American aircraft: eight in the nose, four in under-cockpit conformal flank-mount gun pod packages, two in the dorsal turret, one each in the pair of waist positions, and a pair in the tail, giving a maximum of fourteen guns firing forward in strafing runs. Later the B-25J was armed with eight 5 in (130 mm) high velocity aircraft rockets (HVARs).
The British also made large numbers of twin-engined fighter bombers. The de Havilland Mosquito FB.VI had a fixed armament of four Hispano Mk.II cannon and four Browning machine guns, together with up to of bombs in the bomb bay and on racks housed in streamlined fairings under each wing, or up to eight "60lb" RP-3 rockets. De Havilland also produced seventeen Mosquito FB Mk XVIIIs armed with a QF 6-pdr anti-tank gun with autoloader, which were used against German ships and U-boats.
The Germans also made a sizable number of heavy fighter types (Zerstörer—"destroyer") armed with heavy guns (Bordkanone). Dedicated "tankbuster" aircraft such as the Ju 87Gs (Kanonenvogel) were armed with two BK 37 mm autocannon in underwing gun pods. The Ju 88P gunships were armed with guns, and were used as tankbusters and as bomber destroyers. The Hs 129 could carry an MK 101 cannon or MK 103 cannon in a conformally mounted gun pod (B-2/R-2). The Me 410 Hornisse were armed with the same BK 50 mm autocannon as the Ju 88P-4, but were only used as bomber destroyers. None of the German twin-engine heavy fighter types were produced or converted in large numbers.
Post–World War II aviation
Fixed-wing aircraft
In the more modern, post-World War II fixed-wing aircraft category, a gunship is an aircraft having laterally-mounted heavy armaments (i.e. firing to the side) to attack ground or sea targets. These gunships were configured to circle the target instead of performing strafing runs. Such aircraft have their armament on one side harmonized to fire at the apex of an imaginary cone formed by the aircraft and the ground when performing a pylon turn (banking turn).
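The side-firing geometry described above can be made concrete with a small back-of-the-envelope calculation. The Python sketch below is purely illustrative; the altitude and orbit radius are assumed example values, not figures taken from this article.

import math

# Assumed example values (not from this article): orbit altitude and horizontal radius.
altitude_m = 1000.0      # height of the aircraft above the target
orbit_radius_m = 1500.0  # horizontal distance from the target (the "pylon" point)

# Slant range from the aircraft to the pylon point on the ground.
slant_range_m = math.hypot(altitude_m, orbit_radius_m)

# Depression angle below the horizontal at which the side-firing guns must be
# harmonized so that their fire converges on the pylon point during the banking turn.
depression_deg = math.degrees(math.atan2(altitude_m, orbit_radius_m))

print(f"slant range ≈ {slant_range_m:.0f} m, gun depression ≈ {depression_deg:.1f} degrees")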
The Douglas AC-47 Spooky was the first notable modern gunship. In 1964, during the Vietnam War, the popular Douglas C-47 Skytrain transport was successfully modified into a gunship by the United States Air Force with three side-firing Miniguns for circling attacks. At the time the aircraft was known as a "Dragonship", "Puff, the Magic Dragon" or "Spooky" (officially designated FC-47, later corrected to AC-47). Its three miniguns could selectively fire either 50 or 100 rounds per second. Cruising in an overhead left-hand orbit at air speed at an altitude of , the gunship could put a bullet or glowing red tracer (every fifth round) into every square yard of a football field–sized target in potentially less than 10 seconds. And, as long as its 45-flare and 24,000-round basic load of ammunition held out, it could do this intermittently while loitering over the target for hours.
The lesser known Fairchild AC-119G Shadow and AC-119K Stingers were twin-engine piston-powered gunships developed by the United States during the Vietnam War. Armed with four 7.62 mm GAU-2/A Miniguns (and two 20 mm (0.787 in) M61 Vulcan six-barrel rotary cannons in the AC-119K version), they replaced the Douglas AC-47 Spooky and operated alongside the early versions of the AC-130 Spectre gunship.
It was the later and larger Lockheed AC-130 Gunship II that became the modern, post–World War II origin of the term gunship in military aviation. These heavily armed aircraft used a variety of weapon systems, including GAU-2/A Miniguns, M61 Vulcan six-barrel rotary cannons, GAU-12/U Equalizer five-barreled rotary cannons, Mk44 Bushmaster II chain guns, 40 mm (1.58 in) L/60 Bofors autocannons, and M102 howitzers. The Douglas AC-47 Spooky, the Fairchild AC-119, and the AC-130 Spectre/Spooky, were vulnerable, and meant to operate only after achieving air superiority.
Smaller gunship designs such as the Fairchild AU-23 Peacemaker and the Helio AU-24 Stallion were also designed by the United States during the Vietnam War. These aircraft were meant to be cheap and easy to fly and maintain, and were to be given to friendly governments in Southeast Asia to assist with counter-insurgency operations, eventually seeing service with the Khmer National Air Force, Royal Thai Air Force, and Republic of Vietnam Air Force as well as limited use by the United States Air Force.
Renewed interest in the concept of gunships has resulted in the development of a gunship variant of the Alenia C-27J Spartan. Although the United States Air Force decided not to procure the AC-27J, other nations including Italy have chosen the aircraft for introduction. Additionally, in 2013 the US Air Force Special Operations Command reportedly tested a gunship version of the C-145A Skytruck armed with a GAU-18 twin-mount machine gun system.
Air forces in the Middle East have since begun to experiment with gunships smaller than the AC-130, with the Royal Jordanian Air Force converting two AC-235s and a single AC-295 into gunships. These are armed with ATK’s side-mounted M230 chain guns and various munitions (rockets, Hellfire missiles, and bombs) mounted on wing pylons.
Other smaller modern gunships include the AC-27J Stinger II and the MC-27J produced by Alenia Aeronautica in Italy.
Helicopter gunships
Early helicopter gunships also operated in the side-firing configuration, with an early example being the Aérospatiale Alouette III. During the Overseas wars in Africa in the 1960s, the Portuguese Air Force experimented with the installation of M2 Browning machine guns in a side-firing twin-mounting configuration in some of its Alouette III helicopters. Later, the machine guns were replaced by an MG 151 20 mm cannon in a single mounting. These helicopters were known in Portuguese service as "helicanhões" (heli-cannons) and were used to escort unarmed transport helicopters in air assault operations and to provide fire support to troops on the ground. The South African and Rhodesian air forces later used armed Alouette IIIs in similar configurations to the Portuguese, in the South African Border War and the Rhodesian Bush War respectively.
During the Algerian War, the French operated Sikorsky H-34 "Pirate" armed with a German 20mm MG151 cannon and two .50 inch machine guns. During the early days of the Vietnam War, USMC H-34s were among the first helicopter gunships in theater, fitted with the Temporary Kit-1 (TK-1), comprising two M60C machine guns and two 19-shot 2.75 inch rocket pods. The operations were met with mixed enthusiasm, and the armed H-34s, known as "Stingers", were quickly phased out. The TK-1 kit would form the basis of the TK-2 kit used later on the UH-1E helicopters of the USMC.
The U.S. Army also experimented with H-34 gunships armed with M2 .50 caliber machine-guns and 2.75-inch rockets. In September 1971, a CH-34 was armed with two M2 .50 caliber machine guns, four M1919 .30 caliber machine guns, forty 2.75-inch rockets, two 5-inch high velocity aerial rockets (HVAR), plus two additional .30 caliber machine guns in the left side aft windows and one .50 caliber machine gun in the right side cargo door. The result was the world's most heavily armed helicopter at the time.
Also, during the Vietnam War, the ubiquitous Bell UH-1 Iroquois helicopters were modified into gunships by mounting the U.S. Helicopter Armament Subsystems—these were forward-firing weapons, such as machine guns, rockets, and autocannons, that began to appear in 1962–1963. Helicopters can use a variety of combat maneuvers to approach a target. In their case, the term gunship is synonymous with heavily armed helicopter. Specifically, dedicated attack helicopters such as the Bell AH-1 Cobra also fit this meaning. In any case, the gunship armaments include machine guns, rockets, and missiles.
The Soviet Mil Mi-24 (NATO code name: Hind) is a large, heavily armed and armored helicopter gunship and troop transport. It was introduced in the 1970s and operated by the pre-1991 Soviet Air Force and its successors post-1991, and more than 30 other nations. It was heavily armed with a reinforced fuselage, designed to withstand .50 caliber (12.7 mm) machine gun fire. Its armored cockpits and titanium rotor head are able to withstand 20 mm cannon hits.
Examples
Fixed-wing aircraft
Basler BT-67
Douglas AC-47
Fairchild AU-23 Peacemaker
Fairchild AC-119
Lockheed AC-130
Helio AU-24 Stallion
L3Harris OA-1K Sky Warden
Airbus AC-235
Airbus AC-295
Helicopters
Aérospatiale SA319 Alouette III
Aérospatiale SA 330 Puma
Boeing ACH-47 Chinook
Bell UH-1B/C/M
Mil Mi-24
HAL Rudra
HAL Lancer
HAL Prachand
Sikorsky MH-60L DAP
Z-9WA
| Technology | Military aviation | null |
28730822 | https://en.wikipedia.org/wiki/Algebraic%20number%20field | Algebraic number field | In mathematics, an algebraic number field (or simply number field) is an extension field of the field of rational numbers such that the field extension has finite degree (and hence is an algebraic field extension).
Thus K is a field that contains Q and has finite dimension when considered as a vector space over Q.
The study of algebraic number fields, that is, of algebraic extensions of the field of rational numbers, is the central topic of algebraic number theory. This study reveals hidden structures behind the rational numbers, by using algebraic methods.
Definition
Prerequisites
The notion of algebraic number field relies on the concept of a field. A field consists of a set of elements together with two operations, namely addition, and multiplication, and some distributivity assumptions. These operations make the field into an abelian group under addition, and they make the nonzero elements of the field into another abelian group under multiplication. A prominent example of a field is the field of rational numbers, commonly denoted together with its usual operations of addition and multiplication.
Another notion needed to define algebraic number fields is vector spaces. To the extent needed here, vector spaces can be thought of as consisting of sequences (or tuples)
whose entries are elements of a fixed field, such as the field Any two such sequences can be added by adding the corresponding entries. Furthermore, all members of any sequence can be multiplied by a single element c of the fixed field. These two operations known as vector addition and scalar multiplication satisfy a number of properties that serve to define vector spaces abstractly. Vector spaces are allowed to be "infinite-dimensional", that is to say that the sequences constituting the vector spaces may be of infinite length. If, however, the vector space consists of finite sequences
the vector space is said to be of finite dimension, .
Definition
An algebraic number field (or simply number field) is a finite-degree field extension of the field of rational numbers. Here degree means the dimension of the field as a vector space over
Examples
The smallest and most basic number field is the field of rational numbers. Many properties of general number fields are modeled after the properties of Q. At the same time, many other properties of algebraic number fields are substantially different from the properties of rational numbers—one notable example is that the ring of algebraic integers of a number field is not a principal ideal domain, in general.
The Gaussian rationals, denoted Q(i) (read as "Q adjoined i"), form the first (historically) non-trivial example of a number field. Its elements are elements of the form where both a and b are rational numbers and i is the imaginary unit. Such expressions may be added, subtracted, and multiplied according to the usual rules of arithmetic and then simplified using the identity i² = −1. Explicitly, for real numbers a, b, c, d: (a + bi) + (c + di) = (a + c) + (b + d)i and (a + bi)(c + di) = (ac − bd) + (ad + bc)i. Non-zero Gaussian rational numbers are invertible, which can be seen from the identity (a + bi)·(a − bi)/(a² + b²) = 1. It follows that the Gaussian rationals form a number field that is two-dimensional as a vector space over Q.
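As a concrete illustration of this arithmetic (an illustrative aside, not part of the original text), the short Python sketch below represents a Gaussian rational a + bi as a pair of exact fractions and implements multiplication and inversion using the identities just described.

from fractions import Fraction

# A Gaussian rational a + b*i is stored as a pair (a, b) of rational numbers.
def gauss_mul(x, y):
    a, b = x
    c, d = y
    # (a + bi)(c + di) = (ac - bd) + (ad + bc)i, using i^2 = -1
    return (a * c - b * d, a * d + b * c)

def gauss_inv(x):
    a, b = x
    n = a * a + b * b          # a^2 + b^2, nonzero whenever (a, b) != (0, 0)
    return (a / n, -b / n)     # 1/(a + bi) = (a - bi)/(a^2 + b^2)

z = (Fraction(1, 2), Fraction(3, 4))   # the element 1/2 + (3/4)i
print(gauss_mul(z, gauss_inv(z)))      # (Fraction(1, 1), Fraction(0, 1)), i.e. 1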
More generally, for any square-free integer the quadratic field is a number field obtained by adjoining the square root of to the field of rational numbers. Arithmetic operations in this field are defined in analogy with the case of Gaussian rational numbers,
The cyclotomic field where , is a number field obtained from by adjoining a primitive th root of unity . This field contains all complex th roots of unity and its dimension over is equal to , where is the Euler totient function.
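The dependence of the degree on n can be tabulated with a tiny sketch (illustrative only, using a naive totient implementation):

from math import gcd

def phi(n):
    # Euler's totient: how many k in 1..n are coprime to n
    return sum(1 for k in range(1, n + 1) if gcd(k, n) == 1)

# The degree of the n-th cyclotomic field over the rationals equals phi(n).
for n in (3, 4, 5, 8, 12):
    print(n, phi(n))   # e.g. phi(12) = 4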
Non-examples
The real numbers, R, and the complex numbers, C, are fields that have infinite dimension as Q-vector spaces; hence, they are not number fields. This follows from the uncountability of R and C as sets, whereas every number field is necessarily countable.
The set of ordered pairs of rational numbers, with the entry-wise addition and multiplication, is a two-dimensional commutative algebra over Q. However, it is not a field, since it has zero divisors: (1, 0) · (0, 1) = (0, 0).
Algebraicity, and ring of integers
Generally, in abstract algebra, a field extension is algebraic if every element of the bigger field is the zero of a (nonzero) polynomial with coefficients in the smaller field.
Every field extension of finite degree is algebraic. (Proof: for x in K, simply consider the powers 1, x, x², …, xⁿ – these n + 1 elements are linearly dependent over Q, which gives a nonzero polynomial with rational coefficients that x is a root of.) In particular this applies to algebraic number fields, so any element x of an algebraic number field K can be written as a zero of a polynomial with rational coefficients. Therefore, elements of K are also referred to as algebraic numbers. Given a polynomial p such that p(x) = 0, it can be arranged such that the leading coefficient is one, by dividing all coefficients by it, if necessary. A polynomial with this property is known as a monic polynomial. In general it will have rational coefficients.
If, however, the monic polynomial's coefficients are actually all integers, x is called an algebraic integer.
Any (usual) integer z is an algebraic integer, as it is the zero of the linear monic polynomial:
p(t) = t − z.
It can be shown that any algebraic integer that is also a rational number must actually be an integer, hence the name "algebraic integer". Again using abstract algebra, specifically the notion of a finitely generated module, it can be shown that the sum and the product of any two algebraic integers is still an algebraic integer. It follows that the algebraic integers in form a ring denoted called the ring of integers of It is a subring of (that is, a ring contained in) A field contains no zero divisors and this property is inherited by any subring, so the ring of integers of is an integral domain. The field is the field of fractions of the integral domain This way one can get back and forth between the algebraic number field and its ring of integers Rings of algebraic integers have three distinctive properties: firstly, is an integral domain that is integrally closed in its field of fractions Secondly, is a Noetherian ring. Finally, every nonzero prime ideal of is maximal or, equivalently, the Krull dimension of this ring is one. An abstract commutative ring with these three properties is called a Dedekind ring (or Dedekind domain), in honor of Richard Dedekind, who undertook a deep study of rings of algebraic integers.
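The closure of the algebraic integers under sums and products can be probed numerically. The following sketch is only an illustrative check, not a proof: it verifies that √2 + √3, a sum of two algebraic integers, satisfies the monic integer polynomial x⁴ − 10x² + 1.

import math

x = math.sqrt(2) + math.sqrt(3)     # a sum of two algebraic integers
value = x**4 - 10 * x**2 + 1        # candidate monic integer polynomial evaluated at x
print(abs(value) < 1e-9)            # True, up to floating-point rounding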
Unique factorization
For general Dedekind rings, in particular rings of integers, there is a unique factorization of ideals into a product of prime ideals. For example, the ideal (6) in the ring Z[√−5] of quadratic integers factors into prime ideals as
(6) = (2, 1 + √−5)² (3, 1 + √−5) (3, 1 − √−5).
However, unlike Z as the ring of integers of Q, the ring of integers of a proper extension of Q need not admit unique factorization of numbers into a product of prime numbers or, more precisely, prime elements. This happens already for quadratic integers, for example in Z[√−5] the uniqueness of the factorization fails:
6 = 2 · 3 = (1 + √−5) · (1 − √−5).
Using the norm it can be shown that these two factorizations are actually inequivalent in the sense that the factors do not just differ by a unit in Z[√−5]. Euclidean domains are unique factorization domains; for example the ring Z[i] of Gaussian integers and the ring Z[ω] of Eisenstein integers, where ω is a cube root of unity (unequal to 1), have this property.
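The norm argument can be spelled out with a short brute-force sketch (illustrative only). In Z[√−5] the norm of a + b√−5 is a² + 5b²; since no element has norm 2 or 3, the elements 2, 3, 1 + √−5 and 1 − √−5 are all irreducible, so the two factorizations of 6 above are genuinely different.

def norm(a, b):
    # Norm of a + b*sqrt(-5) in Z[sqrt(-5)]
    return a * a + 5 * b * b

# Search a (more than large enough) box of coefficients for elements of norm 2 or 3.
hits = [(a, b) for a in range(-10, 11) for b in range(-10, 11) if norm(a, b) in (2, 3)]
print(hits)                                               # [] -- no such elements exist

# Norms of the factors appearing in the two factorizations of 6:
print(norm(2, 0), norm(3, 0), norm(1, 1), norm(1, -1))    # 4 9 6 6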
Analytic objects: ζ-functions, L-functions, and class number formula
The failure of unique factorization is measured by the class number, commonly denoted h, the cardinality of the so-called ideal class group. This group is always finite. The ring of integers OK possesses unique factorization if and only if it is a principal ring or, equivalently, if K has class number 1. Given a number field, the class number is often difficult to compute. The class number problem, going back to Gauss, is concerned with the existence of imaginary quadratic number fields (i.e., Q(√−d) with d a positive integer) with prescribed class number. The class number formula relates h to other fundamental invariants of K. It involves the Dedekind zeta function ζK(s), a function in a complex variable s, defined by
ζK(s) = ∏p 1 / (1 − N(p)^(−s)).
(The product is over all prime ideals p of OK; N(p) denotes the norm of the prime ideal or, equivalently, the (finite) number of elements in the residue field OK/p. The infinite product converges only for Re(s) > 1; in general, analytic continuation and the functional equation for the zeta-function are needed to define the function for all s.)
The Dedekind zeta-function generalizes the Riemann zeta-function in that ζQ(s) = ζ(s).
The class number formula states that ζK(s) has a simple pole at s = 1 and at this point the residue is given by
lim (s → 1) (s − 1) ζK(s) = (2^r1 · (2π)^r2 · h · Reg) / (w · √|D|).
Here r1 and r2 classically denote the number of real embeddings and pairs of complex embeddings of K, respectively. Moreover, Reg is the regulator of K, w the number of roots of unity in K, and D is the discriminant of K.
Dirichlet L-functions are a more refined variant of . Both types of functions encode the arithmetic behavior of and , respectively. For example, Dirichlet's theorem asserts that in any arithmetic progression
with the initial term and the common difference coprime, there are infinitely many prime numbers. This theorem is implied by the fact that the Dirichlet L-function is nonzero at s = 1. Using much more advanced techniques including algebraic K-theory and Tamagawa measures, modern number theory deals with a description, if largely conjectural (see Tamagawa number conjecture), of values of more general L-functions.
Bases for number fields
Integral basis
An integral basis for a number field of degree is a set
B = {b1, …, bn}
of n algebraic integers in such that every element of the ring of integers of can be written uniquely as a Z-linear combination of elements of B; that is, for any x in we have
x = m1b1 + ⋯ + mnbn,
where the mi are (ordinary) integers. It is then also the case that any element of can be written uniquely as
m1b1 + ⋯ + mnbn,
where now the mi are rational numbers. The algebraic integers of are then precisely those elements of where the mi are all integers.
Working locally and using tools such as the Frobenius map, it is always possible to explicitly compute such a basis, and it is now standard for computer algebra systems to have built-in programs to do this.
Power basis
Let be a number field of degree Among all possible bases of (seen as a -vector space), there are particular ones known as power bases, that are bases of the form
for some element By the primitive element theorem, there exists such an , called a primitive element. If can be chosen in and such that is a basis of as a free Z-module, then is called a power integral basis, and the field is called a monogenic field. An example of a number field that is not monogenic was first given by Dedekind. His example is the field obtained by adjoining a root of the polynomial
Regular representation, trace and discriminant
Recall that any field extension has a unique -vector space structure. Using the multiplication in , an element of the field over the base field may be represented by matrices
by requiring
Here is a fixed basis for , viewed as a -vector space. The rational numbers are uniquely determined by and the choice of a basis since any element of can be uniquely represented as a linear combination of the basis elements. This way of associating a matrix to any element of the field is called the regular representation. The square matrix represents the effect of multiplication by in the given basis. It follows that if the element of is represented by a matrix , then the product is represented by the matrix product . Invariants of matrices, such as the trace, determinant, and characteristic polynomial, depend solely on the field element and not on the basis. In particular, the trace of the matrix is called the trace of the field element and denoted , and the determinant is called the norm of x and denoted .
Now this can be generalized slightly by instead considering a field extension and giving an -basis for . Then, there is an associated matrix , which has trace and norm defined as the trace and determinant of the matrix .
Example
Consider the field extension where . Then, we have a -basis given by since any can be expressed as some -linear combination Then, we can take some where and compute . Writing this out gives
We can find the matrix by writing out the associated matrix equation giving showing
We can then compute the trace and determinant with relative ease, giving the trace and norm.
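To make the regular representation concrete, here is a small sketch for the field obtained by adjoining √2 to the rationals (an illustrative example chosen for this note; the article's own worked example may involve a different field). In the basis {1, √2}, multiplication by a + b√2 is represented by the matrix [[a, 2b], [b, a]], whose trace 2a and determinant a² − 2b² are the trace and norm of the element.

from fractions import Fraction

def mult_matrix(a, b):
    # Matrix of "multiply by a + b*sqrt(2)" with respect to the basis {1, sqrt(2)}
    return [[a, 2 * b],
            [b, a]]

def trace(m):
    return m[0][0] + m[1][1]

def det(m):
    return m[0][0] * m[1][1] - m[0][1] * m[1][0]

a, b = Fraction(3), Fraction(1, 2)   # the element 3 + (1/2)*sqrt(2)
M = mult_matrix(a, b)
print(trace(M), det(M))              # 6 and 17/2, i.e. Tr = 2a and N = a^2 - 2b^2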
Properties
By definition, standard properties of traces and determinants of matrices carry over to Tr and N: Tr(x) is a linear function of x, as expressed by Tr(x + y) = Tr(x) + Tr(y) and Tr(λx) = λ Tr(x), and the norm is a multiplicative homogeneous function of degree n: N(xy) = N(x) N(y) and N(λx) = λ^n N(x). Here λ is a rational number, and x, y are any two elements of K.
The trace form is a bilinear form defined by means of the trace, as
t(x, y) = Tr(x·y).
The integral trace form, an integer-valued symmetric matrix, is defined as tij = Tr(bi·bj), where b1, ..., bn is an integral basis for OK. The discriminant of K is defined as det(t). It is an integer, and is an invariant property of the field K, not depending on the choice of integral basis.
The matrix associated to an element x of can also be used to give other, equivalent descriptions of algebraic integers. An element x of is an algebraic integer if and only if the characteristic polynomial pA of the matrix A associated to x is a monic polynomial with integer coefficients. Suppose that the matrix A that represents an element x has integer entries in some basis e. By the Cayley–Hamilton theorem, pA(A) = 0, and it follows that pA(x) = 0, so that x is an algebraic integer. Conversely, if x is an element of that is a root of a monic polynomial with integer coefficients then the same property holds for the corresponding matrix A. In this case it can be proven that A is an integer matrix in a suitable basis of The property of being an algebraic integer is defined in a way that is independent of a choice of a basis in
Example with integral basis
Consider , where x satisfies . Then an integral basis is [1, x, (x² + 1)/2], and the corresponding integral trace form is
The "3" in the upper left hand corner of this matrix is the trace of the matrix of the map defined by the first basis element (1) in the regular representation of on This basis element induces the identity map on the 3-dimensional vector space, The trace of the matrix of the identity map on a 3-dimensional vector space is 3.
The determinant of this is , the field discriminant; in comparison the root discriminant, or discriminant of the polynomial, is .
Places
Mathematicians of the nineteenth century assumed that algebraic numbers were a type of complex number. This situation changed with the discovery of p-adic numbers by Hensel in 1897; and now it is standard to consider all of the various possible embeddings of a number field into its various topological completions at once.
A place of a number field K is an equivalence class of absolute values on K. Essentially, an absolute value is a notion to measure the size of elements of K. Two such absolute values are considered equivalent if they give rise to the same notion of smallness (or proximity). The equivalence relation between absolute values is given by some real number λ > 0 such that |x|₁ = |x|₀^λ for all x in K, meaning we take the value of the norm to the λ-th power.
In general, the types of places fall into three regimes. Firstly (and mostly irrelevant), the trivial absolute value | |0, which takes the value 1 on all non-zero elements. The second and third classes are Archimedean places and non-Archimedean (or ultrametric) places. The completion of K with respect to a place is given in both cases by taking Cauchy sequences in K and dividing out null sequences, that is, sequences whose absolute values tend to zero. This can be shown to be a field again, the so-called completion of K at the given place, denoted Kv.
For the rational numbers Q, the following non-trivial norms occur (Ostrowski's theorem): the (usual) absolute value, sometimes denoted | |∞, which gives rise to the complete topological field of the real numbers R. On the other hand, for any prime number p, the p-adic absolute value is defined by
|q|p = p^(−n), where q = p^n · a/b and a and b are integers not divisible by p.
It is used to construct the p-adic numbers Qp. In contrast to the usual absolute value, the p-adic absolute value gets smaller when q is multiplied by p, leading to quite different behavior of Qp as compared to R.
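A direct implementation of this definition for rational arguments (an illustrative sketch):

from fractions import Fraction

def p_adic_abs(q, p):
    # |q|_p = p**(-n) where q = p**n * a/b with a and b not divisible by p; |0|_p = 0.
    q = Fraction(q)
    if q == 0:
        return Fraction(0)
    n = 0
    num, den = q.numerator, q.denominator
    while num % p == 0:
        num //= p
        n += 1
    while den % p == 0:
        den //= p
        n -= 1
    return Fraction(1, p**n) if n >= 0 else Fraction(p**(-n))

print(p_adic_abs(63, 3))               # 1/9, since 63 = 3^2 * 7
print(p_adic_abs(Fraction(5, 27), 3))  # 27,  since 5/27 = 3^(-3) * 5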
Note the general situation typically considered is taking a number field and considering a prime ideal for its associated ring of algebraic numbers Then, there will be a unique place called a non-Archimedean place. In addition, for every embedding there will be a place called an Archimedean place, denoted This statement is a theorem also called Ostrowski's theorem.
Examples
The field for where is a fixed 6th root of unity, provides a rich example for constructing explicit real and complex Archimedean embeddings, and non-Archimedean embeddings as well.
Archimedean places
Here we use the standard notation r1 and r2 for the number of real and complex embeddings, respectively (see below).
Calculating the archimedean places of a number field is done as follows: let be a primitive element of , with minimal polynomial (over ). Over , will generally no longer be irreducible, but its irreducible (real) factors are either of degree one or two. Since there are no repeated roots, there are no repeated factors. The roots of factors of degree one are necessarily real, and replacing by gives an embedding of into ; the number of such embeddings is equal to the number of real roots of Restricting the standard absolute value on to gives an archimedean absolute value on ; such an absolute value is also referred to as a real place of On the other hand, the roots of factors of degree two are pairs of conjugate complex numbers, which allows for two conjugate embeddings into Either one of this pair of embeddings can be used to define an absolute value on , which is the same for both embeddings since they are conjugate. This absolute value is called a complex place of
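In practice r1 and r2 can be read off from the roots of the minimal polynomial. The sketch below is illustrative only; it relies on numerical root finding, which is adequate when the real and complex roots are clearly separated.

import numpy as np

# Minimal polynomial x^3 - 2 of the real cube root of 2 (coefficients, highest degree first).
coeffs = [1, 0, 0, -2]
roots = np.roots(coeffs)

r1 = sum(1 for r in roots if abs(r.imag) < 1e-9)   # real roots give real places
r2 = (len(roots) - r1) // 2                        # conjugate pairs give complex places
print(r1, r2)                                      # 1 1, so one real and one complex place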
If all roots of the minimal polynomial above are real (respectively, complex) or, equivalently, any possible embedding is actually forced to be inside R (resp. C), the field is called totally real (resp. totally complex).
Non-Archimedean or ultrametric places
To find the non-Archimedean places, let again f and p be as above. Over the p-adic numbers, f splits into factors of various degrees, none of which are repeated, and the degrees of which add up to the degree of f. For each of these p-adically irreducible factors, we obtain an embedding of the number field into an algebraic extension of finite degree over the p-adic numbers. Such a local field behaves in many ways like a number field, and the p-adic numbers may similarly play the role of the rationals; in particular, we can define the norm and trace in exactly the same way, now giving functions mapping to the p-adic numbers. By using this p-adic norm map for the place, we may define an absolute value corresponding to a given p-adically irreducible factor of a given degree by taking the corresponding root of the norm's p-adic absolute value. Such an absolute value is called an ultrametric, non-Archimedean or p-adic place of the number field.
For any ultrametric place v we have that |x|v ≤ 1 for any x in since the minimal polynomial for x has integer factors, and hence its p-adic factorization has factors in Zp. Consequently, the norm term (constant term) for each factor is a p-adic integer, and one of these is the integer used for defining the absolute value for v.
Prime ideals in OK
For an ultrametric place v, the subset of defined by |x|v < 1 is an ideal of This relies on the ultrametricity of v: given x and y in then
|x + y|v ≤ max (|x|v, |y|v) < 1.
Actually, is even a prime ideal.
Conversely, given a prime ideal p of OK, a discrete valuation can be defined by setting vp(x) = n, where n is the biggest integer such that x is contained in the n-fold power of the ideal p. This valuation can be turned into an ultrametric place. Under this correspondence, (equivalence classes of) ultrametric places of K correspond to prime ideals of OK. For K = Q this gives back Ostrowski's theorem: any prime ideal in Z (which is necessarily generated by a single prime number) corresponds to a non-Archimedean place and vice versa. However, for more general number fields, the situation becomes more involved, as will be explained below.
Yet another, equivalent way of describing ultrametric places is by means of localizations of Given an ultrametric place on a number field the corresponding localization is the subring of of all elements such that | x |v ≤ 1. By the ultrametric property is a ring. Moreover, it contains For every element x of at least one of x or x−1 is contained in Actually, since K×/T× can be shown to be isomorphic to the integers, is a discrete valuation ring, in particular a local ring. Actually, is just the localization of at the prime ideal so Conversely, is the maximal ideal of
Altogether, there is a three-way equivalence between ultrametric absolute values, prime ideals, and localizations on a number field.
Lying over theorem and places
Some of the basic theorems in algebraic number theory are the going up and going down theorems, which describe the behavior of some prime ideal when it is extended as an ideal in for some field extension We say that an ideal lies over if Then, one incarnation of the theorem states that a prime ideal in lies over; hence there is always a surjective map induced from the inclusion. Since there exists a correspondence between places and prime ideals, this means we can find places dividing a place that is induced from a field extension. That is, if is a place of then there are places of that divide in the sense that their induced prime ideals divide the induced prime ideal of in
In fact, this observation is useful while looking at the base change of an algebraic field extension of a number field to one of its completions. Writing the field as a quotient by the minimal polynomial of a primitive element, the induced polynomial decomposes into a product of factors over the completion because of Hensel's lemma, and the base-changed algebra splits accordingly into a product of local fields. Moreover, there are embeddings sending the primitive element to a root of each factor; hence the resulting local fields can be written as subsets of the completion of the algebraic closure.
Ramification
Ramification, generally speaking, describes a geometric phenomenon that can occur with finite-to-one maps (that is, maps such that the preimages of all points y in Y consist only of finitely many points): the fibers f−1(y) will generally consist of the same number of points, but it can happen that, at special points y, this number drops. For example, the map
has n points in each fiber over t, namely the n (complex) roots of t, except in t = 0, where the fiber consists of only one element, z = 0. One says that the map is "ramified" in zero. This is an example of a branched covering of Riemann surfaces. This intuition also serves to define ramification in algebraic number theory. Given a (necessarily finite) extension of number fields, a prime ideal p of the smaller field's ring of integers generates the ideal pOK of OK. This ideal may or may not be a prime ideal, but, according to the Lasker–Noether theorem (see above), it is always given by
pOK = q1^e1 q2^e2 ⋯ qm^em
with uniquely determined prime ideals qi of OK and numbers (called ramification indices) ei. Whenever one ramification index is bigger than one, the prime p is said to ramify in K.
The connection between this definition and the geometric situation is delivered by the map of spectra of rings In fact, unramified morphisms of schemes in algebraic geometry are a direct generalization of unramified extensions of number fields.
Ramification is a purely local property, i.e., depends only on the completions around the primes p and qi. The inertia group measures the difference between the local Galois groups at some place and the Galois groups of the involved finite residue fields.
An example
The following example illustrates the notions introduced above. In order to compute the ramification index of where
f(x) = x³ − x − 1 = 0,
at 23, it suffices to consider the field extension Up to 529 = 23² (i.e., modulo 529) f can be factored as
f(x) = (x + 181)(x² − 181x − 38) = gh.
Substituting in the first factor g modulo 529 yields y + 191, so the valuation | y |g for y given by g is | −191 |23 = 1. On the other hand, the same substitution in h yields Since 161 = 7 × 23,
Since possible values for the absolute value of the place defined by the factor h are not confined to integer powers of 23, but instead are integer powers of the square root of 23, the ramification index of the field extension at 23 is two.
The valuations of any element of can be computed in this way using resultants. If, for example y = x² − x − 1, using the resultant to eliminate x between this relationship and f = x³ − x − 1 = 0 gives . If instead we eliminate with respect to the factors g and h of f, we obtain the corresponding factors for the polynomial for y, and then the 23-adic valuation applied to the constant (norm) term allows us to compute the valuations of y for g and h (which are both 1 in this instance).
Dedekind discriminant theorem
Much of the significance of the discriminant lies in the fact that ramified ultrametric places are all places obtained from factorizations in where p divides the discriminant. This is even true of the polynomial discriminant; however the converse is also true, that if a prime p divides the discriminant, then there is a p-place that ramifies. For this converse the field discriminant is needed. This is the Dedekind discriminant theorem. In the example above, the discriminant of the number field with x³ − x − 1 = 0 is −23, and as we have seen the 23-adic place ramifies. The Dedekind discriminant tells us it is the only ultrametric place that does. The other ramified place comes from the absolute value on the complex embedding of .
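Both statements about 23 can be verified directly with a short sketch (illustrative only): the discriminant of a cubic x³ + px + q is −4p³ − 27q², which equals −23 here, and reducing x³ − x − 1 modulo 23 exhibits the repeated root responsible for the ramification.

def cubic_disc(p, q):
    # Discriminant of the depressed cubic x^3 + p*x + q
    return -4 * p**3 - 27 * q**2

print(cubic_disc(-1, -1))              # -23, the discriminant of x^3 - x - 1

# Roots of x^3 - x - 1 over the field with 23 elements, found by brute force.
roots = [x for x in range(23) if (x**3 - x - 1) % 23 == 0]
print(roots)                           # [3, 10]
print((3 * 10**2 - 1) % 23)            # 0: the derivative also vanishes at 10, so 10 is a
                                       # double root and f = (x - 10)^2 (x - 3) mod 23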
Galois groups and Galois cohomology
Generally in abstract algebra, field extensions K / L can be studied by examining the Galois group Gal(K / L), consisting of field automorphisms of K leaving L elementwise fixed. As an example, the Galois group of the cyclotomic field extension of degree n (see above) is given by (Z/nZ)×, the group of invertible elements in Z/nZ. This is the first stepping stone into Iwasawa theory.
In order to include all possible extensions having certain properties, the Galois group concept is commonly applied to the (infinite) field extension / K of the algebraic closure, leading to the absolute Galois group G := Gal( / K) or just Gal(K), and to the extension . The fundamental theorem of Galois theory links fields in between and its algebraic closure and closed subgroups of Gal(K). For example, the abelianization (the biggest abelian quotient) Gab of G corresponds to a field referred to as the maximal abelian extension Kab (called so since any further extension is not abelian, i.e., does not have an abelian Galois group). By the Kronecker–Weber theorem, the maximal abelian extension of is the extension generated by all roots of unity. For more general number fields, class field theory, specifically the Artin reciprocity law gives an answer by describing Gab in terms of the idele class group. Also notable is the Hilbert class field, the maximal abelian unramified field extension of . It can be shown to be finite over , its Galois group over is isomorphic to the class group of , in particular its degree equals the class number h of (see above).
In certain situations, the Galois group acts on other mathematical objects, for example a group. Such a group is then also referred to as a Galois module. This enables the use of group cohomology for the Galois group Gal(K), also known as Galois cohomology, which in the first place measures the failure of exactness of taking Gal(K)-invariants, but offers deeper insights (and questions) as well. For example, the Galois group G of a field extension L / K acts on L×, the nonzero elements of L. This Galois module plays a significant role in many arithmetic dualities, such as Poitou-Tate duality. The Brauer group of K, originally conceived to classify division algebras over K, can be recast as a cohomology group, namely H2(Gal(K), K̄×), where K̄ denotes an algebraic closure of K.
Local-global principle
Generally speaking, the term "local to global" refers to the idea that a global problem is first done at a local level, which tends to simplify the questions. Then, of course, the information gained in the local analysis has to be put together to get back to some global statement. For example, the notion of sheaves reifies that idea in topology and geometry.
Local and global fields
Number fields share a great deal of similarity with another class of fields much used in algebraic geometry known as function fields of algebraic curves over finite fields. An example is Fp(T), the field of rational functions in one variable over the field with p elements. They are similar in many respects, for example in that number rings are one-dimensional regular rings, as are the coordinate rings (the quotient fields of which are the function fields in question) of curves. Therefore, both types of field are called global fields. In accordance with the philosophy laid out above, they can be studied at a local level first, that is to say, by looking at the corresponding local fields. For number fields, the local fields are the completions at all places, including the archimedean ones (see local analysis). For function fields, the local fields are the completions of the local rings at all points of the curve.
Many results valid for function fields also hold, at least if reformulated properly, for number fields. However, the study of number fields often poses difficulties and phenomena not encountered in function fields. For example, in function fields, there is no dichotomy into non-archimedean and archimedean places. Nonetheless, function fields often serve as a source of intuition for what should be expected in the number field case.
Hasse principle
A prototypical question, posed at a global level, is whether some polynomial equation has a solution in If this is the case, this solution is also a solution in all completions. The local-global principle or Hasse principle asserts that for quadratic equations, the converse holds, as well. Thereby, checking whether such an equation has a solution can be done on all the completions of which is often easier, since analytic methods (classical analytic tools such as intermediate value theorem at the archimedean places and p-adic analysis at the nonarchimedean places) can be used. This implication does not hold, however, for more general types of equations. However, the idea of passing from local data to global ones proves fruitful in class field theory, for example, where local class field theory is used to obtain global insights mentioned above. This is also related to the fact that the Galois groups of the completions Kv can be explicitly determined, whereas the Galois groups of global fields, even of are far less understood.
Adeles and ideles
In order to assemble local data pertaining to all local fields attached to the adele ring is set up. A multiplicative variant is referred to as ideles.
| Mathematics | Other | null |
1350158 | https://en.wikipedia.org/wiki/Giant%20Magellan%20Telescope | Giant Magellan Telescope | The Giant Magellan Telescope (GMT) is a ground-based, extremely large telescope currently under construction at Las Campanas Observatory in Chile's Atacama Desert. With a primary mirror diameter of 25.4 meters, it is expected to be the largest Gregorian telescope ever built, observing in optical and mid-infrared wavelengths (320–25,000 nm). Commissioning of the telescope is anticipated in the early 2030s.
The GMT will feature seven of the world's largest mirrors, collectively providing a light-collecting area of 368 square meters. It is expected to have a resolving power approximately 10 times greater than the Hubble Space Telescope and four times greater than the James Webb Space Telescope. However, it will not be able to observe in the same infrared frequencies as space-based telescopes. The GMT will be used to explore a wide range of astrophysical phenomena, including the search for signs of life on exoplanets and the study of the cosmic origins of chemical elements.
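The quoted resolving-power comparison follows from the diffraction limit θ ≈ 1.22 λ / D. The sketch below is illustrative; the 2.4 m Hubble aperture and the 500 nm test wavelength are assumed values, not figures from this article.

import math

def diffraction_limit_arcsec(wavelength_m, aperture_m):
    # Rayleigh criterion: theta = 1.22 * lambda / D (radians), converted to arcseconds
    theta_rad = 1.22 * wavelength_m / aperture_m
    return math.degrees(theta_rad) * 3600

wavelength = 500e-9                                 # assumed visible-light test wavelength
gmt = diffraction_limit_arcsec(wavelength, 25.4)    # GMT primary mirror diameter
hst = diffraction_limit_arcsec(wavelength, 2.4)     # assumed Hubble aperture
print(f'GMT ≈ {gmt:.4f} arcsec, Hubble ≈ {hst:.4f} arcsec, ratio ≈ {hst / gmt:.1f}x')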
The casting of the GMT's primary mirrors began in 2005, and construction at the site started in 2015. By 2023, all seven primary mirrors had been cast, the first of seven adaptive secondary mirrors was under construction, and the telescope mount was in the manufacturing stage. Other subsystems of the telescope were in the final stages of design.
The project, with an estimated cost of USD $2 billion, is being developed by the GMTO Corporation, a consortium of research institutions from seven countries: Australia, Brazil, Chile, Israel, South Korea, Taiwan, and the United States.
Site
The telescope is located at Las Campanas Observatory, which is also home to the Magellan Telescopes. The observatory is situated approximately north-northeast of La Serena, and south of Copiapó, at an altitude of . The site has been owned by the Carnegie Institution for Science since 1960.
Las Campanas was selected as the location for the GMT due to its exceptional astronomical seeing conditions and clear weather throughout much of the year. The sparse population in the surrounding Atacama Desert, combined with favorable geographical conditions, ensures minimal atmospheric and light pollution. This makes the area one of the best locations on Earth for long-term astronomical observation. The observatory's southern hemisphere location also provides access to significant astronomical targets, including the galactic center of the Milky Way, the nearest supermassive black hole (Sagittarius A*), the nearest star to the Sun (Proxima Centauri), the Magellanic Clouds, and numerous nearby galaxies and exoplanets.
Design and status
The Giant Magellan Telescope’s Gregorian design will produce the highest possible image resolution of the universe over the widest field of view with only two light collecting surfaces, making it the most optically proficient of all extremely large telescopes in the 30-meter-class.
Table: Performance Specifications
Site preparation began with the first blast to level the mountain peak on March 23, 2012. In November 2015, construction was started at the site, with a ground-breaking ceremony. In January 2018, WSP was awarded the contract to manage construction of the Giant Magellan Telescope.
The casting of the first mirror, in a rotating furnace, was completed on November 3, 2005. A third segment was cast in August 2013, the fourth in September 2015, the fifth in 2017, the sixth in 2021, and the last in 2023.
Polishing of the first mirror was completed in November 2012.
Ingersoll Machine Tools finished constructing a manufacturing facility to manufacture the Giant Magellan Telescope mount in Rockford, Illinois in December 2021. As of 2022, construction of the telescope mount was underway. The structure is expected to be delivered to Chile at the end of 2025.
Enclosure
The Giant Magellan Telescope enclosure is a 65-meter-tall structure that shelters the telescope’s mirrors and components from the extreme weather and earthquakes in the Atacama Desert, Chile. The 4,800-ton enclosure can complete a full rotation in a little more than three minutes and is designed with a closed-cycle forced-air convection system to maintain a thermal equilibrium within the telescope enclosure and reduce ambient thermal gradients across the primary mirror surface.
The enclosure design provides the telescope pier with a seismic isolation system that can survive the strongest earthquakes expected over the 50-year lifetime of the observatory and will allow the telescope to quickly return to operations after the more frequent, but less intense seismic events that are experienced several times per month.
In March 2022, engineering and architecture firm IDOM was awarded the contract to finalize the telescope’s enclosure design by 2024.
Telescope Mount
The telescope mount structure is a 39-meter-tall alt-azimuth design that will stand on a pier 22 meters in diameter. The structure will weigh 1,800 tons without mirrors and instruments; with mirrors and instruments, it will weigh 2,100 tons. This structure will float on a film of oil (50 microns thick), supported by a number of hydrostatic bearings that allow the telescope mount to glide frictionlessly in three degrees of freedom.
In October 2019, GMTO Corporation announced the signing of a contract with German company MT Mechatronics (subsidiary of OHB SE) and Illinois-based Ingersoll Machine Tools, to design, build and install the Giant Magellan Telescope’s structure. Ingersoll Machine Tools finished constructing a 40,000 square foot facility to manufacture the Giant Magellan Telescope mount in Rockford, Illinois in December 2021. As of 2022, construction of the telescope mount was underway and is expected to be completed in 2025.
The telescope mount consists of seven “cells” that hold and protect the telescope’s 18-ton primary mirrors. The mirror support system does not have a traditional internal load-carrying frame. Instead, the strength comes from its unique shape and external shell. This allows the telescope mount to have a compact and lightweight design for its size. It also makes the telescope extremely stiff and stable so that it can resist image quality interruptions from wind and mechanical vibrations.
The “cell” primary mirror support system contains “active optics” with pneumatic actuators that will push on the back of the primary mirrors to correct for the effects of gravity and temperature variations on the seven, 8.4 meter diameter primary mirrors. In addition, fourteen air handler units using CO2 based refrigeration – the first system of its kind used for telescopes – are mounted to the interior of the mirror support system to circulate the air.
A closed-cycle forced-air convection system is used to maintain a thermal equilibrium within the telescope enclosure and reduce thermal gradients across the primary mirror surface.
As a precursor to the fabrication of the seven mirror support systems, a full-scale prototype has also been built to validate design decisions and demonstrate the performance.
In April 2023, OHB Italia S.p.A. finished manufacturing and testing the first of seven mirror covers for the Giant Magellan. In just over two minutes, the covers will retract in unison to protect the world’s largest mirrors when not in use.
Primary mirrors
The telescope will use seven of the world's largest mirrors as primary mirror segments, each 8.4 meters in diameter. These segments will be arranged with one mirror in the center and the other six arranged symmetrically around it. The challenge is that the outer six mirror segments will be off-axis, and although identical to each other, will not be individually radially symmetrical, necessitating a modification of the usual polishing and testing procedures.
The mirrors are being constructed by the University of Arizona's Steward Observatory Richard F. Caris Mirror Lab.
The casting of each mirror uses 20 tons of E6 borosilicate glass from the Ohara Corporation of Japan and takes about 12–13 weeks. After being cast, they need to cool for about six months. Each takes approximately 4 years to cast and polish, obtaining a finish that is so smooth that the highest peaks and valleys are smaller than 1/1000 of the width of a human hair.
As the first mirror was an off-axis segment, a wide array of new optical tests and laboratory infrastructure had to be developed to polish it.
The intention is to build seven identical off-axis mirrors, so that a spare is available to substitute for a segment being recoated, a 1–2 week (per segment) process required every 1–2 years. While the complete telescope will use seven mirrors, it is planned to begin operation with four mirrors.
Segments 1–3 are complete. Segments 4–6 are undergoing polishing and testing. Segment 7 was planned for casting in 2023.
The primary mirror array will have a focal ratio (focal length divided by diameter) of f/0.71. For an individual segment – one third that diameter – this results in a focal ratio of f/2.14. The overall focal ratio of the complete telescope will be f/8 and the optical prescription is an aplanatic Gregorian telescope. Like all modern large telescopes it will make use of adaptive optics.
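These focal ratios are mutually consistent, as a quick check shows (illustrative; the 8.4 m segment diameter is the figure quoted in the mount section above). The near-identical focal lengths reflect the fact that the segments are pieces of a single parent optical surface.

full_diameter_m = 25.4
segment_diameter_m = 8.4

primary_focal_length_m = 0.71 * full_diameter_m      # f/0.71 for the whole array
segment_focal_length_m = 2.14 * segment_diameter_m   # f/2.14 for one segment

print(primary_focal_length_m, segment_focal_length_m)   # both about 18.0 m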
Scientists expect very high quality images due to the very large aperture and advanced adaptive optics. Image quality is projected over a 20 arcminute field of view, correctable from 0–20 arcminutes. The images will be sharp enough to resolve the torch engraved on a U.S. dime from nearly 160 kilometers (100 miles) away, and their sharpness is expected to exceed that of the Hubble Space Telescope.
The Carnegie Observatories office in Pasadena has an outline of the Giant Magellan primary mirror array painted in its parking lot. It is easily visible in satellite imagery at .
Secondary mirrors and adaptive optics
The Giant Magellan Telescope’s Adaptive Secondary Mirror consists of seven segments about 1.1 meters in diameter. They are deformable “adaptive optics” mirrors tasked with correcting the atmospheric distortion of the light gathered by the telescope. The Adaptive Secondary Mirrors consist of a thin sheet of glass that is bonded to more than 7000 independently controlled voice coil actuators. Each segment can deform/reshape their 2-millimeter-thick surface 2,000 times per second to correct for the optical blurring effect of Earth’s atmosphere.
The first segment is under construction as of August 2022 and will be completed in 2024.
The Giant Magellan Telescope will have three modes of adaptive optics.
Ground Layer Adaptive Optics (GLAO): The Gregorian design and integrated adaptive optics system allow ground layer atmospheric turbulence to be corrected over a wide field of view, improving natural seeing image quality by 20–50% from the visible to near-infrared (with the greatest improvements at red wavelengths). The Giant Magellan uses wavefront sensors that allow any instrument to receive GLAO corrected images.
Natural Guide Star Adaptive Optics (NGAO): NGAO uses a single natural guide star (bright) to deliver diffraction limited, high Strehl ratio images (>75 % Strehl in the K band) at wavelengths from 0.6 μm into the mid-infrared over a field of view a few arcseconds in diameter.
Laser Tomography Adaptive Optics (LTAO): LTAO uses six laser guide stars and a single natural guide star (faint) to extend diffraction-limited performance to nearly the full sky with moderate Strehl ratio (>30 % Strehl in the H band) at infrared wavelengths over a much wider field of view than NGAO (~20” at 1μm) and is available to any instrument designed to use this mode.
The Giant Magellan is the only 30-meter class telescope with ground layer adaptive optics over a full field of view.
Science instruments
The Giant Magellan Telescope's Gregorian design can accommodate up to 10 visible to mid-infrared science instruments, from wide field imagers and spectrographs that reach hundreds of objects at one time, to high-resolution imagers and spectrographs that can study exoplanets and even find biosignatures. Each science instrument is designed to take advantage of the telescope’s four observing modes.
The telescope will have an advanced fiber-optic system that uses tiny robotic positioners to expand the capabilities of the spectrographs by allowing them to access the highest resolution of all telescopes in the 30-meter class over a full field of view of 20 arcminutes. Using this system, it is possible to observe multiple targets over the entire field with one or more of the spectrographs. This enables the telescope to see fainter objects with unrivaled resolution and sensitivity. The advantage is extremely powerful for spectroscopy and the precise measurements of distances, dynamics, chemistry, and masses of celestial objects in deep space.
GMT-Consortium Large Earth Finder (G-CLEF) – an optical band echelle spectrograph
GMT Multi-object Astronomical and Cosmological Spectrograph (GMACS) – a visible multi-object spectrograph
GMT Integral-Field Spectrograph (GMTIFS) – a near-IR IFU and AO imager
GMT Near-IR Spectrograph (GMTNIRS) – a near-IR spectrograph
The Many Instrument Fiber System (MANIFEST) – a facility fiber system
Additionally the Commissioning Camera (ComCam) will be used to validate the Ground Layer Adaptive Optics performance of the GMT facility Adaptive Optics System.
Science drivers for the Giant Magellan Telescope include studying planets in the habitable zones of their parent star in the search for life; the nature of dark matter, dark energy, gravity, and many other aspects of fundamental physics; the formation and evolution of the first stars and galaxies; and how black holes and galaxies co-evolve.
Comparison
The Giant Magellan Telescope is one of a new class of telescopes called extremely large telescopes with each design being much larger than existing ground-based telescopes. Other planned extremely large telescopes include the Extremely Large Telescope and the Thirty Meter Telescope.
Organizations
The Giant Magellan Telescope is the work of the GMTO Corporation, an international consortium of research institutions representing seven countries: Australia, Brazil, Chile, Israel, South Korea, Taiwan, and the United States. The GMTO Corporation is a nonprofit 501(c)(3) organization with offices in Pasadena, California and Santiago, Chile. The organization has an established relationship with the Chilean government, having been recognized through a presidential decree as an “international organization” in Chile. The telescope operates under a cooperative agreement with the University of Chile, granting 10% of the observing time to astronomers working at Chilean institutions. The following organizations are members of the consortium developing the telescope.
Academia Sinica Institute of Astronomy and Astrophysics
University of Arizona
Arizona State University
Astronomy Australia Limited
Australian National University
Carnegie Institution for Science
FAPESP
Harvard University
Korea Astronomy and Space Science Institute (한국천문연구원) (KASI)
Northwestern University
Smithsonian Institution
Texas A&M University
University of Texas at Austin
University of Chicago
Weizmann Institute of Science
The Giant Magellan Telescope is a part of the US Extremely Large Telescope Program (US-ELTP), as of 2018. The US-ELTP will provide US-based astronomers with U.S. National Science Foundation funded all-sky observing access to both the Giant Magellan Telescope and Thirty Meter Telescope. The program was ranked as the highest ground-based priority in the National Academy of Sciences Astro2020 Decadal Survey, which noted that the US-ELTP will provide “observational capabilities unmatched in space or the ground and open an enormous discovery space for new observations and discoveries not yet anticipated."
| Technology | Ground-based observatories | null |
1352320 | https://en.wikipedia.org/wiki/Interplanetary%20medium | Interplanetary medium | The interplanetary medium (IPM) or interplanetary space consists of the mass and energy which fills the Solar System, and through which all the larger Solar System bodies, such as planets, dwarf planets, asteroids, and comets, move. The IPM stops at the heliopause, outside of which the interstellar medium begins. Before 1950, interplanetary space was widely considered to either be an empty vacuum, or consisting of "aether".
Composition and physical characteristics
The interplanetary medium includes interplanetary dust, cosmic rays, and hot plasma from the solar wind. The density of the interplanetary medium is very low, decreasing in inverse proportion to the square of the distance from the Sun. It is variable, and may be affected by magnetic fields and events such as coronal mass ejections. Typical particle densities in the interplanetary medium are about 5–40 particles/cm³, but exhibit substantial variation. In the vicinity of the Earth, it contains about 5 particles/cm³, but values as high as 100 particles/cm³ have been observed.
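Combining the inverse-square scaling with the quoted near-Earth value gives rough density estimates at other heliocentric distances (an illustrative sketch; the planetary distances are assumed round numbers):

def density_per_cm3(r_au, n_earth=5.0):
    # Inverse-square falloff of solar wind particle density, normalized to ~5 per cm^3 at 1 AU
    return n_earth / r_au**2

for name, r in [("Mercury", 0.39), ("Earth", 1.0), ("Jupiter", 5.2), ("Neptune", 30.0)]:
    print(f"{name}: ~{density_per_cm3(r):.2f} particles/cm^3")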
The temperature of the interplanetary medium varies through the solar system. Joseph Fourier estimated that the interplanetary medium must have temperatures comparable to those observed at Earth's poles, but on faulty grounds: lacking modern estimates of atmospheric heat transport, he saw no other means to explain the relative consistency of Earth's climate. A very hot interplanetary medium remained a minority position among geophysicists as late as 1959, when Chapman proposed a temperature on the order of 10,000 K, but observation of the exosphere from low Earth orbit soon contradicted his position. In fact, both Fourier's and Chapman's final predictions were correct: because the interplanetary medium is so rarefied, it does not exhibit thermodynamic equilibrium. Instead, different components have different temperatures. The solar wind exhibits temperatures consistent with Chapman's estimate in cislunar space, and dust particles near Earth's orbit exhibit temperatures , averaging about . In general, the solar wind temperature decreases proportional to the inverse-square of the distance to the Sun; the temperature of the dust decreases proportional to the inverse cube root of the distance. For dust particles within the asteroid belt, typical temperatures range from at 2.2 AU down to at 3.2 AU.
Since the interplanetary medium is a plasma, or gas of ions, the interplanetary medium has the characteristics of a plasma, rather than a simple gas. For example, it carries the Sun's magnetic field with it, is highly electrically conductive (resulting in the heliospheric current sheet), forms plasma double layers where it comes into contact with a planetary magnetosphere or at the heliopause, and exhibits filamentation (such as in aurorae).
The plasma in the interplanetary medium is also responsible for the strength of the Sun's magnetic field at the orbit of the Earth being over 100 times greater than originally anticipated. If space were a vacuum, then the Sun's tesla magnetic dipole field would reduce with the cube of the distance to about tesla. But satellite observations show that it is about 100 times greater at around tesla. Magnetohydrodynamic (MHD) theory predicts that the motion of a conducting fluid (e.g., the interplanetary medium) in a magnetic field induces electric currents which in turn generate magnetic fields, and in this respect it behaves like an MHD dynamo.
Extent of the interplanetary medium
The outer edge of the heliosphere is the boundary between the flow of the solar wind and the interstellar medium. This boundary is known as the heliopause and is believed to be a fairly sharp transition of the order of 110 to 160 astronomical units from the Sun. The interplanetary medium thus fills the roughly spherical volume contained within the heliopause.
Interaction with planets
How the interplanetary medium interacts with planets depends on whether they have magnetic fields or not. Bodies such as the Moon have no magnetic field and the solar wind can impact directly on their surface. Over billions of years, the lunar regolith has acted as a collector for solar wind particles, and so studies of rocks from the lunar surface can be valuable in studies of the solar wind.
High-energy particles from the solar wind impacting on the lunar surface also cause it to emit faintly at X-ray wavelengths.
Planets with their own magnetic field, such as the Earth and Jupiter, are surrounded by a magnetosphere within which their magnetic field is dominant over the Sun's. This disrupts the flow of the solar wind, which is channelled around the magnetosphere. Material from the solar wind can "leak" into the magnetosphere, causing aurorae and also populating the Van Allen radiation belts with ionised material.
Observable phenomena of the interplanetary medium
The interplanetary medium is responsible for several optical phenomena visible from Earth. Zodiacal light is a broad band of faint light sometimes seen after sunset and before sunrise, stretched along the ecliptic and appearing brightest near the horizon. This glow is caused by sunlight scattered by dust particles in the interplanetary medium between Earth and the Sun.
A similar phenomenon centered at the antisolar point, gegenschein is visible in a naturally dark, moonless night sky. Much fainter than zodiacal light, this effect is caused by sunlight backscattered by dust particles beyond Earth's orbit.
History
The term "interplanetary" appears to have been first used in print in 1691 by the scientist Robert Boyle: "The air is different from the æther (or vacuum) in the... interplanetary spaces" Boyle Hist. Air. In 1898, American astronomer Charles Augustus Young wrote: "Inter-planetary space is a vacuum, far more perfect than anything we can produce by artificial means..." (The Elements of Astronomy, Charles Augustus Young, 1898).
The notion that space is either a vacuum filled with an "aether" or simply a cold, dark vacuum persisted until the 1950s. Tufts University Professor of astronomy, Kenneth R. Lang, writing in 2000, noted, "Half a century ago, most people visualized our planet as a solitary sphere traveling in a cold, dark vacuum of space around the Sun". In 2002, Akasofu stated "The view that interplanetary space is a vacuum into which the Sun intermittently emitted corpuscular streams was changed radically by Ludwig Biermann (1951, 1953) who proposed on the basis of comet tails, that the Sun continuously blows its atmosphere out in all directions at supersonic speed" (Syun-Ichi Akasofu, Exploring the Secrets of the Aurora, 2002).
| Physical sciences | Solar System | null |
1352428 | https://en.wikipedia.org/wiki/Modulo | Modulo | In computing, the modulo operation returns the remainder or signed remainder of a division, after one number is divided by another, called the modulus of the operation.
Given two positive numbers a and n, a modulo n (often abbreviated as a mod n) is the remainder of the Euclidean division of a by n, where a is the dividend and n is the divisor.
For example, the expression "5 mod 2" evaluates to 1, because 5 divided by 2 has a quotient of 2 and a remainder of 1, while "9 mod 3" would evaluate to 0, because 9 divided by 3 has a quotient of 3 and a remainder of 0.
Although typically performed with a and n both being integers, many computing systems now allow other types of numeric operands. The range of values for an integer modulo operation of n is 0 to n − 1. a mod 1 is always 0.
When exactly one of a or n is negative, the basic definition breaks down, and programming languages differ in how these values are defined.
Variants of the definition
In mathematics, the result of the modulo operation is an equivalence class, and any member of the class may be chosen as representative; however, the usual representative is the least positive residue, the smallest non-negative integer that belongs to that class (i.e., the remainder of the Euclidean division). However, other conventions are possible. Computers and calculators have various ways of storing and representing numbers; thus their definition of the modulo operation depends on the programming language or the underlying hardware.
In nearly all computing systems, the quotient q and the remainder r of a divided by n satisfy the following conditions:
q is an integer,
a = nq + r,    (1)
|r| < |n|.
This still leaves a sign ambiguity if the remainder is non-zero: two possible choices for the remainder occur, one negative and the other positive; that choice determines which of the two consecutive quotients must be used to satisfy equation (1). In number theory, the positive remainder is always chosen, but in computing, programming languages choose depending on the language and the signs of a or n. Standard Pascal and ALGOL 68, for example, give a positive remainder (or 0) even for negative divisors, and some programming languages, such as C90, leave it to the implementation when either of a or n is negative (see the table under § In programming languages for details). Some systems leave a modulo 0 undefined, though others define it as a.
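A minimal C sketch of this language dependence (the values in the comments assume C99's truncating division, discussed above; a floored or Euclidean definition would give different results for the mixed-sign cases):
#include <stdio.h>

int main(void) {
    /* C99 defines integer division as truncating toward zero,
       so the remainder takes the sign of the dividend. */
    printf("%d\n",  7 %  3);   /* prints  1 */
    printf("%d\n", -7 %  3);   /* prints -1; a floored definition would give 2 */
    printf("%d\n",  7 % -3);   /* prints  1; a floored definition would give -2 */
    printf("%d\n", -7 % -3);   /* prints -1 */
    return 0;
}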
If both the dividend and divisor are positive, then the truncated, floored, and Euclidean definitions agree.
If the dividend is positive and the divisor is negative, then the truncated and Euclidean definitions agree.
If the dividend is negative and the divisor is positive, then the floored and Euclidean definitions agree.
If both the dividend and divisor are negative, then the truncated and floored definitions agree.
As described by Leijen,
However, truncated division satisfies the identity (−a)/b = −(a/b) = a/(−b).
Notation
Some calculators have a mod function button, and many programming languages have a similar function, expressed as mod(a, n), for example. Some also support expressions that use "%", "mod", or "Mod" as a modulo or remainder operator, such as a % n or a mod n.
For environments lacking a similar function, any of the three definitions above can be used.
Common pitfalls
When the result of a modulo operation has the sign of the dividend (truncated definition), it can lead to surprising mistakes.
For example, to test if an integer is odd, one might be inclined to test if the remainder by 2 is equal to 1:
bool is_odd(int n) {
return n % 2 == 1;
}
But in a language where modulo has the sign of the dividend, that is incorrect, because when n (the dividend) is negative and odd, n mod 2 returns −1, and the function returns false.
One correct alternative is to test that the remainder is not 0 (because remainder 0 is the same regardless of the signs):
bool is_odd(int n) {
return n % 2 != 0;
}
Or with the binary arithmetic:
bool is_odd(int n) {
return n & 1;
}
Performance issues
Modulo operations might be implemented such that a division with a remainder is calculated each time. For special cases, on some hardware, faster alternatives exist. For example, the modulo of powers of 2 can alternatively be expressed as a bitwise AND operation (assuming x is a positive integer, or using a non-truncating definition):
x % 2^n == x & (2^n - 1)
Examples (valid for non-negative x): x % 2 == x & 1, x % 4 == x & 3, x % 8 == x & 7.
In devices and software that implement bitwise operations more efficiently than modulo, these alternative forms can result in faster calculations.
Compiler optimizations may recognize expressions of the form x % c where c is a power of two and automatically implement them as x & (c - 1), allowing the programmer to write clearer code without compromising performance. This simple optimization is not possible for languages in which the result of the modulo operation has the sign of the dividend (including C), unless the dividend is of an unsigned integer type. This is because, if the dividend is negative, the modulo will be negative, whereas x & (c - 1) will always be positive. For these languages, the equivalence x % 2^n == x < 0 ? x | ~(2^n - 1) : x & (2^n - 1) has to be used instead, expressed using bitwise OR, NOT and AND operations.
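As a hedged illustration of the power-of-two identity, the following check covers only non-negative (unsigned) dividends, where the simple mask is safe; the modulus 8 and the loop bound are arbitrary choices for the example:
#include <assert.h>

int main(void) {
    /* For unsigned x, x % 8 equals x & 7, since 8 == 2^3 and 7 == 2^3 - 1.
       For negative signed dividends the plain mask is not equivalent to %,
       as explained above. */
    for (unsigned int x = 0; x < 1000; ++x)
        assert(x % 8 == (x & 7));
    return 0;
}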
Optimizations for general constant-modulus operations also exist by calculating the division first using the constant-divisor optimization.
Properties (identities)
Some modulo operations can be factored or expanded similarly to other mathematical operations. This may be useful in cryptography proofs, such as the Diffie–Hellman key exchange. The properties involving multiplication, division, and exponentiation generally require that a and b are integers.
Identity:
(a mod n) mod n = a mod n.
n^x mod n = 0 for all positive integer values of x.
If p is a prime number which is not a divisor of b, then a·b^(p−1) mod p = a mod p, due to Fermat's little theorem.
Inverse:
[(−a mod n) + (a mod n)] mod n = 0.
b^(−1) mod n denotes the modular multiplicative inverse, which is defined if and only if b and n are relatively prime, which is the case when the left hand side is defined: [(b^(−1) mod n)(b mod n)] mod n = 1.
Distributive (a small numeric check of these two identities is sketched after this list):
(a + b) mod n = [(a mod n) + (b mod n)] mod n.
a·b mod n = [(a mod n)·(b mod n)] mod n.
Division (definition): (a/b) mod n = [(a mod n)·(b^(−1) mod n)] mod n, when the right hand side is defined (that is, when b and n are coprime), and undefined otherwise.
Inverse multiplication: [(a·b mod n)·(b^(−1) mod n)] mod n = a mod n.
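The sketch below is a quick numeric check of the two distributive identities, using non-negative operands so that the sign conventions discussed earlier do not matter; the particular values of a, b, and n are arbitrary:
#include <assert.h>

int main(void) {
    long a = 123, b = 456, n = 7;
    /* (a + b) mod n == [(a mod n) + (b mod n)] mod n */
    assert((a + b) % n == ((a % n) + (b % n)) % n);
    /* a*b mod n == [(a mod n)(b mod n)] mod n */
    assert((a * b) % n == ((a % n) * (b % n)) % n);
    return 0;
}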
In programming languages
In addition, many computer systems provide a divmod functionality, which produces the quotient and the remainder at the same time. Examples include the x86 architecture's IDIV instruction, the C programming language's div() function, and Python's divmod() function.
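For instance, C's ldiv() from <stdlib.h> returns the truncated quotient and remainder in a single call; the operands below are arbitrary and chosen only to show the mixed-sign behavior:
#include <stdio.h>
#include <stdlib.h>

int main(void) {
    ldiv_t qr = ldiv(-7L, 3L);  /* truncated division: quot = -2, rem = -1 */
    printf("quot = %ld, rem = %ld\n", qr.quot, qr.rem);
    return 0;
}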
Generalizations
Modulo with offset
Sometimes it is useful for the result of a modulo n to lie not between 0 and n − 1, but between some number d and d + n − 1. In that case, d is called an offset and d = 1 is particularly common.
There does not seem to be a standard notation for this operation, so let us tentatively use a mod_d n. We thus have the following definition: x = a mod_d n just in case d ≤ x ≤ d + n − 1 and x mod n = a mod n. Clearly, the usual modulo operation corresponds to zero offset: a mod n = a mod_0 n.
The operation of modulo with offset is related to the floor function as follows:
a mod_d n = a − n·⌊(a − d)/n⌋.
To see this, let . We first show that . It is in general true that for all integers ; thus, this is true also in the particular case when ; but that means that , which is what we wanted to prove. It remains to be shown that . Let and be the integers such that with (see Euclidean division). Then , thus . Now take and add to both sides, obtaining . But we've seen that , so we are done.
The modulo with offset is implemented in Mathematica as Mod[a, n, d].
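A small C sketch of modulo with offset, following the floor-based relation above; the name mod_offset is ours, and floating-point floor() is used only for brevity (an all-integer version could reuse a floored-division helper such as the ldivF listing later in this section):
#include <math.h>

long mod_offset(long a, long n, long d) {
    /* returns the representative of a (mod n) lying in [d, d + n - 1], for n > 0 */
    return a - n * (long)floor((double)(a - d) / (double)n);
}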
Implementing other modulo definitions using truncation
Despite the mathematical elegance of Knuth's floored division and Euclidean division, it is generally much more common to find a truncated division-based modulo in programming languages. Leijen provides the following algorithms for calculating the two divisions given a truncated integer division:
/* Euclidean and Floored divmod, in the style of C's ldiv() */
typedef struct {
/* This structure is part of the C stdlib.h, but is reproduced here for clarity */
long int quot;
long int rem;
} ldiv_t;
/* Euclidean division */
inline ldiv_t ldivE(long numer, long denom) {
/* The C99 and C++11 languages define both of these as truncating. */
long q = numer / denom;
long r = numer % denom;
if (r < 0) {
if (denom > 0) {
q = q - 1;
r = r + denom;
} else {
q = q + 1;
r = r - denom;
}
}
return (ldiv_t){.quot = q, .rem = r};
}
/* Floored division */
inline ldiv_t ldivF(long numer, long denom) {
long q = numer / denom;
long r = numer % denom;
if ((r > 0 && denom < 0) || (r < 0 && denom > 0)) {
q = q - 1;
r = r + denom;
}
return (ldiv_t){.quot = q, .rem = r};
}
For both cases, the remainder can be calculated independently of the quotient, but not vice versa. The operations are combined here to save screen space, as the logical branches are the same.
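If only the Euclidean remainder is needed, it can be computed on its own in the same style; this is a sketch using the same truncated % as the listings above, and the name modE is merely illustrative:
inline long modE(long numer, long denom) {
    long r = numer % denom;
    if (r < 0)
        r += (denom > 0) ? denom : -denom;
    return r;
}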
| Mathematics | Basics | null |
1352555 | https://en.wikipedia.org/wiki/Wheel%20and%20axle | Wheel and axle | The wheel and axle is a simple machine, consisting of a wheel attached to a smaller axle so that these two parts rotate together, in which a force is transferred from one to the other. The wheel and axle can be viewed as a version of the Lever, with a drive force applied tangentially to the perimeter of the wheel, and a load force applied to the axle supported in a bearing, which serves as a fulcrum.
History
The Halaf culture of 6500–5100 BCE has been credited with the earliest depiction of a wheeled vehicle, but this is doubtful as there is no evidence of Halafians using either wheeled vehicles or even pottery wheels.
One of the first applications of the wheel to appear was the potter's wheel, used by prehistoric cultures to fabricate clay pots. The earliest type, known as "tournettes" or "slow wheels", were known in the Middle East by the 5th millennium BCE. One of the earliest examples was discovered at Tepe Pardis, Iran, and dated to 5200–4700 BCE. These were made of stone or clay and secured to the ground with a peg in the center, but required significant effort to turn. True potter's wheels, which are freely-spinning and have a wheel and axle mechanism, were developed in Mesopotamia (Iraq) by 4200–4000 BCE. The oldest surviving example, which was found in Ur (modern day Iraq), dates to approximately 3100 BCE.
Evidence of wheeled vehicles appeared by the late 4th millennium BCE. Depictions of wheeled wagons found on clay tablet pictographs at the Eanna district of Uruk, in the Sumerian civilization of Mesopotamia, are dated between 3700–3500 BCE. In the second half of the 4th millennium BCE, evidence of wheeled vehicles appeared near-simultaneously in the Northern Caucasus (Maykop culture) and Eastern Europe (Cucuteni–Trypillian culture). Depictions of a wheeled vehicle appeared between 3500 and 3350 BCE in the Bronocice clay pot excavated in a Funnelbeaker culture settlement in southern Poland. In nearby Olszanica, a 2.2 m wide door was constructed for wagon entry; this barn was 40 m long and had 3 doors. Surviving evidence of a wheel–axle combination, from Stare Gmajne near Ljubljana in Slovenia (Ljubljana Marshes Wooden Wheel), is dated within two standard deviations to 3340–3030 BCE, the axle to 3360–3045 BCE. Two types of early Neolithic European wheel and axle are known; a circumalpine type of wagon construction (the wheel and axle rotate together, as in Ljubljana Marshes Wheel), and that of the Baden culture in Hungary (axle does not rotate). They both are dated to c. 3200–3000 BCE. Historians believe that there was a diffusion of the wheeled vehicle from the Near East to Europe around the mid-4th millennium BCE.
An early example of a wooden wheel and its axle was found in 2002 at the Ljubljana Marshes some 20 km south of Ljubljana, the capital of Slovenia. According to radiocarbon dating, it is between 5,100 and 5,350 years old. The wheel was made of ash and oak and had a radius of 70 cm and the axle was 120 cm long and made of oak.
In China, the earliest evidence of spoked wheels comes from Qinghai in the form of two wheel hubs from a site dated between 2000 and 1500 BCE.
In Roman Egypt, Hero of Alexandria identified the wheel and axle as one of the simple machines used to lift weights. This is thought to have been in the form of the windlass which consists of a crank or pulley connected to a cylindrical barrel that provides mechanical advantage to wind up a rope and lift a load such as a bucket from the well.
The wheel and axle was identified as one of six simple machines by Renaissance scientists, drawing from Greek texts on technology.
Mechanical advantage
The simple machine called a wheel and axle refers to the assembly formed by two disks, or cylinders, of different diameters mounted so they rotate together around the same axis. The thin rod which needs to be turned is called the axle and the wider object fixed to the axle, on which we apply force is called the wheel. A tangential force applied to the periphery of the large disk can exert a larger force on a load attached to the axle, achieving mechanical advantage. When used as the wheel of a wheeled vehicle the smaller cylinder is the axle of the wheel, but when used in a windlass, winch, and other similar applications (see medieval mining lift to right) the smaller cylinder may be separate from the axle mounted in the bearings. It cannot be used separately.
Assuming the wheel and axle does not dissipate or store energy, that is it has no friction or elasticity, the power input by the force applied to the wheel must equal the power output at the axle. As the wheel and axle system rotates around its bearings, points on the circumference, or edge, of the wheel move faster than points on the circumference, or edge, of the axle. Therefore, a force applied to the edge of the wheel must be less than the force applied to the edge of the axle, because power is the product of force and velocity.
Let a and b be the distances from the center of the bearing to the edges of the wheel A and the axle B. If the input force FA is applied to the edge of the wheel A and the force FB at the edge of the axle B is the output, then the ratio of the velocities of points A and B is given by a/b, so the ratio of the output force to the input force, or mechanical advantage, is given by
MA = FB/FA = a/b.
The mechanical advantage of a simple machine like the wheel and axle is computed as the ratio of the resistance to the effort. The larger the ratio the greater the multiplication of force (torque) created or distance achieved. By varying the radii of the axle and/or wheel, any amount of mechanical advantage may be gained. In this manner, the size of the wheel may be increased to an inconvenient extent. In this case a system or combination of wheels (often toothed, that is, gears) are used. As a wheel and axle is a type of lever, a system of wheels and axles is like a compound lever.
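As a brief worked illustration (the radii and the effort force below are invented for the example, not taken from the text): a wheel of radius a = 60 cm fixed to an axle of radius b = 10 cm gives
MA = a/b = 60 cm / 10 cm = 6,
so, ideally, a 50 N effort applied at the wheel rim can exert 6 × 50 N = 300 N at the axle.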
On a powered wheeled vehicle the transmission exerts a force on the axle which has a smaller radius than the wheel. The mechanical advantage is therefore much less than 1. The wheel and axle of a car are therefore not representative of a simple machine (whose purpose is to increase the force). The friction between wheel and road is actually quite low, so even a small force exerted on the axle is sufficient. The actual advantage lies in the large rotational speed at which the axle is rotating thanks to the transmission.
Ideal mechanical advantage
The mechanical advantage of a wheel and axle with no friction is called the ideal mechanical advantage (IMA). It is calculated with the following formula:
IMA = FB/FA = a/b.
Actual mechanical advantage
All actual wheels have friction, which dissipates some of the power as heat. The actual mechanical advantage (AMA) of a wheel and axle is calculated with the following formula:
AMA = η·(a/b),
where
η is the efficiency of the wheel, the ratio of power output to power input.
| Technology | Basics_8 | null |
1353433 | https://en.wikipedia.org/wiki/Phalanx%20bone | Phalanx bone | The phalanges (singular: phalanx) are digital bones in the hands and feet of most vertebrates. In primates, the thumbs and big toes have two phalanges while the other digits have three phalanges. The phalanges are classed as long bones.
Structure
The phalanges are the bones that make up the fingers of the hand and the toes of the foot. There are 56 phalanges in the human body, with fourteen on each hand and foot. Three phalanges are present on each finger and toe, with the exception of the thumb and big toe, which possess only two. The middle and far phalanges of the fifth toes are often fused together (symphalangism). The phalanges of the hand are commonly known as the finger bones. The phalanges of the foot differ from the hand in that they are often shorter and more compressed, especially in the proximal phalanges, those closest to the torso.
A phalanx is named according to whether it is proximal, middle, or distal and its associated finger or toe. The proximal phalanges are those that are closest to the hand or foot. In the hand, the prominent, knobby ends of the phalanges are known as knuckles. The proximal phalanges join with the metacarpals of the hand or metatarsals of the foot at the metacarpophalangeal joint or metatarsophalangeal joint. The intermediate phalanx is not only intermediate in location, but usually also in size. The thumb and large toe do not possess a middle phalanx. The distal phalanges are the bones at the tips of the fingers or toes. The proximal, intermediate, and distal phalanges articulate with one another through interphalangeal joints of hand and interphalangeal joints of the foot.
Bone anatomy
Each phalanx consists of a central part, called the body, and two extremities.
The body is flat on either side, concave on the palmar surface, and convex on the dorsal surface. Its sides are marked with rough areas giving attachment to fibrous sheaths of flexor tendons. It tapers from above downwards.
The proximal extremities of the bones of the first row present oval, concave articular surfaces, broader from side to side than from front to back. The proximal extremity of each of the bones of the second and third rows presents a double concavity separated by a median ridge.
The distal extremities are smaller than the proximal, and each ends in two condyles (knuckles) separated by a shallow groove; the articular surface extends farther on the palmar than on the dorsal surface, a condition best marked in the bones of the first row.
In the foot, the proximal phalanges have a body that is compressed from side to side, convex above, and concave below. The base is concave, and the head presents a trochlear surface for articulation with the second phalanx. The middle phalanges are remarkably small and short, but rather broader than the proximal. The distal phalanges, as compared with the distal phalanges of the finger, are smaller and are flattened from above downward; each presents a broad base for articulation with the corresponding bone of the second row, and an expanded distal extremity for the support of the nail and end of the toe.
Distal phalanx
In the hand, the distal phalanges are flat on their palmar surface, small, and with a roughened, elevated surface of horseshoe form on the palmar surface, supporting the finger pulp. The flat, wide expansions found at the tips of the distal phalanges are called "apical tufts". They support the fingertip pads and nails. The phalanx of the thumb has a pronounced insertion for the flexor pollicis longus (asymmetric towards the radial side), an ungual fossa, and a pair of unequal ungual spines (the ulnar being more prominent). This asymmetry is necessary to ensure that the thumb pulp is always facing the pulps of the other digits, an osteological configuration which provides the maximum contact surface with held objects.
In the foot, the distal phalanges are flat on their dorsal surface. It is largest proximally and tapers to the distal end. The proximal part of the phalanx presents a broad base for articulation with the middle phalanx, and an expanded distal extremity for the support of the nail and end of the toe. The phalanx ends in a crescent-shaped rough cap of bone epiphysis — the apical tuft (or ungual tuberosity/process) which covers a larger portion of the phalanx on the volar side than on the dorsal side. Two lateral ungual spines project proximally from the apical tuft. Near the base of the shaft are two lateral tubercles. Between these a V-shaped ridge extending proximally serves for the insertion of the flexor pollicis longus. Another ridge at the base serves for the insertion of the extensor aponeurosis. The flexor insertion is sided by two fossae — the ungual fossa distally and the proximopalmar fossa proximally.
Development
The number of phalanges in animals is often expressed as a "phalangeal formula" that indicates the numbers of phalanges in digits, beginning from the innermost medial or proximal. For example, humans have a 2-3-3-3-3 formula for the hand, meaning that the thumb has two phalanges, whilst the other fingers each have three.
In the distal phalanges of the hand the centres for the bodies appear at the distal extremities of the phalanges, instead of at the middle of the bodies, as in the other phalanges. Moreover, of all the bones of the hand, the distal phalanges are the first to ossify.
Function
The distal phalanges of ungulates carry and shape nails and claws and these in primates are referred to as the ungual phalanges.
History of phalanges
Etymology
The term phalanx or phalanges refers to an ancient Greek army formation in which soldiers stand side by side, several rows deep, like an arrangement of fingers or toes.
Other animals
Most land mammals including humans have a 2-3-3-3-3 formula in both the hands (or paws) and feet. Primitive reptiles usually had the formula 2-3-4-4-5, and this pattern, with some modification, remained in many later reptiles and in the mammal-like reptiles. The phalangeal formula in the flippers of cetaceans (marine mammals) varies widely due to hyperphalangy (the increase in number of phalanx bones in the digits). In humpback whales, for example, the phalangeal formula is 0/2/7/7/3; in pilot whales the formula is 1/10/7/2/1.
In vertebrates, proximal phalanges have a similar placement in the corresponding limbs, be they paw, wing or fin. In many species, they are the longest and thickest phalanx ("finger" bone). The middle phalanx also has a corresponding place in their limbs, whether they be paw, wing, hoof or fin.
The distal phalanges are cone-shaped in most mammals, including most primates, but relatively wide and flat in humans.
Primates
The morphology of the distal phalanges of human thumbs closely reflects an adaptation for a refined precision grip with pad-to-pad contact. This has traditionally been associated with the advent of stone tool-making. However, the intrinsic hand proportions of australopiths and the resemblance between human hands and the short hands of Miocene apes, suggest that human hand proportions are largely plesiomorphic (as found in ancestral species) — in contrast to the derived elongated hand pattern and poorly developed thumb musculature of other extant hominoids.
In Neanderthals, the apical tufts were expanded and more robust than in modern and early upper Paleolithic humans. A proposal that Neanderthal distal phalanges was an adaptation to colder climate (than in Africa) is not supported by a recent comparison showing that in hominins, cold-adapted populations possessed smaller apical tufts than do warm-adapted populations.
In non-human, living primates the apical tufts vary in size, but they are never larger than in humans. Enlarged apical tufts, to the extent they actually reflect expanded digital pulps, may have played a significant role in enhancing friction between the hand and held objects during Neolithic toolmaking.
Among non-human primates phylogenesis and style of locomotion appear to play a role in apical tuft size. Suspensory primates and New World monkeys have the smallest apical tufts, while terrestrial quadrupeds and Strepsirrhines have the largest.
A study of the fingertip morphology of four small-bodied New World monkey species, indicated a correlation between increasing small-branch foraging and reduced flexor and extensor tubercles in distal phalanges and broadened distal parts of distal phalanges, coupled with expanded apical pads and developed epidermal ridges. This suggests that widened distal phalanges were developed in arboreal primates, rather than in quadrupedal terrestrial primates.
Cetaceans
Whales exhibit hyperphalangy. Hyperphalangy is an increase in the number of phalanges beyond the plesiomorphic mammal condition of three phalanges-per-digit. Hyperphalangy was present among extinct marine reptiles -- ichthyosaurs, plesiosaurs, and mosasaurs -- but not other marine mammals, leaving whales as the only marine mammals to develop this characteristic. The evolutionary process continued over time, and a very derived form of hyperphalangy, with six or more phalanges per digit, evolved convergently in rorqual whales and oceanic dolphins, and was likely associated with another wave of signaling within the interdigital tissues.
Other mammals
In ungulates (hoofed mammals) the forelimb is optimized for speed and endurance by a combination of length of stride and rapid step; the proximal forelimb segments are short with large muscles, while the distal segments are elongated with less musculature. In two of the major groups of ungulates, odd-toed and even-toed ungulates, what remain of the "hands" — the metacarpal and phalangeal bones — are elongated to the extent that they serve little use beyond locomotion. The giraffe, the largest even-toed ungulate, has large terminal phalanges and fused metacarpal bones able to absorb the stress from running.
The sloth spends its life hanging upside-down from branches, and has highly specialized third and fourth digits for the purpose. They have short and squat proximal phalanges with much longer terminal phalanges. They have vestigial second and fifth metacarpals, and their palm extends to the distal interphalangeal joints. The arboreal specialization of these terminal phalanges makes it impossible for the sloth to walk on the ground where the animal has to drag its body with its claws.
Additional images
| Biology and health sciences | Skeletal system | Biology |
2757224 | https://en.wikipedia.org/wiki/Multiple%20integral | Multiple integral | In mathematics (specifically multivariable calculus), a multiple integral is a definite integral of a function of several real variables, for instance, f(x, y) or f(x, y, z).
Integrals of a function of two variables over a region in ℝ² (the real-number plane) are called double integrals, and integrals of a function of three variables over a region in ℝ³ (real-number 3D space) are called triple integrals. For repeated antidifferentiation of a single-variable function, see the Cauchy formula for repeated integration.
Introduction
Just as the definite integral of a positive function of one variable represents the area of the region between the graph of the function and the x-axis, the double integral of a positive function of two variables represents the volume of the region between the surface defined by the function (on the three-dimensional Cartesian space where z = f(x, y)) and the plane which contains its domain. If there are more variables, a multiple integral will yield hypervolumes of multidimensional functions.
Multiple integration of a function in n variables f(x_1, x_2, ..., x_n) over a domain D is most commonly represented by nested integral signs in the reverse order of execution (the leftmost integral sign is computed last), followed by the function and integrand arguments in proper order (the integral with respect to the rightmost argument is computed last). The domain of integration is either represented symbolically for every argument over each integral sign, or is abbreviated by a variable at the rightmost integral sign:
∫ ⋯ ∫_D f(x_1, x_2, ..., x_n) dx_1 ⋯ dx_n.
Since the concept of an antiderivative is only defined for functions of a single real variable, the usual definition of the indefinite integral does not immediately extend to the multiple integral.
Mathematical definition
For n > 1, consider a so-called "half-open" n-dimensional hyperrectangular domain T, defined as
T = [a_1, b_1) × [a_2, b_2) × ⋯ × [a_n, b_n) ⊆ ℝⁿ.
Partition each interval [a_j, b_j) into a finite family I_j of non-overlapping subintervals, with each subinterval closed at the left end, and open at the right end.
Then the finite family of subrectangles C given by
C = I_1 × I_2 × ⋯ × I_n
is a partition of T; that is, the subrectangles C_k are non-overlapping and their union is T.
Let f : T → ℝ be a function defined on T. Consider a partition C of T as defined above, such that C is a family of m subrectangles C_m and
T = C_1 ∪ C_2 ∪ ⋯ ∪ C_m.
We can approximate the total (n + 1)-dimensional volume bounded below by the n-dimensional hyperrectangle T and above by the n-dimensional graph of f with the following Riemann sum:
Σ_{k=1}^{m} f(P_k) m(C_k),
where P_k is a point in C_k and m(C_k) is the product of the lengths of the intervals whose Cartesian product is C_k, also known as the measure of C_k.
The diameter of a subrectangle C_k is the largest of the lengths of the intervals whose Cartesian product is C_k. The diameter of a given partition of T is defined as the largest of the diameters of the subrectangles in the partition. Intuitively, as the diameter of the partition is restricted smaller and smaller, the number of subrectangles m gets larger, and the measure m(C_k) of each subrectangle grows smaller. The function f is said to be Riemann integrable if the limit
S = lim_{δ→0} Σ_{k=1}^{m} f(P_k) m(C_k)
exists, where the limit is taken over all possible partitions of T of diameter at most δ.
If f is Riemann integrable, S is called the Riemann integral of f over T and is denoted
∫ ⋯ ∫_T f(x_1, x_2, ..., x_n) dx_1 ⋯ dx_n.
Frequently this notation is abbreviated as
∫_T f(x) d^n x,
where x represents the n-tuple (x_1, ..., x_n) and d^n x is the n-dimensional volume differential.
The Riemann integral of a function defined over an arbitrary bounded -dimensional set can be defined by extending that function to a function defined over a half-open rectangle whose values are zero outside the domain of the original function. Then the integral of the original function over the original domain is defined to be the integral of the extended function over its rectangular domain, if it exists.
In what follows the Riemann integral in dimensions will be called the multiple integral.
Properties
Multiple integrals have many properties common to those of integrals of functions of one variable (linearity, commutativity, monotonicity, and so on). One important property of multiple integrals is that the value of an integral is independent of the order of integrands under certain conditions. This property is popularly known as Fubini's theorem.
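As a small illustration of this order-independence (the integrand and rectangle here are chosen for the example, not taken from the text): for f(x, y) = xy on [0, 1] × [0, 2],
∫_0^1 (∫_0^2 xy dy) dx = ∫_0^1 2x dx = 1, and ∫_0^2 (∫_0^1 xy dx) dy = ∫_0^2 (y/2) dy = 1,
so both orders of integration give the same value.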
Particular cases
In the case of T ⊆ ℝ², the integral
∬_T f(x, y) dx dy
is the double integral of f on T, and if T ⊆ ℝ³ the integral
∭_T f(x, y, z) dx dy dz
is the triple integral of f on T.
Notice that, by convention, the double integral has two integral signs, and the triple integral has three; this is a notational convention which is convenient when computing a multiple integral as an iterated integral, as shown later in this article.
Methods of integration
The resolution of problems with multiple integrals consists, in most cases, of finding a way to reduce the multiple integral to an iterated integral, a series of integrals of one variable, each being directly solvable. For continuous functions, this is justified by Fubini's theorem. Sometimes, it is possible to obtain the result of the integration by direct examination without any calculations.
The following are some simple methods of integration:
Integrating constant functions
When the integrand is a constant function c, the integral is equal to the product of c and the measure of the domain of integration. If c = 1 and the domain is a subregion of ℝ², the integral gives the area of the region, while if the domain is a subregion of ℝ³, the integral gives the volume of the region.
Example. Let f(x, y) = 2 and
D = {(x, y) ∈ ℝ² : 2 ≤ x ≤ 4, 3 ≤ y ≤ 6},
in which case
∬_D 2 dx dy = 2 ∬_D 1 dx dy = 2 · area(D) = 2 · (2 · 3) = 12,
since by definition we have:
∬_D 1 dx dy = area(D) = 6.
Use of symmetry
When the domain of integration is symmetric about the origin with respect to at least one of the variables of integration and the integrand is odd with respect to this variable, the integral is equal to zero, as the integrals over the two halves of the domain have the same absolute value but opposite signs. When the integrand is even with respect to this variable, the integral is equal to twice the integral over one half of the domain, as the integrals over the two halves of the domain are equal.
Example 1. Consider the function f(x, y) = 2 sin(x) − 3y³ + 5 integrated over the domain
D = {(x, y) ∈ ℝ² : x² + y² ≤ 1},
a disc with radius 1 centered at the origin with the boundary included.
Using the linearity property, the integral can be decomposed into three pieces:
∬_D (2 sin(x) − 3y³ + 5) dx dy = ∬_D 2 sin(x) dx dy − ∬_D 3y³ dx dy + ∬_D 5 dx dy.
The function 2 sin(x) is an odd function in the variable x and the disc D is symmetric with respect to the y-axis, so the value of the first integral is 0. Similarly, the function 3y³ is an odd function of y, and D is symmetric with respect to the x-axis, and so the only contribution to the final result is that of the third integral. Therefore the original integral is equal to the area of the disk times 5, or 5π.
Example 2. Consider the function f(x, y, z) = x exp(y² + z²) and as integration region the ball with radius 2 centered at the origin,
T = {(x, y, z) ∈ ℝ³ : x² + y² + z² ≤ 4}.
The "ball" is symmetric about all three axes, but it is sufficient to integrate with respect to the x-axis to show that the integral is 0, because the function is an odd function of that variable.
Normal domains on
This method is applicable to any domain for which:
The projection of D onto either the x-axis or the y-axis is bounded by two values, a and b, and
Any line perpendicular to this axis that passes between these two values intersects the domain in an interval whose endpoints are given by the graphs of two functions, α and β.
Such a domain will be here called a normal domain. Elsewhere in the literature, normal domains are sometimes called type I or type II domains, depending on which axis the domain is fibred over. In all cases, the function to be integrated must be Riemann integrable on the domain, which is true (for instance) if the function is continuous.
x-axis
If the domain D is normal with respect to the x-axis, and f : D → ℝ is a continuous function; then α(x) and β(x) (both of which are defined on the interval [a, b]) are the two functions that determine D. Then, by Fubini's theorem:
∬_D f(x, y) dx dy = ∫_a^b dx ∫_{α(x)}^{β(x)} f(x, y) dy.
y-axis
If D is normal with respect to the y-axis and f : D → ℝ is a continuous function; then α(y) and β(y) (both of which are defined on the interval [a, b]) are the two functions that determine D. Again, by Fubini's theorem:
∬_D f(x, y) dx dy = ∫_a^b dy ∫_{α(y)}^{β(y)} f(x, y) dx.
Normal domains on
If T is a domain that is normal with respect to the xy-plane and determined by the functions α(x, y) and β(x, y), then
∭_T f(x, y, z) dx dy dz = ∬_D dx dy ∫_{α(x, y)}^{β(x, y)} f(x, y, z) dz.
This definition is the same for the other five normality cases on ℝ³. It can be generalized in a straightforward way to domains in ℝⁿ.
Change of variables
The limits of integration are often not easily interchangeable (without normality or with complex formulae to integrate). One makes a change of variables to rewrite the integral in a more "comfortable" region, which can be described in simpler formulae. To do so, the function must be adapted to the new coordinates.
Example 1a. The function is ; if one adopts the substitution , therefore , one obtains the new function .
Similarly for the domain because it is delimited by the original variables that were transformed before ( and in example)
The differentials dx and dy transform via the absolute value of the determinant of the Jacobian matrix containing the partial derivatives of the transformations regarding the new variables (consider, as an example, the differential transformation in polar coordinates).
There exist three main "kinds" of changes of variable (one in ℝ², two in ℝ³); however, more general substitutions can be made using the same principle.
Polar coordinates
In ℝ², if the domain has a circular symmetry and the function has some particular characteristics one can apply the transformation to polar coordinates (see the example in the picture), which means that the generic point P(x, y) in Cartesian coordinates switches to its respective point in polar coordinates. That allows one to change the shape of the domain and simplify the operations.
The fundamental relation to make the transformation is the following:
f(x, y) → f(ρ cos φ, ρ sin φ).
Example 2a. The function is f(x, y) = x + y and applying the transformation one obtains
f(ρ, φ) = ρ cos φ + ρ sin φ = ρ(cos φ + sin φ).
Example 2b. The function is f(x, y) = x² + y², in this case one has:
f(ρ, φ) = ρ² cos² φ + ρ² sin² φ = ρ²,
using the Pythagorean trigonometric identity (which can be useful to simplify this operation).
The transformation of the domain is made by defining the radius' crown length and the amplitude of the described angle to define the intervals starting from .
Example 2c. The domain is D = {x² + y² ≤ 4}, that is a circumference of radius 2; it's evident that the covered angle is the full circle angle, so φ varies from 0 to 2π, while the crown radius varies from 0 to 2 (the crown with the inside radius null is just a circle).
Example 2d. The domain is D = {x² + y² ≥ 4, x² + y² ≤ 9, y ≥ 0}, that is the circular crown in the positive y half-plane (please see the picture in the example); φ describes a plane angle while ρ varies from 2 to 3. Therefore the transformed domain will be the following rectangle:
T = {2 ≤ ρ ≤ 3, 0 ≤ φ ≤ π}.
The Jacobian determinant of that transformation is the following:
∂(x, y)/∂(ρ, φ) = cos φ · ρ cos φ − (−ρ sin φ) · sin φ = ρ cos² φ + ρ sin² φ = ρ,
which has been obtained by inserting the partial derivatives of x = ρ cos φ, y = ρ sin φ, in the first column with respect to ρ and in the second with respect to φ, so the dx dy differentials in this transformation become ρ dρ dφ.
Once the function is transformed and the domain evaluated, it is possible to define the formula for the change of variables in polar coordinates:
∬_D f(x, y) dx dy = ∬_T f(ρ cos φ, ρ sin φ) ρ dρ dφ.
Here φ is valid in the interval [0, 2π] while ρ, which is a measure of a length, can only have positive values.
Example 2e. The function is and the domain is the same as in Example 2d. From the previous analysis of we know the intervals of (from 2 to 3) and of (from 0 to ). Now we change the function:
.
Finally let's apply the integration formula:
.
Once the intervals are known, you have
.
Cylindrical coordinates
In ℝ³ the integration on domains with a circular base can be made by the passage to cylindrical coordinates; the transformation of the function is made by the following relation:
f(x, y, z) → f(ρ cos φ, ρ sin φ, z).
The domain transformation can be graphically attained, because only the shape of the base varies, while the height follows the shape of the starting region.
Example 3a. The region is D = {x² + y² ≥ 4, x² + y² ≤ 9, y ≥ 0, 0 ≤ z ≤ 5} (that is the "tube" whose base is the circular crown of Example 2d and whose height is 5); if the transformation is applied, this region is obtained:
T = {2 ≤ ρ ≤ 3, 0 ≤ φ ≤ π, 0 ≤ z ≤ 5}
(that is, the parallelepiped whose base is similar to the rectangle in Example 2d and whose height is 5).
Because the z component is unvaried during the transformation, the dx dy dz differentials vary as in the passage to polar coordinates: therefore, they become ρ dρ dφ dz.
Finally, it is possible to apply the final formula to cylindrical coordinates:
∭_D f(x, y, z) dx dy dz = ∭_T f(ρ cos φ, ρ sin φ, z) ρ dρ dφ dz.
This method is convenient in case of cylindrical or conical domains or in regions where it is easy to identify the z interval and even transform the circular base and the function.
Example 3b. The function is and as integration domain this cylinder: . The transformation of in cylindrical coordinates is the following:
.
while the function becomes
.
Finally one can apply the integration formula:
;
developing the formula you have
.
Spherical coordinates
In ℝ³ some domains have a spherical symmetry, so it's possible to specify the coordinates of every point of the integration region by two angles and one distance. It's possible to use therefore the passage to spherical coordinates; the function is transformed by this relation:
f(x, y, z) → f(ρ sin θ cos φ, ρ sin θ sin φ, ρ cos θ).
Points on the z-axis do not have a precise characterization in spherical coordinates, so φ can vary between 0 and 2π.
The better integration domain for this passage is the sphere.
Example 4a. The domain is D = {x² + y² + z² ≤ 16} (sphere with radius 4 and center at the origin); applying the transformation you get the region
T = {0 ≤ ρ ≤ 4, 0 ≤ θ ≤ π, 0 ≤ φ ≤ 2π}.
The Jacobian determinant of this transformation is the following:
∂(x, y, z)/∂(ρ, θ, φ) = ρ² sin θ.
The differentials therefore are transformed to ρ² sin θ dρ dθ dφ.
This yields the final integration formula:
∭_D f(x, y, z) dx dy dz = ∭_T f(ρ sin θ cos φ, ρ sin θ sin φ, ρ cos θ) ρ² sin θ dρ dθ dφ.
It is better to use this method in case of spherical domains and in case of functions that can be easily simplified by the first fundamental relation of trigonometry extended to ℝ³ (see Example 4b); in other cases it can be better to use cylindrical coordinates (see Example 4c).
.
The extra ρ² and the sine factor come from the Jacobian.
In the following examples the roles of φ and θ have been reversed.
Example 4b. D is the same region as in Example 4a and f(x, y, z) = x² + y² + z² is the function to integrate. Its transformation is very easy:
f(ρ, θ, φ) = ρ²,
while we know the intervals of the transformed region T from D:
T = {0 ≤ ρ ≤ 4, 0 ≤ φ ≤ π, 0 ≤ θ ≤ 2π}.
We therefore apply the integration formula:
∭_D (x² + y² + z²) dx dy dz = ∭_T ρ² · ρ² sin φ dρ dθ dφ,
and, developing, we get
∫_0^{2π} dθ ∫_0^π sin φ dφ ∫_0^4 ρ⁴ dρ = 2π · 2 · (1024/5) = 4096π/5.
Example 4c. The domain D is the ball with center at the origin and radius 3a,
D = {(x, y, z) ∈ ℝ³ : x² + y² + z² ≤ 9a²},
and f(x, y, z) = x² + y² is the function to integrate.
Looking at the domain, it seems convenient to adopt the passage to spherical coordinates, in fact, the intervals of the variables that delimit the new region T are:
T = {0 ≤ ρ ≤ 3a, 0 ≤ φ ≤ π, 0 ≤ θ ≤ 2π}.
However, applying the transformation, we get
f(ρ, θ, φ) = ρ² sin² φ cos² θ + ρ² sin² φ sin² θ = ρ² sin² φ.
Applying the formula for integration we obtain:
,
which can be solved by turning it into an iterated integral.
,
,
.
Collecting all parts,
.
Alternatively, this problem can be solved by using the passage to cylindrical coordinates. The new intervals are
;
the interval has been obtained by dividing the ball into two hemispheres simply by solving the inequality from the formula of (and then directly transforming into ). The new function is simply . Applying the integration formula
.
Then we get:
Thanks to the passage to cylindrical coordinates it was possible to reduce the triple integral to an easier one-variable integral.
| Mathematics | Calculus and analysis | null |
1984110 | https://en.wikipedia.org/wiki/Gigantothermy | Gigantothermy | Gigantothermy (sometimes called ectothermic homeothermy or inertial homeothermy) is a phenomenon with significance in biology and paleontology, whereby large, bulky ectothermic animals are more easily able to maintain a constant, relatively high body temperature than smaller animals by virtue of their smaller surface-area-to-volume ratio. A bigger animal has proportionately less of its body close to the outside environment than a smaller animal of otherwise similar shape, and so it gains heat from, or loses heat to, the environment much more slowly.
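As a rough illustration with an idealized spherical body (the sphere is a simplification introduced here, not a claim from the text): surface area 4πr² grows with the square of the radius while volume (4/3)πr³ grows with its cube, so
A/V = 4πr² / ((4/3)πr³) = 3/r,
and doubling an animal's linear size halves the relative surface through which it exchanges heat with the environment.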
The phenomenon is important in the biology of ectothermic megafauna, such as large turtles, and aquatic reptiles like ichthyosaurs and mosasaurs. Gigantotherms, though almost always ectothermic, generally have a body temperature similar to that of endotherms. It has been suggested that the larger dinosaurs would have been gigantothermic, rendering them virtually homeothermic.
Disadvantages
Gigantothermy allows animals to maintain body temperature, but is most likely detrimental to endurance and muscle power as compared with endotherms due to decreased anaerobic efficiency. Mammals' bodies have roughly four times as much surface area occupied by mitochondria as reptiles, necessitating larger energy demands, and consequently producing more heat to use in thermoregulation. An ectotherm the same size as an endotherm would not be able to remain as active as the endotherm, as heat is modulated behaviorally rather than biochemically. More time is dedicated to basking than eating.
Advantages
Large ectotherms displaying the same body size as large endotherms have the advantage of a slow metabolic rate, meaning that it takes reptiles longer to digest their food. Consequently gigantothermic ectotherms would not have to eat as often as large endotherms that need to maintain a constant influx of food to meet energy demands. Although lions are much smaller than crocodiles, the lions must eat more often than crocodiles because of the higher metabolic output necessary to maintain the lion's heat and energy. The crocodile needs only to lie in the sun to digest more quickly and synthesize ATP.
| Biology and health sciences | Basics | Biology |
1984367 | https://en.wikipedia.org/wiki/Glacial%20lake | Glacial lake | A glacial lake is a body of water with origins from glacier activity. They are formed when a glacier erodes the land and then melts, filling the depression created by the glacier.
Formation
Near the end of the last glacial period, roughly 10,000 years ago, glaciers began to retreat. A retreating glacier often left behind large deposits of ice in hollows between drumlins or hills. As the ice age ended, these melted to create lakes. These lakes are often surrounded by drumlins, along with other evidence of the glacier such as moraines, eskers and erosional features such as striations and chatter marks.
These lakes are clearly visible in aerial photos of landforms in regions that were glaciated during the last ice age.
The formation and characteristics of glacial lakes vary between location and can be classified into glacial erosion lake, ice-blocked lake, moraine-dammed lake, other glacial lake, supraglacial lake, and subglacial lake.
Glacial lakes and changing climate
Since the glaciation of the Little Ice Age, Earth has lost more than 50% of its glaciers. This along with the current increase in retreating glaciers caused by climate change has created a shift from frozen to liquid water, increasing the extent and volume of glacial lakes around the world. Most glacial lakes present today can be found in Asia, Europe, and North America. The area which will see the greatest increase in lake formation is the Southern Tibetan Plateau region from debris covered glaciers. This increase in glacial lake formation also indicates an increase in occurrence of glacial lake outburst flood events caused by damming and subsequent breaking of moraine and ice.
Sediments
The amount of sediment found in glacial lakes varies, and has a general stratigraphic sequence of organic muds, glacial clays, silty clays, and sands based on time of formation.
Over time the glacial lake sediments are subjected to change. As seen in the English Lake District, the layers of the sediments at the bottom of the lakes contain evidence of the rate of erosion. The elemental makeup of the sediments is not associated with the lakes themselves, but with the migration of elements within the soil, such as iron and manganese.
The distribution of these elements, within the lake bed, are attributed to the condition of the drainage basin and the chemical composition of the water.
Sediment deposition can also be influenced by animal activity; including the distribution of biochemical elements, which are elements that are found in organic organisms, such as phosphorus and sulfur.
The amount of halogens and boron found in the sediments accompanies a change in erosional activity. The rate of deposition reflects the amount of halogen and boron in the deposited sediments.
The scouring action of the glaciers pulverizes minerals in the rock over which the glacier passes. These pulverized minerals become sediment at the bottom of the lake, and some of the rock flour becomes suspended in the water column. These suspended minerals support a large population of algae, making the water appear green.
Glacial lake sediments also archive changes in geochemistry and pollen records as a result of climate change and human activities. During the transition from the Last Glacial Period to the Holocene climatic optimum, soil development was enhanced, whereas early human activities such as deforestation have resulted in elevated soil erosion. These events can be reflected in geochemistry and isotope signatures in the lake sediments.
Biotic ecosystem
Biodiversity and productivity tend to be lower in glacial lakes as only cold-tolerant and cold-adapted species can withstand their harsh conditions. Glacial rock flour and low nutrient levels create an oligotrophic environment where few species of plankton, fish and benthic organisms reside.
Before becoming a lake, the first stages of glacial recession melt enough freshwater to form a shallow lagoon. In the case of Iceland's Jökulsárlón glacial lagoon located on the edge of the Atlantic Ocean, tides bring in an array of fish species to the edge of the glacier. These fish attract an abundance of predators, from birds to marine mammals, that are searching for food. These predators include fauna such as seals, arctic terns and arctic skuas.
Glacial lakes that have been formed for a long period of time have a more diverse ecosystem of fauna originating from neighboring tributaries or other glacial refugia. For example, many native species of the Great Lakes basin entered via the Mississippi basin refugia within the past 14,000 years.
Societal perspectives
Glacial lakes act as fresh water storage for the replenishing of a region's water supply and serve as potential electricity producers from hydropower.
Glacial lakes' aesthetic nature can also stimulate economic activity through the attraction of the tourism industry. Thousands of tourists visit the Jökulsárlón glacial lagoon in Iceland annually to take part in commercial boat tours and every two to four years thousands visit the Argentino glacial lake in Argentina to witness the collapse of the cyclically formed arch of ice from the Perito Moreno glacier, making it one of the largest travel destinations in Patagonia.
Gallery
| Physical sciences | Hydrology | Earth science |
1984989 | https://en.wikipedia.org/wiki/Minibike | Minibike | A minibike is a two-wheeled, motorized, off-highway recreational vehicle popularized in the 1960s and 1970s, but available continuously from a wide variety of manufacturers since 1959. Their off-highway nature and (in many countries) typically entirely off-road legal status differentiate minibikes from motorcycles and mopeds, and their miniature size differentiates them from dirt bikes.
Traditionally, minibikes have a four-stroke, horizontal crankshaft engine, single- or two-speed centrifugal clutch transmissions with chain final-drive, 4" or 6" wheels and a low frame/seat height with elevated handlebars. Commercially available minibikes are usually equipped with small engines commonly found elsewhere on utilitarian equipment such as garden tillers.
History
While the minibike had precursors in machines such as the Doodle Bug and Cushman Scooters, which share smaller wheels, tubular-steel frames, and air-cooled, single-cylinder engines, those vehicles had larger seat heights and lighting that allow them to be registered for road use as scooters. In the 1950s, minibikes were hand-made by enthusiasts. These were first popularly used as pit bikes, for drag racers to travel in the staging-areas during races. One of these "Pit bikes" was received by brothers Ray, Larry and Regis Michrina in early 1959 from local car dealer and racer Troy Ruttman.
The Michrina Brothers would create the first commercial minibikes by drawing inspiration from this Pit Bike, delivering 3 prototypes to Troy Ruttman to sell through his dealership. The Michrina brothers are credited with creating the minibike but failed to patent the design or trademark the term when founding their Lil Indian brand in 1959. Lil Indian would go on to manufacture tens of thousands of minibikes over more than 40 years. From the mid-1960s into the 1970s, the popularity of these machines saw over a hundred manufacturers attempt to market them, an inexpensive venture due to the absence of patents. So popular and simple was the design that the June 1967 issue of Popular Mechanics magazine included an article with plans.
As the market for minibikes developed, a variety of cottage and major industries offered models, including Arctic Cat, Rupp, Taco, Heath, Gilson, and Fox. Traditional motorcycle manufacturers also released models inspired by aspects of minibikes, most famously Honda with the Z50A, though this style was nicknamed a Monkey Bike due to its monkey-like riding position. Sales peaked in 1973, with 140,000 units between manufacturers. By 1976 the bubble had burst and fewer than ten manufacturers continued to make minibikes. Popularity declined steadily, but leveled off in the early 1990s. Currently, machines can still be found at various retailers for less than $800.
The wide availability of cheap, generic components manufactured in China has given rise to the popularity of home-assembled minibikes. These bikes typically have simple, boxy tube frames, small wheels, and are often built with some parts repurposed from Go-Karts, dirt bikes, or gas-powered tools. Bikes built this way can range from underpowered machines running on lawnmower motors up to extremely powerful ones capable of speeds up to 100 miles per hour. Despite not being road legal, recreational riding of these bikes, especially in large groups, has become popular in many cities in Southern California.
Recently there has been a trend of adult sized electric minibikes.
Legal status
In some jurisdictions, it is not legal to operate minibikes in certain places or without regulatory-specified special equipment.
Canada
A minibike imported to Canada can be classified as a competition vehicle or as a restricted-use motorcycle, which must have a Vehicle Identification Number. Models for younger children are marked as ride-on toys because they do not meet Transport Canada safety requirements. Anyone caught riding a minibike on public roads can be charged under the Highway Traffic Act (HTA) and the Compulsory Automobile Insurance Act (CAIA).
UK
It is not legal for minibikes to be used on public roads or land. Further, it is not legal to use minibikes on a property close to a populated area if the rider is cited for noise pollution.
US
While laws vary by state, minibikes became unlawful for use on public thoroughfares due to their lack of safety equipment and lights, and because their diminutive size causes visibility problems. In 1977, the Consumer Product Safety Commission (CPSC) was unsuccessfully lobbied to add federal regulation of minibikes. By 1979, minibikes could not be operated on US public roads, but they could still be operated in areas open to other recreational vehicles, provided they carried a specified set of equipment fitted at the time of sale, most notably a spark arrestor on the exhaust. In many US states minibikes can be made street legal.
| Technology | Motorized road transport | null |
1985133 | https://en.wikipedia.org/wiki/Bupivacaine | Bupivacaine | Bupivacaine, marketed under the brand name Marcaine among others, is a medication used to decrease sensation in a specific small area. In nerve blocks, it is injected around a nerve that supplies the area, or into the spinal canal's epidural space. It is available mixed with a small amount of epinephrine to increase the duration of its action. It typically begins working within 15 minutes and lasts for 2 to 8 hours.
Possible side effects include sleepiness, muscle twitching, ringing in the ears, changes in vision, low blood pressure, and an irregular heart rate. Concerns exist that injecting it into a joint can cause problems with the cartilage. Concentrated bupivacaine is not recommended for epidural freezing. Epidural freezing may also increase the length of labor. It is a local anaesthetic of the amide group.
Bupivacaine was discovered in 1957. It is on the World Health Organization's List of Essential Medicines. Bupivacaine is available as a generic medication. An implantable formulation of bupivacaine (Xaracoll) was approved for medical use in the United States in August 2020.
Medical uses
Bupivacaine is indicated for local infiltration, peripheral nerve block, sympathetic nerve block, and epidural and caudal blocks. It is sometimes used in combination with epinephrine to prevent systemic absorption and extend the duration of action. The 0.75% (most concentrated) formulation is used in retrobulbar block. It is the most commonly used local anesthetic in epidural anesthesia during labor, as well as in postoperative pain management. Liposomal formulations of bupivacaine (brand name EXPAREL) have not shown clinical benefit compared to plain bupivacaine when used in traditional perineural injections, although some industry-funded studies have suggested benefits when used in local infiltration.
The fixed-dose combination of bupivacaine with Type I collagen (brand name Xaracoll) is indicated for acute postsurgical analgesia (pain relief) for up to 24 hours in adults following open inguinal hernia repair.
Bupivacaine (Posimir) is indicated in adults for administration into the subacromial space under direct arthroscopic visualization to produce post-surgical analgesia for up to 72 hours following arthroscopic subacromial decompression.
Contraindications
Bupivacaine is contraindicated in patients with known hypersensitivity reactions to bupivacaine or amino-amide anesthetics. It is also contraindicated in obstetrical paracervical blocks and intravenous regional anaesthesia (Bier block) because of potential risk of tourniquet failure and systemic absorption of the drug and subsequent cardiac arrest. The 0.75% formulation is contraindicated in epidural anesthesia during labor because of the association with refractory cardiac arrest.
Adverse effects
Compared to other local anaesthetics, bupivacaine is markedly cardiotoxic. However, adverse drug reactions are rare when it is administered correctly. Most reactions are caused by accelerated absorption from the injection site, unintentional intravascular injection, or slow metabolic degradation. However, allergic reactions can rarely occur.
Clinically significant adverse events result from systemic absorption of bupivacaine and primarily involve the central nervous and cardiovascular systems. Effects on the central nervous system typically occur at lower blood plasma concentrations. Initially, cortical inhibitory pathways are selectively inhibited, causing symptoms of neuronal excitation. At higher plasma concentrations, both inhibitory and excitatory pathways are inhibited, causing central nervous system depression and potentially coma. Higher plasma concentrations also lead to cardiovascular effects, though cardiovascular collapse may also occur with low concentrations. Adverse effects on the central nervous system may indicate impending cardiotoxicity and should be carefully monitored.
Central nervous system: circumoral numbness, facial tingling, vertigo, tinnitus, restlessness, anxiety, dizziness, seizure, coma
Cardiovascular: hypotension, arrhythmia, bradycardia, heart block, cardiac arrest
Toxicity can also occur in the setting of subarachnoid injection during high spinal anesthesia. These effects include: paresthesia, paralysis, apnea, hypoventilation, fecal incontinence, and urinary incontinence. Additionally, bupivacaine can cause chondrolysis after continuous infusion into a joint space.
Bupivacaine has caused several deaths when the epidural anaesthetic has been administered intravenously accidentally.
Treatment of overdose
Animal evidence indicates that intralipid, a commonly available intravenous lipid emulsion, can be effective in treating severe cardiotoxicity secondary to local anaesthetic overdose, and human case reports describe its successful use in this way. Plans to publicize this treatment more widely have been published.
Pregnancy and lactation
Bupivacaine crosses the placenta and is a pregnancy category C drug. However, it is approved for use at term in obstetrical anesthesia. Bupivacaine is excreted in breast milk. Risks of stopping breast feeding versus stopping bupivacaine should be discussed with the patient.
Postarthroscopic glenohumeral chondrolysis
Bupivacaine is toxic to cartilage and its intra-articular infusions may lead to postarthroscopic glenohumeral chondrolysis.
Pharmacology
Pharmacodynamics
Bupivacaine binds to the intracellular portion of voltage-gated sodium channels and blocks sodium influx into nerve cells, which prevents depolarization. Without depolarization, no initiation or conduction of a pain signal can occur.
Pharmacokinetics
The rate of systemic absorption of bupivacaine and other local anesthetics is dependent upon the dose and concentration of drug administered, the route of administration, the vascularity of the administration site, and the presence or absence of epinephrine in the preparation.
Onset of action (route and dose-dependent): 1–17 min
Duration of action (route and dose-dependent): 2–9 hr
Half-life: neonates 8.1 hr; adults 2.7 hr (see the elimination sketch after this list)
Time to peak plasma concentration (for peripheral, epidural, or caudal block): 30–45 min
Protein binding: about 95%
Metabolism: hepatic
Excretion: renal (6% unchanged)
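As a rough aid to reading the half-life figures above, the following minimal sketch assumes simple first-order, single-compartment elimination, a deliberate simplification of bupivacaine's real disposition; the 8-hour time point is chosen only for illustration.

    def fraction_remaining(t_hours, half_life_hours):
        """Fraction of an absorbed dose remaining after t hours, assuming
        simple first-order (single-compartment) elimination."""
        return 0.5 ** (t_hours / half_life_hours)

    # Half-lives from the list above: adults about 2.7 hr, neonates about 8.1 hr.
    for label, t_half in (("adult", 2.7), ("neonate", 8.1)):
        print(label, round(fraction_remaining(8.0, t_half), 2))
    # adult 0.13, neonate 0.5: after eight hours an adult has eliminated most of a
    # dose while a neonate still retains about half, which is why the longer
    # neonatal half-life matters clinically.

Under the same simplification, roughly 97% of a dose is eliminated after five half-lives, whichever half-life applies.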
Chemical structure
Like lidocaine, bupivacaine is an amino-amide anesthetic; the aromatic head and the hydrocarbon chain are linked by an amide bond rather than an ester as in earlier local anesthetics. As a result, the amino-amide anesthetics are more stable and less likely to cause allergic reactions. Unlike lidocaine, the terminal amino portion of bupivacaine (as well as mepivacaine, ropivacaine, and levobupivacaine) is contained within a piperidine ring; these agents are known as pipecholyl xylidines.
Society and culture
Legal status
On 17 September 2020, the Committee for Medicinal Products for Human Use (CHMP) of the European Medicines Agency (EMA) adopted a positive opinion, recommending the granting of a marketing authorization for the medicinal product Exparel, intended for the treatment of post-operative pain. The applicant for this medicinal product is Pacira Ireland Limited. Exparel liposomal was approved for medical use in the European Union in November 2020.
Economics
Bupivacaine is available as a generic medication.
Research
Levobupivacaine is the (S)-(–)-enantiomer of bupivacaine; it has a longer duration of action and produces less vasodilation. Durect Corporation is developing a biodegradable, controlled-release drug delivery system for use after surgery. As of 2010, it had completed a phase-III clinical trial.
| Biology and health sciences | Anesthetics | Health |
1987761 | https://en.wikipedia.org/wiki/Leedsichthys | Leedsichthys | Leedsichthys is an extinct genus of pachycormid fish that lived in the oceans of the Middle to Late Jurassic. It is the largest ray-finned fish, and amongst the largest fish known to have ever existed.
The first remains of Leedsichthys were identified in the nineteenth century. Especially important were the finds by the British collector Alfred Nicholson Leeds, after whom the genus was named "Leeds' fish" in 1889. The type species is Leedsichthys problematicus. Leedsichthys fossils have been found in England, France, Germany and Chile. In 1999, based on the Chilean discoveries, a second species was named Leedsichthys notocetes, but this was later shown to be indistinguishable from L. problematicus.
Leedsichthys fossils have been difficult to interpret because the skeletons were not completely made of bone. Large parts consisted of cartilage that did not fossilize. On several occasions the enigmatic large partial remains have been mistaken for stegosaurian dinosaur bones. As the vertebrae are among the parts that have not been preserved, it is hard to determine the total body length. Estimates have varied significantly. At the beginning of the twentieth century, a length of was seen as plausible, but by its end Leedsichthys was sometimes claimed to have been over long. Recent research has lowered this to about for the largest individuals. Skull bones have been found indicating that Leedsichthys had a large head with bosses on the skull roof. Fossilised bony fin rays show large elongated pectoral fins and a tall vertical tail fin. The gill arches were lined with gill rakers equipped with a unique system of delicate bone plates, which filtered plankton, the animal's main food source, from the sea water.
Along with its close pachycormid relatives Bonnerichthys and Rhinconichthys, Leedsichthys is part of a lineage of large-sized filter-feeders that swam the Mesozoic seas for over 100 million years, from the middle Jurassic until the end of the Cretaceous period. Pachycormids might represent an early branch of Teleostei, the group most modern bony fishes belong to; in that case Leedsichthys is the largest known teleost fish.
Discovery and naming
During the 1880s, the gentleman farmer Alfred Nicholson Leeds collected large fish fossils from loam pits near Peterborough, England. In May 1886 these were inspected by John Whitaker Hulke, who in 1887 partially reported them as the back plates of the stegosaurian Omosaurus. On 22 August 1888, the American dinosaur expert Professor Othniel Charles Marsh visited Leeds' farm at Eyebury and quickly concluded that the presumed dinosaurian armour in fact represented the skull bones of a giant fish. Within two weeks British fish expert Arthur Smith Woodward examined the specimens and began to prepare a formal description published in 1889. In it he named the species Leedsichthys problematicus. The generic name Leedsichthys means "Leeds' fish", from Greek ἰχθύς, ichthys, "fish". The fossils found by Leeds gave the fish the specific epithet problematicus because the remains were so fragmented that they were extremely hard to recognize and interpret. After a second publication in 1889, objections were raised against the perceived "barbaric" nature of the generic name, which simply attached a non-Latinised British family name to a Classical Greek word. Woodward therefore in 1890 changed the genus name to Leedsia, resulting in a Leedsia problematica. However, by modern standards this is a non-valid junior synonym.
The holotype specimen, BMNH P.6921, had been found in a layer of the Oxford Clay Formation dating from the Callovian, about 165 million years old. It consists of 1133 disarticulated elements of the skeleton, mostly fin ray fragments, probably of a single individual. Another specimen, BMNH P.6922, contains additional probable fragmentary remains of Leedsichthys. Woodward also identified a specimen previously acquired from the French collector Tesson, who had in 1857 found them in the Falaises des Vaches Noires of Normandy, BMNH 32581, as the gill rakers of Leedsichthys. Another specimen bought in 1875 from the collection of William Cunnington, BMNH 46355, he failed to recognise.
Leeds continued to collect Leedsichthys fossils, which were subsequently acquired by British museums. In March 1898, Leeds reported having discovered a tail, which he sold on 17 March 1899 for £25 to the British Museum of Natural History; it was exhibited as specimen BMNH P.10000, a new inventory number range being begun for the occasion. Earlier, in July 1898, the front of probably the same animal had been bought, BMNH P.11823. On 22 July 1905 specimen BMNH P.10156, a gill basket, was acquired. In January 1915 Leeds sold specimens GLAHM V3362, a pectoral fin, and GLAHM V3363, the remainder of the same skeleton comprising 904 elements, to the Hunterian Museum of Glasgow.
Leeds had a rival, the collector Henry Keeping, who in 1899 tricked pit workers into selling him dorsal fin rays by misinforming them that Leeds had lost interest in such finds. Keeping in turn sold these to the University of Cambridge, where they were catalogued as specimen CAMSM J.46873. In September 1901, they were examined by the German palaeontologist Friedrich von Huene, who identified them as tail spikes (Schwanzstacheln) of Omosaurus, the second time Leedsichthys remains were mistaken for stegosaurian bones; Leeds himself was able to disabuse von Huene of this the same year.
In 2001, students at the Dogsthorpe Star Pit discovered a major new British specimen that they nicknamed "Ariston" after a 1991 commercial for the Indesit Ariston washing machine that claimed it went "on and on and on" — likewise the bones of Leedsichthys seemed to endlessly continue into the face of the loam pit. From 2002 until 2004 "Ariston" or specimen PETMG F174 was excavated by a team headed by Jeff Liston; to uncover the remains it was necessary to remove ten thousand tonnes of loam forming an overburden of thickness. The find generated considerable media attention, inspiring an episode of the BBC Sea Monsters series, "The Second Most Deadly Sea", and a Channel Four documentary titled The Big Monster Dig, both containing computer-generated animated reconstructions of Leedsichthys. Liston subsequently dedicated a dissertation and a series of articles to Leedsichthys, providing the first extensive modern osteology of the animal.
Apart from the British discoveries, finds of a more fragmentary nature continued to be made in Normandy, France. In July 1982, Germany became an important source of Leedsichthys fossils when two groups of amateur palaeontologists, unaware of each other's activities, began to dig up the same skeleton at Wallücke. Remarkably, parts of it were again incorrectly identified as stegosaurian material, of Lexovisaurus. From 1973 onwards, fragmentary Leedsichthys fossils were uncovered in Chile. In March 1994, a more complete specimen was found, SMNK 2573 PAL. In 1999 the Chilean finds were named as a second species, Leedsichthys notocetes, the "Southern Sea Monster". However, Liston later concluded that the presumed distinguishing traits of this species, depressions on the gill rakers, were artefacts caused by erosion; Leedsichthys notocetes would be a junior synonym of Leedsichthys problematicus.
Fossil range
The fossil remains of Leedsichthys have been found in the Callovian of England and northern Germany, the Oxfordian of Chile, and the Callovian and upper Kimmeridgian of France. These occurrences span a temporal range of at least five million years. A complete, isolated gill raker from the Vaca Muerta Formation of Argentina (MOZ-Pv 1788) has been assigned to the genus and dates to the early Tithonian.
Description
Although the remains of over seventy individuals have been found, most of them are partial and fragmentary. The skeleton of Leedsichthys is thus only imperfectly known. This is largely caused by the fact that many skeletal elements, including the front of the skull and the vertebral centra, did not ossify but remained cartilage. Furthermore, those that did ossify were gradually hollowed out during the lifetime of the animal by resorption of the inner bone tissue. In the fossil phase, compression flattened and cracked these hollow structures, making it extraordinarily difficult to identify them or determine their original form.
The head was probably relatively large and wide but still elongated. The snout is completely unknown. Frontal bones are absent. The skull roof is rather robust with bosses on the parietals, continuing sideways over the dermopterotica, and the postparietals. The parietals have a notch on the front midline. A dermosphenoticum is present above the eye socket. The jaws are toothless. Behind the jaw joint a robust hyomandibula is present. The gill basket rests on paired hypohyalia. At least the first two gill arches have ossified hypobranchialia, the lower parts of the gill arch; a third hypobranchiale was likely present. The hypobranchials are attached at their lower ends at an angle of 21.5° via a functional joint that possibly served to increase the gape of the mouth, to about two feet. All five gill arches have ossified ceratobranchialia with a triangular cross-section, the middle sections of the arches. The hypobranchials are fused with their ceratobranchials. The fifth gill arch is fused with the front parts of the basket. Higher epibranchialia and pharyngobranchialia are present but poorly known. The fourth arches are supported by a midline fourth basibranchiale. An ossified operculum is present.
The gill arches are equipped with rows of parallel 3-to-12-centimetre-long (1.2-to-4.7-inch-long) gill rakers, in life probably attached to the ceratobranchials via soft tissue. On the top of each raker one or two rows of dozens of low "teeth" are present. When there are two rows, they are placed on the edges of the upper surface and separated by a deep trough, itself separated from an internal hollow space by a transverse septum. The teeth or "fimbriations" are obliquely directed towards the front and the top. They are grooved at their sides, the striations continuing over the sides of the raker. Detailed study of exquisitely preserved French specimens revealed to Liston that these teeth were, again via soft tissue, each attached to delicate 2-millimetre-long (0.08-inch-long) bony plates, structures that had never before been observed among living or extinct fishes. An earlier hypothesis that the striations would function as sockets for sharp "needle teeth", as with the basking shark, was hereby refuted. The rakers served to filter plankton, the main food supply of Leedsichthys, from the sea water.
Large parts of the Leedsichthys fossils consist of bony fin rays. Leedsichthys has two pectoral fins that probably were located rather low on the body. They are large, very elongated — about five times longer than wide — and scythe-like, with a sudden kink at the lower end, curving 10° to the rear. A dorsal fin is also present, although its position is unknown. Pelvic fins at the belly are lacking, and a pelvic plate is absent. However, there are indications of a small triangular anal fin. The vertical tail fin is very large and symmetrical with paired upper and lower lobes; a smaller lobe in the middle protrudes between them. The rays are unsegmented lepidotrichia, resulting in a rather stiff structure. They are bifurcated at up to three splitting points along their length, so a proximally single ray may have eight distal ends. A row of bony supraneuralia is present behind the head, at each side of the vertebral column. Uroneuralia at the tail are unknown. No bony scales are present.
Size
Leedsichthys is the largest known member of the Osteichthyes, or bony fishes. The largest extant non-tetrapodomorph bony fish is the ocean sunfish, Mola mola, which, at a weight of up to two tonnes, is an order of magnitude smaller than Leedsichthys. The extant giant oarfish might rival Leedsichthys in length but is much thinner. The lack of a preserved vertebral column has made it difficult to estimate the exact length of Leedsichthys. Arthur Smith Woodward, who described the type specimen in 1889, estimated specimen BMNH P.10000 to belong to an individual around nine metres long, by comparing this tail of Leedsichthys, with its preserved height of , with another pachycormid, Hypsocormus. The length of Leedsichthys was not historically the subject of much attention, the only reference to it being made by Woodward himself when, in 1937, he again indicated it as on the museum label of BMNH P.10000. However, in 1986, David Martill compared the bones of Leedsichthys to a pachycormid that he had recently discovered, Asthenocormus. The unusual proportions of that specimen gave a wide range of possible sizes. Some were as low as , but extrapolating from the gill basket resulted in an estimated length of for Leedsichthys specimen NHM P.10156 (the earlier BMNH P.10156). Martill considered the higher estimate a plausible size for the largest individuals. Subsequently, a length of thirty metres (a hundred feet) was often mentioned in popular science publications, sometimes one as high as .
Liston, in his studies, arrived at much lower estimates. Documentation of historical finds and the excavation of "Ariston", the most complete specimen ever recovered from the Star Pit near Whittlesey, Peterborough, support Woodward's figures of between . With "Ariston" the pectoral fins are apart, indicating a narrow body of no excessive size, even though it was initially thought to have been long. In 2007 Liston stated that most specimens indicated lengths between . A linear extrapolation from the gill basket would be flawed because the gills grow disproportionately in size, having to increase their surface allometrically to ensure the oxygen supply of a body whose volume increases with the third power of length. The growth ring structures within the remains of Leedsichthys indicate that it would have taken 21 to 25 years to reach these lengths, and isolated elements from other specimens show that a maximum size of just over is not unreasonable.
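The reasoning against linear extrapolation can be put schematically; the exponent is left general because the actual allometric relationship for Leedsichthys is not known. If oxygen demand grows roughly with body volume, that is with $L^{3}$, while an isometrically growing gill surface would grow only with $L^{2}$, the gill surface must follow a steeper relation,

$$A_{\text{gill}} \propto L^{b}, \quad b > 2, \qquad \text{hence} \qquad g \propto L^{b/2} \quad \text{and} \quad \frac{g}{L} \propto L^{\,b/2-1},$$

where $g$ is a linear dimension of the gill basket. Because $g/L$ increases with body size, the simple proportion $L_{\text{est}} = L_{\text{ref}}\,(g/g_{\text{ref}})$ systematically overestimates the length of large individuals.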
In 2013, Liston and colleagues estimated that the age of the five specimens (PETMG F174, NHMUK PV P10000, GLAHM V3363, NHMUK PV P6921 and NHMUK PV P10156) would have ranged between 19 and 40 years old. The largest specimen, NHMUK PV P10156, on the basis of its gill basket with a preserved width of and height of , would have been 38 years old (2 years younger than the holotype NHMUK PV P6921) and measured long. In 2018, Ferron and colleagues estimated that this specimen would have weighed .
Phylogeny
Woodward initially assigned Leedsichthys to the Acipenseroidea, considering it related to the sturgeons, with which it shares large gill rakers and branching fin rays. In 1905, he changed this to the Pachycormidae. The Pachycormidae have a somewhat uncertain position. Often they are considered very basal Teleostei, in which case Leedsichthys would be the largest known teleost; others see them as members of a Pachycormiformes forming the sister group of the Teleostei, and sometimes they are seen as even more basal Amiiformes. In the latter case the extant bowfin, Amia calva, would be the closest living relative of Leedsichthys.
Within the Pachycormidae, a cladistic analysis found Leedsichthys to be the sister species of Asthenocormus, their clade being the sister group of Martillichthys.
This cladogram after Friedman et al. shows a possible position of Leedsichthys in the evolutionary tree.
Paleobiology
Like the largest fish today, the whale shark and basking shark, Leedsichthys problematicus derived its nutrition as a suspension feeder, using an array of specialised gill rakers lining its gill basket to extract zooplankton, small animals, from the water passing through its mouth and across its gills. It is less clear whether phytoplankton, such as algae, were also part of the diet. Leedsichthys could have been a ram feeder, making the water pass through its gills by swimming, but could also have actively pumped the water through the gill basket. In 2010, Liston suggested that fossilised furrows discovered in ancient sea floors in Switzerland, previously attributed to the activity of plesiosaurs, had in fact been made by Leedsichthys spouting water through its mouth to disturb and eat the benthos, the animals dwelling in the sea floor mud.
Much is still uncertain about the life cycle of Leedsichthys. Liston's 2013 study suggested slow, nearly linear growth. A 1993 French study of its bone structure concluded, however, that the metabolism was rather high. Also problematic is how Leedsichthys could have increased its size quickly during the first year of its life. Teleostei typically lay relatively small eggs, and this has been seen as an obstacle to their attaining giant sizes.
In 1986, Martill reported the presence of a tooth of the marine crocodile Metriorhynchus in a bone of Leedsichthys. The bone would have healed, a sign that the about 3-metre-long (10-foot-long) Metriorhynchus was actively hunting the much larger fish. However, in 2007 Liston concluded the bone tissue had not in fact healed and that this was probably a case of scavenging. A 2.5 m-long specimen FBS 2012.4.67.80, assigned to Metriorhynchus cf. superciliosus, was found with the gill apparatus of Leedsichthys and remains of invertebrates inside its stomach. Such content indicates that the diet of metriorhynchids was varied, and this individual most likely ate already dead fish. An apex predator of the Oxford Clay seas large enough to attack Leedsichthys was the pliosaurid Liopleurodon.
In 1999 Martill suggested that a climate change at the end of the Callovian led to the extinction of Leedsichthys in the northern seas, with the southern ocean offering a last refuge during the Oxfordian. However, in 2010 Liston pointed out that Leedsichthys was still present in the north during the later Kimmeridgian, as attested by Normandy finds. Liston did nevertheless consider in 2007 that the lack of any vertebrate suspension feeders as large as prior to the Callovian stage of the Mesozoic might indicate that the Callovian had seen a marked change in the productivity of zooplankton populations. Indeed, further studies supported this, viewing Leedsichthys as the beginning of a long line of large (> in length) pachycormid suspension feeders, such as Bonnerichthys and Rhinconichthys, that continued to flourish well into the Late Cretaceous, and emphasising the convergent evolutionary paths taken by pachycormids and baleen whales.
Recent studies have produced estimates of the metabolic rate and swimming speed of Leedsichthys. Using data from living teleost fish as a comparison, scientists found that Leedsichthys could have cruised at speeds of while still maintaining oxygenation of its body tissues.
| Biology and health sciences | Prehistoric osteichthyans | Animals |
24188286 | https://en.wikipedia.org/wiki/Clione%20limacina | Clione limacina | Clione limacina, known as the naked sea butterfly, sea angel, and common clione, is a sea angel (pelagic sea slug) found from the surface to greater than depth. It lives in the Arctic Ocean and cold regions of the North Atlantic Ocean. It was first described by Friderich Martens in 1676 and became the first gymnosomatous (without a shell) "pteropod" to be described.
Subspecies
Clione limacina australis (Bruguière, 1792)
Clione limacina limacina (Phipps, 1774)
Distribution
Clione limacina is found in cold waters of the Arctic Ocean and North Atlantic Ocean, ranging south at least to the Sargasso Sea. There are three other species in the genus, which formerly were included in C. limacina (either as subspecies, variants or subpopulations). These are C. elegantissima of the cold North Pacific (at least north to the Gulf of Alaska; the Beaufort Sea is inhabited by C. limacina), C. okhotensis of the Okhotsk Sea (where it overlaps with C. elegantissima), and C. antarctica of Antarctic waters.
Description
The two subspecies differ in body length. The northern subspecies lives in colder water, matures at and can reach a size of . This makes it by far the largest sea angel. In comparison, the size of the southern subspecies is , C. elegantissima is up to , C. okhotensis up to , and C. antarctica up to .
C. limacina swims by beating its two wings to move upwards or maintain itself at a constant depth. To keep itself upright during swimming, it uses two statocyst gravity-sensing organs that correct it to an upright posture using its tail.
Ecology
Clione limacina inhabits both the epipelagic and mesopelagic regions of the water column.
Feeding habits
Adults feed in a predator-prey relationship almost exclusively on the sea butterflies of the genus Limacina: on Limacina helicina and on Limacina retroversa. The feeding process of Clione limacina is somewhat extraordinary. The buccal ("mouth") apparatus consists of three pairs of buccal cones. These tentacles grab the shell of Limacina helicina. When the prey is in the right position, with its shell opening facing the radula of Clione limacina, it then grasps the prey with its chitinous hooks, everted from hook sacs. Then it extracts the body completely out of its shell and swallows it whole.
Adult Limacina are absent for much of the year, leaving C. limacina without access to its main food source. A study of 138 C. limacina during a period without adult Limacina found that the stomachs of 24 contained remains of amphipods and 3 contained remains of calanoids. This temporary prey change may allow them to survive periods of starvation, although the species can survive for one year without food. Under such exceptional starvation in the laboratory, the length of the slugs decreased on average from .
The earliest larval stages of C. limacina feed on phytoplankton, but from the later larval stage onward the diet changes to Limacina. The development of the two species is parallel: small C. limacina feed on Limacina of a size, while large C. limacina avoid small Limacina (including its larvae).
Life cycle
In Svalbard, the life cycle of C. limacina appears to be at least 2 years. It is a hermaphrodite and observations suggest this is simultaneous. It breeds during the spring and summer, and the eggs are about .
Clione limacina is prey for plankton feeders such as baleen whales, which historically led sailors to name it "whale-food". Some fishes are also its predators; for example, the chum salmon, Oncorhynchus keta, is a major predator of sea angels.
Cultural significance
While sea angels are relatively obscure in Western countries, they are extremely well known in Japanese culture. As a result, two creatures from the Pokémon franchise, Manaphy and Phione, are based on the clione. The Californian band glass beach also appears to use a drawing of a clione on the album cover of Plastic Death. Additionally, the game Dave the Diver features cliones as both regular creatures and a boss.
| Biology and health sciences | Gastropods | Animals |
18933037 | https://en.wikipedia.org/wiki/Boeing%20B-52%20Stratofortress | Boeing B-52 Stratofortress | The Boeing B-52 Stratofortress is an American long-range, subsonic, jet-powered strategic bomber. The B-52 was designed and built by Boeing, which has continued to provide support and upgrades. It has been operated by the United States Air Force (USAF) since the 1950s, and by NASA for nearly 50 years. The bomber can carry up to of weapons and has a typical combat range of around without aerial refueling.
Beginning with the successful contract bid in June 1946, the B-52 design evolved from a straight-wing aircraft powered by six turboprop engines to the final prototype YB-52 with eight turbojet engines and swept wings. The B-52 took its maiden flight in April 1952. The B-52 has been in service with the USAF since 1955, and NASA from 1959 to 2007. Built to carry nuclear weapons for Cold War-era deterrence missions, the B-52 Stratofortress replaced the Convair B-36 Peacemaker.
Superior performance at high subsonic speeds and relatively low operating costs have kept them in service despite the development of more advanced strategic bombers, such as the Mach 2+ Convair B-58 Hustler, the canceled Mach 3 North American XB-70 Valkyrie, the variable-geometry Rockwell B-1 Lancer, and the stealth Northrop Grumman B-2 Spirit. A veteran of several wars, the B-52 has dropped only conventional munitions in combat.
The B-52's official name Stratofortress is rarely used; informally, the aircraft is commonly referred to as the BUFF (Big Ugly Fat Fucker/Fella). There are 76 aircraft in inventory; 58 operated by active forces (2nd Bomb Wing and 5th Bomb Wing), 18 by reserve forces (307th Bomb Wing), and about 12 in long-term storage at the Davis-Monthan AFB Boneyard. The bombers flew under the Strategic Air Command (SAC) until it was disestablished in 1992 and its aircraft absorbed into the Air Combat Command (ACC); in 2010, all B-52 Stratofortresses were transferred from the ACC to the new Air Force Global Strike Command (AFGSC). The B-52 completed 60 years of continuous service with its original operator in 2015. After being upgraded between 2013 and 2015, the last airplanes are expected to serve into the 2050s.
Development
Origins
On 23 November 1945, Air Materiel Command (AMC) issued desired performance characteristics for a new strategic bomber "capable of carrying out the strategic mission without dependence upon advanced and intermediate bases controlled by other countries". The aircraft was to have a crew of five plus turret gunners, and a six-man relief crew. It was required to cruise at at with a combat radius of . The armament was to consist of an unspecified number of 20 mm cannons and of bombs. On 13 February 1946, the USAAF issued bid invitations for these specifications, with Boeing, Consolidated Aircraft, and Glenn L. Martin Company submitting proposals.
On 5 June 1946, Boeing's Model 462, a straight-wing aircraft powered by six Wright T35 turboprops with a gross weight of and a combat radius of , was declared the winner. On 28 June 1946, Boeing was issued a letter of contract for million to build a full-scale mockup of the new XB-52 and do preliminary engineering and testing. However, by October 1946, the USAAF began to express concern about the sheer size of the new aircraft and its inability to meet the specified design requirements. In response, Boeing produced the Model 464, a smaller four-engine version with a gross weight, which was briefly deemed acceptable.
Subsequently, in November 1946, the Deputy Chief of Air Staff for Research and Development, General Curtis LeMay, expressed the desire for a cruising speed of , to which Boeing responded with a aircraft. In December 1946, Boeing was asked to change their design to a four-engine bomber with a top speed of , range of , and the ability to carry a nuclear weapon; in total, the aircraft could weigh up to . Boeing responded with two models powered by T35 turboprops. The Model 464-16 was a "nuclear only" bomber with a payload, while the Model 464-17 was a general purpose bomber with a payload. Due to the cost associated with purchasing two specialized aircraft, the USAAF selected Model 464–17 with the understanding that it could be adapted for nuclear strikes.
In June 1947, the military requirements were updated and the Model 464-17 met all of them except for the range. It was becoming obvious to the USAAF that, even with the updated performance, the XB-52 would be obsolete by the time it entered production and would offer little improvement over the Convair B-36 Peacemaker; as a result, the entire project was postponed for six months. During this time, Boeing continued to perfect the design, which resulted in the Model 464–29 with a top speed of and a range. In September 1947, the Heavy Bombardment Committee was convened to ascertain performance requirements for a nuclear bomber. Formalized on 8 December 1947, these requirements called for a top speed of and an range, far beyond the capabilities of the 464-29.
The outright cancellation of the Boeing contract on 11 December 1947 was staved off by a plea from its president William McPherson Allen to the Secretary of the Air Force Stuart Symington. Allen reasoned that the design was capable of being adapted to new aviation technology and more stringent requirements. In January 1948, Boeing was instructed to thoroughly explore recent technological innovations, including aerial refueling and the flying wing. Noting stability and control problems Northrop Corporation was experiencing with their YB-35 and YB-49 flying wing bombers, Boeing insisted on a conventional aircraft, and in April 1948 presented a million (US$ today) proposal for design, construction, and testing of two Model 464-35 prototypes. Further revisions during 1948 resulted in an aircraft with a top speed of at , a range of , and a gross weight, which included of bombs and of fuel.
Design effort
In May 1948, Air Materiel Command asked Boeing to incorporate the previously discarded jet engine, with improvements in fuel efficiency, into the design. That resulted in the development of yet another revision: in July 1948, the Model 464-40 substituted Westinghouse J40 turbojets for the turboprops. The USAF project officer who reviewed the Model 464-40 was favorably impressed, especially since he had already been thinking along similar lines. Nevertheless, the government was concerned about the high fuel consumption rate of the jet engines of the day, and directed Boeing to use the turboprop-powered Model 464–35 as the basis for the XB-52. Although he agreed that turbojet propulsion was the future, General Howard A. Craig, Deputy Chief of Staff for Materiel, was not very enthusiastic about a jet-powered B-52 since he felt that the jet engine had not yet progressed sufficiently to permit skipping an intermediate turboprop stage. However, Boeing was encouraged to continue turbojet studies even without any expected commitment to jet propulsion.
On Thursday, 21 October 1948, Boeing engineers George S. Schairer, Art Carlsen, and Vaughn Blumenthal presented the design of a four-engine turboprop bomber to the chief of bomber development, Colonel Pete Warden. Warden was disappointed by the projected aircraft and asked if the Boeing team could produce a proposal for a four-engine turbojet bomber. Joined by Ed Wells, Boeing's vice president of engineering, the engineers worked that night in The Hotel Van Cleve in Dayton, Ohio, redesigning Boeing's proposal as a four-engine turbojet bomber. On Friday, Colonel Warden looked over the information and asked for a better design. Returning to the hotel, the Boeing team was joined by Bob Withington and Maynard Pennell, two top Boeing engineers who were in town on other business.
By late Friday night, they had laid out what was an essentially new airplane. The new design (464-49) built upon the basic layout of the B-47 Stratojet with 35-degree swept wings, eight engines paired in four underwing pods, and bicycle landing gear with wingtip outrigger wheels. A notable feature was the ability to pivot both fore and aft main landing gear up to 20° from the aircraft centerline to increase safety during crosswind landings (allowing the aircraft to "crab" or roll with a sideways slip angle down the runway). After a trip to a hobby shop for supplies, Schairer set to work building a model. The rest of the team focused on weight and performance data. Wells, who was also a skilled artist, completed the aircraft drawings. On Sunday, a stenographer was hired to type a clean copy of the proposal. On Monday, Schairer presented Colonel Warden with a neatly bound 33-page proposal and a scale model. The aircraft was projected to exceed all design specifications.
Although the full-size mock-up inspection in April 1949 was generally favorable, range again became a concern since the J40s and early model J57s had excessive fuel consumption. Despite talk of another revision of specifications or even a full design competition among aircraft manufacturers, General LeMay, now in charge of Strategic Air Command, insisted that performance should not be compromised due to delays in engine development. In a final attempt to increase range, Boeing created the larger 464-67, stating that once in production, the range could be further increased in subsequent modifications. Following several direct interventions by LeMay, Boeing was awarded a production contract for thirteen B-52As and seventeen detachable reconnaissance pods on 14 February 1951. The last major design change—also at General LeMay's insistence—was a switch from the B-47 style tandem seating to a more conventional side-by-side cockpit, which increased the effectiveness of the copilot and reduced crew fatigue. Both XB-52 prototypes featured the original tandem seating arrangement with a framed bubble-type canopy (see above images).
Tex Johnston noted, "The B-52, like the B-47, utilized a flexible wing. I saw the wingtip of the B-52 static test airplane travel , from the negative 1-G load position to the positive 4-G load position." The flexible structure allowed "...the wing to flex during gust and maneuvering loads, thus relieving high-stress areas and providing a smoother ride." During a 3.5-G pullup, "The wingtips appeared about 35 degrees above level flight position."
Pre-production and production
During ground testing on 29 November 1951, the XB-52's pneumatic system failed during a full-pressure test; the resulting explosion severely damaged the trailing edge of the wing, necessitating considerable repairs. The YB-52, the second XB-52 modified with more operational equipment, first flew on 15 April 1952 with "Tex" Johnston as the pilot. A 2-hour, 21-minute proving flight from Boeing Field, near Seattle, Washington, to Larson Air Force Base was undertaken with Boeing test pilot Johnston and USAF Lieutenant Colonel Guy M. Townsend. The XB-52 followed on 2 October 1952. The thorough development, including 670 days in the wind tunnel and 130 days of aerodynamic and aeroelastic testing, paid off with smooth flight testing. Encouraged, the USAF increased its order to 282 B-52s.
Only three of the 13 B-52As ordered were built. All were returned to Boeing and used in their test program. On 9 June 1952, the February 1951 contract was updated to order the aircraft under new specifications. The final 10, the first aircraft to enter active service, were completed as B-52Bs. At the roll-out ceremony on 18 March 1954, Air Force Chief of Staff General Nathan Twining said:
The B-52B was followed by progressively improved bomber and reconnaissance variants, culminating in the B-52G and turbofan B-52H. To allow rapid delivery, production lines were set up both at its main Seattle factory and at Boeing's Wichita facility. More than 5,000 companies were involved in the huge production effort, with 41% of the airframe being built by subcontractors. The prototypes and all B-52A, B and C models (90 aircraft) were built at Seattle. Testing of aircraft built in Seattle caused problems due to jet noise, which led to the establishment of curfews for engine tests. Aircraft were ferried east on their maiden flights to Larson Air Force Base near Moses Lake, where they were fully tested.
As production of the B-47 came to an end, the Wichita factory was phased in for B-52D production, with Seattle responsible for 101 D-models and Wichita 69. Both plants continued to build the B-52E, with 42 built at Seattle and 58 at Wichita, and the B-52F (44 from Seattle and 45 from Wichita). For the B-52G, Boeing decided in 1957 to transfer all production to Wichita, which freed up Seattle for other tasks, in particular, the production of airliners. Production ended in 1962 with the B-52H, with 742 aircraft built, plus the original two prototypes.
Upgrades
A proposed variant of the B-52H was the EB-52H, which would have consisted of 16 modified and augmented B-52H airframes with additional electronic jamming capabilities. This variant would have restored USAF airborne jamming capability that it lost on retiring the EF-111 Raven. The program was canceled in 2005 following the removal of funds for the stand-off jammer. The program was revived in 2007 and cut again in early 2009.
In July 2013, the USAF began a fleet-wide technological upgrade of its B-52 bombers called Combat Network Communications Technology (CONECT) to modernize electronics, communications technology, computing, and avionics on the flight deck. CONECT upgrades include software and hardware such as new computer servers, modems, radios, data-links, receivers, and digital workstations for the crew. One update is the AN/ARC-210 Warrior beyond-line-of-sight software programmable radio able to transmit voice, data, and information in-flight between B-52s and ground command and control centers, allowing the transmission and reception of data with updated intelligence, mapping, and targeting information; previous in-flight target changes required copying down coordinates. The ARC-210 allows machine-to-machine transfer of data, useful on long-endurance missions where targets may have moved before the arrival of the B-52. The aircraft will be able to receive information through Link-16. CONECT upgrades will cost billion overall and take several years. Funding has been secured for 30 B-52s; the USAF hopes for 10 CONECT upgrades per year, but the rate has yet to be decided.
Weapons upgrades include the 1760 Internal Weapons Bay Upgrade (IWBU), which gives a 66 percent increase in weapons payload using a digital interface (MIL-STD-1760) and rotary launcher. IWBU is expected to cost roughly million. The 1760 IWBU will allow the B-52 to carry eight JDAM bombs, AGM-158B JASSM-ER cruise missile and the ADM-160C MALD-J decoy missiles internally. All 1760 IWBUs should be operational by October 2017. Two bombers will have the ability to carry 40 weapons in place of the 36 that three B-52s can carry. The 1760 IWBU allows precision-guided missiles or bombs to be deployed from inside the weapons bay; the previous aircraft carried these munitions externally on the wing hardpoints. This increases the number of guided weapons (Joint Direct Attack Munition or JDAM) a B-52 can carry and reduces the need for guided bombs to be carried on the wings. The first phase will allow a B-52 to carry twenty-four GBU-38 500-pound guided bombs or twenty GBU-31 2,000-pound bombs, with later phases accommodating the JASSM and MALD family of missiles. In addition to carrying more smart bombs, moving them internally from the wings reduces drag and achieves a 15 percent reduction in fuel consumption.
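The quoted 66 percent figure is consistent with simple counting, as the hypothetical sketch below illustrates; the 12-weapon external baseline is an assumption made only for the arithmetic and does not come from the text.

    # Hypothetical illustration of the payload arithmetic behind a "66 percent
    # increase"; the external baseline of 12 guided weapons is assumed for
    # illustration only.
    external_only = 12        # assumed pre-upgrade guided-weapon load (wing pylons)
    internal_added = 8        # JDAMs on the upgraded internal rotary launcher
    upgraded_total = external_only + internal_added

    increase = (upgraded_total - external_only) / external_only
    print(f"{increase:.0%}")  # prints 67%, in line with the quoted figure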
The US Air Force Research Lab is investigating defensive laser weapons for the B-52.
The B-52 is due to receive a range of upgrades alongside a planned engine retrofit. These upgrades aim to modernize the sensors and displays of the B-52. They include the new APG-79B4 Active electronically scanned array radar, replacing older mechanically scanned arrays, the streamlining of the nose and deletion of blisters housing the forward-looking infrared/electro-optical viewing system. In October 2022 Boeing released new images of what the upgrade would look like. The upgrades will also include improved communication systems, new pylons, new cockpit displays and the deletion of one crew station. The changes will carry the designation B-52J.
Design
Overview
The B-52 shared many technological similarities with the preceding B-47 Stratojet strategic bomber. The two aircraft used the same basic design, such as swept wings and podded jet engines, and the cabin included the crew ejection systems. On the B-52D, the pilots and electronic countermeasures (ECM) operator ejected upwards, while the lower deck crew ejected downwards; until the B-52G, the gunner had to jettison the tail gun to bail out. The tail gunner in early model B-52s was located in the traditional location in the tail of the plane, with both visual and radar gun laying systems; in later models, the gunner was moved to the front of the fuselage, with gun laying carried out by radar alone, much like the B-58 Hustler's tail gun system.
Structural fatigue was accelerated by at least a factor of eight in a low-altitude flight profile over that of high-altitude flying, requiring costly repairs to extend service life. In the early 1960s, the three-phase High Stress program was launched to counter structural fatigue, enrolling aircraft at 2,000 flying hours. Follow-up programs were conducted, such as a 2,000-hour service life extension to select airframes in 1966–1968, and the extensive Pacer Plank reskinning, completed in 1977. The wet wing introduced on G and H models was even more susceptible to fatigue, experiencing 60% more stress during a flight than the old wing. The wings were modified by 1964 under ECP 1050. This was followed by a fuselage skin and longeron replacement (ECP 1185) in 1966, and the B-52 Stability Augmentation and Flight Control program (ECP 1195) in 1967. Fuel leaks due to deteriorating Marman clamps continued to plague all variants of the B-52. To this end, all aircraft variants were subjected to Blue Band (1957), Hard Shell (1958), and finally QuickClip (1958) programs. The latter fitted safety straps that prevented catastrophic loss of fuel in case of clamp failure. The B-52's service ceiling is officially listed as , but operational experience shows this is difficult to reach when fully laden with bombs. According to one source: "The optimal altitude for a combat mission was around , because to exceed that height would rapidly degrade the plane's range."
In September 2006, the B-52 became one of the first US military aircraft to fly using alternative fuel. It took off from Edwards Air Force Base with a 50/50 blend of Fischer–Tropsch process (FT) synthetic fuel and conventional JP-8 jet fuel, which burned in two of the eight engines. On 15 December 2006, a B-52 took off from Edwards with the synthetic fuel powering all eight engines, the first time a USAF aircraft was entirely powered by the blend. The seven-hour flight was considered a success. This program is part of the Department of Defense Assured Fuel Initiative, which aimed to reduce crude oil usage and obtain half of its aviation fuel from alternative sources by 2016. On 8 August 2007, Air Force Secretary Michael Wynne certified the B-52H as fully approved to use the FT blend.
Flight controls
Because of the B-52's mission parameters, only modest maneuvers would be required with no need for spin recovery. The aircraft has a relatively small, narrow chord rudder, giving it limited yaw control authority. Originally an all-moving vertical stabilizer was to be used but was abandoned because of doubts about hydraulic actuator reliability. Because the aircraft has eight engines, asymmetrical thrust due to the loss of an engine in flight would be minimal and correctable with the narrow rudder. To assist with crosswind takeoffs and landings the main landing gear can be pivoted 20 degrees to either side from neutral. The crew would preset the yaw adjustable crosswind landing gear according to wind observations made on the ground.
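The wind-triangle arithmetic behind such a preset can be sketched as follows; this is a hypothetical illustration, and the drift-angle formula, the example wind and approach speed, and the clamp to the gear's 20-degree travel are assumptions for the sketch rather than USAF procedure.

    import math

    def crosswind_component(wind_speed_kt, wind_dir_deg, runway_heading_deg):
        # Component of the reported wind blowing across the runway, in knots.
        return wind_speed_kt * math.sin(math.radians(wind_dir_deg - runway_heading_deg))

    def crab_angle_deg(crosswind_kt, approach_speed_kt, gear_limit_deg=20.0):
        # Steady-state drift ("crab") angle needed to track the runway centreline,
        # capped at the landing gear's pivot range (up to 20 degrees on the B-52).
        crab = math.degrees(math.asin(crosswind_kt / approach_speed_kt))
        return max(-gear_limit_deg, min(gear_limit_deg, crab))

    # Example: a 25-knot wind 40 degrees off the runway heading, 140-knot approach.
    xwind = crosswind_component(25.0, 40.0, 0.0)   # about 16 kt of crosswind
    print(round(crab_angle_deg(xwind, 140.0), 1))  # about 6.6 degrees of crab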
Like the rudder, the elevator is also very narrow chord and the B-52 suffers from limited elevator control authority. For long-term pitch trim and airspeed changes the aircraft uses a stabilator (or all-moving tail) with the elevator used for small adjustments within a stabilizer setting. The stabilizer is adjustable through 13 degrees of movement (nine up, four down) and is crucial to operations during takeoff and landing due to large pitch changes induced by flap application.
B-52s prior to the G models had very small ailerons with a short span that was approximately equal to their chord. These "feeler ailerons" were used to provide feedback forces to the pilot's control yoke and to fine-tune the roll axes during delicate maneuvers such as aerial refueling. Due to twisting of the thin main wing, conventional outboard flap-type ailerons would lose authority and therefore could not be used. In other words, aileron activation would cause the wing to twist, undermining roll control. Six spoilerons on each wing are responsible for the majority of roll control. The late B-52G models eliminated the ailerons altogether and added an extra spoileron to each wing. Partly because of the lack of ailerons, the B-52G and H models were more susceptible to Dutch roll.
Avionics
Ongoing problems with avionics systems were addressed in the Jolly Well program, completed in 1964, which improved components of the AN/ASQ-38 bombing navigational computer and the terrain computer. The MADREC (Malfunction Detection and Recording) upgrade fitted to most aircraft by 1965 could detect failures in avionics and weapons computer systems and was essential in monitoring the AGM-28 Hound Dog missiles. The electronic countermeasures capability of the B-52 was expanded with Rivet Rambler (1971) and Rivet Ace (1973).
To improve operations at low altitudes, the AN/ASQ-151 Electro-Optical Viewing System (EVS), which consisted of a low light level television (LLLTV) and a forward looking infrared (FLIR) system mounted in blisters under the noses of B-52Gs and Hs, was fitted between 1972 and 1976. The navigational capabilities of the B-52 were later augmented with the addition of GPS in the 1980s. The IBM AP-101, also used on the Rockwell B-1 Lancer bomber and the Space Shuttle, was the B-52's main computer.
In 2007, the LITENING targeting pod was fitted, which increased the effectiveness of the aircraft in the attack of ground targets with a variety of standoff weapons, using laser guidance, a high-resolution forward-looking infrared sensor (FLIR), and a CCD camera used to obtain target imagery. LITENING pods have been fitted to a wide variety of other US aircraft, such as the McDonnell Douglas F/A-18 Hornet, the General Dynamics F-16 Fighting Falcon and the McDonnell Douglas AV-8B Harrier II.
Armament
The ability to carry up to 20 AGM-69 SRAM nuclear missiles was added to G and H models, starting in 1971. To further improve its offensive ability, air-launched cruise missiles (ALCMs) were fitted. After testing of both the USAF-backed Boeing AGM-86 Air Launched Cruise Missile and the Navy-backed General Dynamics AGM-109 Tomahawk, the AGM-86B was selected for operation by the B-52 (and ultimately by the B-1 Lancer). A total of 194 B-52Gs and Hs were modified to carry AGM-86s, carrying 12 missiles on underwing pylons, with 82 B-52Hs further modified to carry another eight missiles on a rotary launcher fitted in the bomb bay. To conform with SALT II Treaty requirements that cruise missile-capable aircraft be readily identifiable by reconnaissance satellites, the cruise missile-armed B-52Gs were modified with a distinctive wing root fairing. As all B-52Hs were assumed modified, no visual modification of these aircraft was required. In 1990, the stealthy AGM-129 ACM cruise missile entered service; although intended to replace the AGM-86, the high cost and the Cold War's end led to only 450 being produced; unlike the AGM-86, no conventional, non-nuclear version was built. The B-52 was to have been modified to utilize Northrop Grumman's AGM-137 TSSAM weapon; however, the missile was canceled due to development costs.
Those B-52Gs not converted into cruise missile carriers underwent a series of modifications to improve conventional bombing. They were fitted with a new Integrated Conventional Stores Management System (ICSMS) and new underwing pylons that could hold larger bombs or other stores than the existing external pylons could. Thirty B-52Gs were further modified to carry up to 12 AGM-84 Harpoon anti-ship missiles each, while 12 B-52Gs were fitted to carry the AGM-142 Have Nap stand-off air-to-ground missile. When the B-52G was retired in 1994, an urgent scheme was launched to restore an interim Harpoon and Have Nap capability, with four aircraft modified to carry Harpoon and four to carry Have Nap under the Rapid Eight program.
The Conventional Enhancement Modification (CEM) program gave the B-52H a more comprehensive conventional weapons capability, adding the modified underwing weapon pylons used by conventional-armed B-52Gs, Harpoon and Have Nap, and the capability to carry new-generation weapons including the Joint Direct Attack Munition (JDAM) and Wind Corrected Munitions Dispenser guided bombs, the AGM-154 glide bomb and the AGM-158 JASSM missile. The CEM program also introduced new radios, integrated Global Positioning System into the aircraft's navigation system, and replaced the under-nose FLIR with a more modern unit. Forty-seven B-52Hs were modified under the CEM program by 1996, with 19 more by the end of 1999.
By around 2010, U.S. Strategic Command stopped assigning B61 and B83 nuclear gravity bombs to B-52, and later listed only the B-2 as tasked with delivering strategic nuclear bombs in budget requests. Nuclear gravity bombs were removed from the B-52's capabilities because it is no longer considered survivable enough to penetrate modern air defenses, instead relying on nuclear cruise missiles and focusing on expanding its conventional strike role. The 2019 "Safety Rules for U.S. Strategic Bomber Aircraft" manual subsequently confirmed the removal of B61-7 and B83-1 gravity bombs from the B-52H's approved weapons configuration.
Starting in 2016, Boeing began upgrading the internal rotary launchers to the MIL-STD-1760 interface to enable the internal carriage of smart bombs, which previously could be carried only on the wings.
While the B-1 Lancer has a larger theoretical maximum payload of compared to the B-52's , the bombers are rarely able to carry their full loads. The most the B-52 carries is a full load of AGM-86Bs totaling . The B-1 has the internal weapons bay space to carry more GBU-31 JDAMs and JASSMs, but the B-52 upgraded with the conventional rotary launcher can carry more of other JDAM variants.
The AGM-183A Air-Launched Rapid Response (ARRW) hypersonic missile and the future Long Range Stand Off (LRSO) nuclear-armed air-launched cruise missile will join the B-52 inventory in the future.
Engines
The eight engines of the B-52 are paired in pods and suspended by four pylons beneath and forward of the wings' leading edge. The careful arrangement of the pylons also allowed them to work as wing fences and delay the onset of stall. The first two prototypes, XB-52 and YB-52, were both powered by experimental Pratt & Whitney YJ57-P-3 turbojet engines with of static thrust each.
The B-52A models were equipped with Pratt & Whitney J57-P-1W turbojets, providing a dry thrust of which could be increased for short periods to with water injection. The water was carried in a tank in the rear fuselage.
B-52B, C, D and E models were equipped with Pratt & Whitney J57-P-29W, J57-P-29WA, or J57-P-19W series engines all rated at . The B-52F and G models were powered by Pratt & Whitney J57-P-43WB turbojets, each rated at static thrust with water injection.
On 9 May 1961, the B-52H began to be delivered to the USAF with cleaner burning and quieter Pratt & Whitney TF33-P-3 turbofans with a maximum thrust of .
Engine retrofit
In a study for the USAF in the mid-1970s, Boeing investigated replacing the engines, changing to a new wing, and other improvements to upgrade B-52G/H aircraft as an alternative to the B-1A, then in development.
In 1996, Rolls-Royce and Boeing jointly proposed fitting each B-52 with four leased Rolls-Royce RB211 engines. This would have involved replacing the eight Pratt & Whitney TF33 engines (total thrust ) with four RB211-535E4 engines (total thrust ), which would increase range and reduce fuel consumption. However, a USAF analysis in 1997 concluded that Boeing's estimated savings of billion would not be realized and that reengining would instead cost billion more than keeping the existing engines, citing significant up-front procurement and re-tooling expenditure.
The USAF's 1997 rejection of reengining was subsequently disputed in a Defense Science Board (DSB) report in 2003. The DSB urged the USAF to re-engine the aircraft without delay, saying doing so would not only create significant cost savings but reduce greenhouse gas emissions and increase aircraft range and endurance; these conclusions were in line with the conclusions of a separate Congress-funded study conducted in 2003. Criticizing the USAF cost analysis, the DSB found that among other things, the USAF failed to account for the cost of aerial refueling; the DSB estimated that aerial refueling cost , whereas the USAF had failed to account for the cost of delivering the fuel and so had only priced fuel at .
On 23 April 2020, the USAF released its request for proposals for 608 commercial engines plus spares and support equipment, with the plan to award the contract in May 2021. This Commercial Engine Reengining Program (CERP) saw General Electric propose its CF34-10 and Passport turbofans, Pratt & Whitney its PW800, and Rolls-Royce its BR725, which would be designated F130 in USAF service. On 24 September 2021, the USAF selected the Rolls-Royce F130 as the winner and announced plans to purchase 650 engines (608 direct replacements and 42 spares), for billion.
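The engine total follows from the quoted figures: 608 replacement engines correspond to eight engines on each of 76 airframes (608 ÷ 8 = 76), and the 42 spares bring the purchase to 650 (608 + 42 = 650).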
Unlike the previous re-engining proposal, which also involved reducing the number of engines from eight to four, the F130 program retains eight engines on the B-52. Although four-engine operation would be more efficient, retrofitting the airframe to operate with only four engines would require additional changes to the aircraft's systems and control surfaces (particularly the rudder), increasing the time, cost, and complexity of the project. B-52Hs upgraded with Rolls-Royce F130 engines will be redesignated as "B-52Js".
Costs
Operational history
Introduction
Although the B-52A was the first production variant, these aircraft were used only in testing. The first operational version was the B-52B, which had been developed in parallel with the prototypes since 1951. First flying in December 1954, B-52B AF Serial Number 52-8711 entered operational service with the 93rd Heavy Bombardment Wing (93rd BW) at Castle Air Force Base, California, on 29 June 1955. The wing became operational on 12 March 1956. Training for B-52 crews consisted of five weeks of ground school and four weeks of flying, accumulating 35 to 50 hours in the air. The new B-52Bs replaced operational B-36s on a one-to-one basis.
Early operations were problematic; in addition to supply problems, there were also technical issues. Ramps and taxiways deteriorated under the aircraft's weight, the fuel system was prone to leaks and icing, and bombing and fire control computers were unreliable. The split-level cockpit presented a temperature control problem: the pilots' cockpit was heated by sunlight while the observer and the navigator on the bottom deck sat on the ice-cold floor. Thus, a comfortable temperature setting for the pilots caused the other crew members to freeze, while a comfortable temperature for the bottom crew caused the pilots to overheat. The J57 engines proved unreliable. Alternator failure caused the first fatal B-52 crash in February 1956; as a result, the fleet was briefly grounded. In July, fuel and hydraulic issues grounded the B-52s again. In response to maintenance issues, the USAF set up "Sky Speed" teams of 50 contractors at each B-52 base to perform maintenance and routine checkups, taking an average of one week per aircraft.
On 21 May 1956, a B-52B (52-13) dropped a Mk-15 nuclear bomb over the Bikini Atoll in a test code-named Cherokee. It was the first air-dropped thermonuclear weapon. This aircraft now is on display at the National Museum of Nuclear Science and History in Albuquerque, NM. From 24 to 25 November 1956, four B-52Bs of the 93rd BW and four B-52Cs of the 42nd BW flew nonstop around the perimeter of North America in Operation Quick Kick, which covered in 31 hours, 30 minutes. SAC noted the flight time could have been reduced by 5 to 6 hours had the four inflight refuelings been done by fast jet-powered tanker aircraft rather than propeller-driven Boeing KC-97 Stratofreighters. In a demonstration of the B-52's global reach, from 16 to 18 January 1957, three B-52Bs made a non-stop flight around the world during Operation Power Flite, during which was covered in 45 hours 19 minutes () with several in-flight refuelings by KC-97s.
The B-52 set many records over the next few years. On 26 September 1958, a B-52D set a world speed record of over a closed circuit without a payload. The same day, another B-52D established a world speed record of over a closed circuit without a payload. On 14 December 1960, a B-52G set a world distance record by flying unrefueled for ; the flight lasted 19 hours 44 minutes (). From 10 to 11 January 1962, a B-52H (60-40) flew unrefueled from Kadena Air Base, Okinawa Prefecture, Japan, to Torrejón Air Base, Spain, setting a world distance record that surpassed the prior B-52 record set two years earlier. The flight passed over Seattle, Fort Worth and the Azores.
Cold War
When the B-52 entered service, the Strategic Air Command (SAC) intended to use it to deter and counter the Soviet Union's vast and modernizing military. As the Soviet Union increased its nuclear capabilities, destroying or "countering" the forces that would deliver nuclear strikes (bombers, missiles, etc.) became of great strategic importance. The Eisenhower administration endorsed this shift in focus; in 1954 the President expressed a preference for military targets over civilian ones, a principle reinforced in the Single Integrated Operational Plan (SIOP), a plan of action in the case of nuclear war breaking out.
Throughout the Cold War, B-52s and other US strategic bombers performed airborne alert patrols under code names such as Head Start, Chrome Dome, Hard Head, Round Robin and Giant Lance. Bombers loitered at high altitudes near the borders of the Soviet Union to provide rapid first strike or retaliation capability in case of nuclear war. These airborne patrols formed one component of the US's nuclear deterrent, which would act to prevent the breakout of a large-scale war between the US and the Soviet Union under the concept of Mutually Assured Destruction.
Due to the late-1950s threat of surface-to-air missiles (SAMs) that could down high-altitude aircraft, demonstrated in practice by the 1960 U-2 incident, the intended role of the B-52 was changed to low-level penetration during a foreseen attack upon the Soviet Union, as terrain masking provided an effective method of avoiding radar and thus the SAM threat. The aircraft were planned to fly towards the target at and deliver their weapons from or lower. Although never intended for the low-level role, the B-52's flexibility allowed it to outlast several intended successors as the nature of aerial warfare changed. The B-52's large airframe enabled the addition of multiple design improvements, new equipment, and other adaptations over its service life.
In November 1959, to improve the aircraft's combat capabilities in the changing strategic environment, SAC initiated the Big Four modification program (also known as Modification 1000) for all operational B-52s except early B models. The program was completed by 1963. The four modifications were the ability to launch AGM-28 Hound Dog standoff nuclear missiles and ADM-20 Quail decoys, an advanced electronic countermeasures (ECM) suite, and upgrades to perform the all-weather, low-altitude (below 500 feet or 150 m) interdiction mission in the face of advancing Soviet missile-based air defenses.
In the 1960s, there were concerns over the fleet's remaining service life. Several projects intended to follow the B-52, including the Convair B-58 Hustler and North American XB-70 Valkyrie, had either been aborted or proved disappointing in light of changing requirements, leaving the older B-52 as the main bomber rather than its planned successors. On 19 February 1965, General Curtis E. LeMay testified to Congress that the lack of a follow-on bomber project to the B-52 raised the danger that "The B-52 is going to fall apart on us before we can get a replacement for it." Other aircraft, such as the General Dynamics F-111 Aardvark, later complemented the B-52 in roles for which it was less suited, such as high-speed, low-level penetration dashes.
Vietnam War
With the escalating situation in Southeast Asia, 28 B-52Fs were fitted with external racks for 24 bombs under project South Bay in June 1964; an additional 46 aircraft received similar modifications under project Sun Bath. In March 1965, the United States commenced Operation Rolling Thunder. The first combat mission, Operation Arc Light, was flown by B-52Fs on 18 June 1965, when 30 bombers of the 9th and 441st Bombardment Squadrons struck a communist stronghold near the Bến Cát District in South Vietnam. The first wave of bombers arrived too early at a designated rendezvous point, and while maneuvering to maintain station, two B-52s collided, resulting in the loss of both bombers and eight crewmen. The remaining bombers, minus one more that turned back due to mechanical problems, continued towards the target. Twenty-seven Stratofortresses bombed a target box from between , with a little more than 50% of the bombs falling within the target zone. The force returned to Andersen Air Force Base except for one bomber with electrical problems that recovered to Clark Air Base, the mission having lasted 13 hours. Post-strike assessment by teams of South Vietnamese troops with American advisors found evidence that the Viet Cong had departed from the area before the raid, and it was suspected that infiltrators within the South Vietnamese forces may have tipped off the North, as South Vietnamese Army troops were involved in the post-strike inspection.
Beginning in late 1965, a number of B-52Ds underwent Big Belly modifications to increase bomb capacity for carpet bombing. While the external payload remained at 24 bombs, the internal capacity increased from 27 to 84 for the smaller bombs, or from 27 to 42 for the larger ones. The modification created enough capacity for up to 108 bombs in total. Thus modified, B-52Ds could carry more than B-52Fs. Designed to replace B-52Fs, modified B-52Ds entered combat in April 1966, flying from Andersen Air Force Base, Guam. Each bombing mission lasted 10 to 12 hours and included an aerial refueling by KC-135 Stratotankers. In spring 1967, B-52s began flying from U-Tapao Airfield in Thailand, so that refueling was not required.
B-52s were employed during the Battle of Ia Drang in November 1965, notable as the aircraft's first use in a tactical support role.
On 22 November 1972, a B-52D (55-110) from U-Tapao was hit by a SAM while on a raid over Vinh. The crew was forced to abandon the damaged aircraft over Thailand. This was the first B-52 destroyed by hostile fire.
The zenith of B-52 attacks in Vietnam was Operation Linebacker II (also known as the Christmas bombings), conducted from 18 to 29 December 1972, which consisted of waves of B-52s (mostly D models, but some Gs without jamming equipment and with a smaller bomb load). Over 12 days, B-52s flew 729 sorties and dropped 15,237 tons of bombs on Hanoi, Haiphong, and other targets in North Vietnam. Originally 42 B-52s were committed to the war; however, numbers were frequently twice this figure. During Operation Linebacker II, fifteen B-52s were shot down, five were heavily damaged (one crashed in Laos), and five suffered medium damage. A total of 25 crewmen were killed in these losses. During the war, 31 B-52s were lost, including ten shot down over North Vietnam.
Air-to-air combat
During the Vietnam War, B-52D tail gunners were credited with shooting down two MiG-21 "Fishbeds". On 18 December 1972 tail gunner Staff Sergeant Samuel O. Turner's B-52 had just completed a bomb run for Operation Linebacker II and was turning away when a Vietnam People's Air Force (VPAF) MiG-21 approached. The MiG and the B-52 locked onto each other. When the fighter drew within range, Turner fired his quad (four guns on one mounting) .50 (12.7 mm) caliber machine guns. The MiG exploded aft of the bomber, as confirmed by Master Sergeant Louis E. Le Blanc, the tail gunner in a nearby Stratofortress. Turner received a Silver Star for his actions. His B-52, tail number 56-676, is preserved on display with air-to-air kill markings at Fairchild Air Force Base in Spokane, Washington.
On 24 December 1972, during the same bombing campaign, the B-52 Diamond Lil was headed to bomb the Thái Nguyên railroad yards when tail gunner Airman First Class Albert E. Moore spotted a fast-approaching MiG-21. Moore opened fire with his quad .50 caliber guns at , and kept shooting until the fighter disappeared from his scope. Technical Sergeant Clarence W. Chute, a tail gunner aboard another Stratofortress, watched the MiG catch fire and fall away; this was not confirmed by the VPAF. Diamond Lil is preserved on display at the United States Air Force Academy in Colorado. Moore was the last bomber gunner believed to have shot down an enemy aircraft with machine guns in aerial combat.
The two B-52 tail gunner kills were never confirmed by the VPAF, which admitted to the loss of only three MiGs, all to F-4s. Vietnamese sources have attributed a third air-to-air victory to a B-52, a MiG-21 shot down on 16 April 1972. These victories make the B-52 the largest aircraft credited with air-to-air kills. The last Arc Light mission without fighter escort took place on 15 August 1973, as U.S. military action in Southeast Asia was wound down.
Post-Vietnam War service
B-52Bs reached the end of their structural service life by the mid-1960s, and all were retired by June 1966, followed by the last of the B-52Cs on 29 September 1971. The exception was NASA's B-52B "008", which was eventually retired in 2004 at Edwards Air Force Base, California. Another of the remaining B models, "52-005", is on display at the Wings Over the Rockies Air and Space Museum in Denver, Colorado.
A few time-expired E models were retired in 1967 and 1968, but the bulk (82) were retired between May 1969 and March 1970. Most F models were also retired between 1967 and 1973, but 23 survived as trainers until late 1978. The fleet of D models served much longer; 80 D models were extensively overhauled under the Pacer Plank program during the mid-1970s. Skinning on the lower wing and fuselage was replaced, and various structural components were renewed. The D-model fleet stayed largely intact until late 1978, when 37 Ds that had not been upgraded were retired. The remainder were retired between 1982 and 1983.
The remaining G and H models were used for nuclear standby ("alert") duty as part of the United States' nuclear triad; the combination of nuclear-armed land-based missiles, submarine-based missiles, and manned bombers. The B-1, intended to supplant the B-52, replaced only the older models and the supersonic FB-111. In 1991, B-52s ceased continuous 24-hour SAC alert duty.
After Vietnam, the experience of operating in a hostile air defense environment was taken into account, and B-52s were modernized with new weapons, equipment, and both offensive and defensive avionics. This, combined with the use of low-level tactics, marked a major shift in the B-52's utility. The upgrades were:
Supersonic short-range nuclear missiles: G and H models were modified to carry up to 20 SRAM missiles replacing existing gravity bombs. Eight SRAMs were carried internally on a special rotary launcher and 12 SRAMs were mounted on two wing pylons. With SRAM, the B-52s could strike heavily defended targets without entering the terminal defenses.
New countermeasures: Phase VI ECM modification was the sixth major ECM program for the B-52. It improved the aircraft's self-protection capability in the dense Soviet air defense environment. The new equipment expanded signal coverage, improved threat warnings, provided new countermeasures techniques, and increased the quantity of expendables. The power requirements of Phase VI ECM also consumed most of the excess electrical capacity on the B-52G.
B-52Gs and Hs were also modified with an electro-optical viewing system (EVS) that made low-level operations and terrain avoidance much easier and safer. The EVS contained a low-light-level television (LLLTV) camera and a forward-looking infrared (FLIR) camera to display the information needed for penetration at lower altitudes.
Subsonic-cruise unarmed decoy: SCUD resembled the B-52 on radar. As an active decoy, it carried ECM and other devices, and it had a range of several hundred miles. Although SCUD was never deployed operationally, the concept was developed further, becoming the air-launched cruise missile (ALCM-A).
These modifications increased weight by nearly and decreased operational range by 8–11%. This was considered acceptable for the increase in capabilities.
After the fall of the Soviet Union, all B-52Gs remaining in service were destroyed in accordance with the terms of the Strategic Arms Reduction Treaty (START). The Aerospace Maintenance and Regeneration Center (AMARC) cut the 365 B-52s into pieces. Russia verified the completion of the destruction via satellite imagery and first-person inspection at the AMARC facility.
Gulf War and later
B-52 strikes were an important part of Operation Desert Storm. Starting on 16 January 1991, a flight of B-52Gs flew from Barksdale Air Force Base, Louisiana, refueled in the air en route, struck targets in Iraq, and returned home, a round-trip journey of 35 hours. It set a record for the longest-distance combat mission, breaking the record previously held by an RAF Vulcan bomber in 1982, which had relied on forward refueling. Those seven B-52s flew the first combat sorties of Operation Desert Storm, firing 35 AGM-86C CALCM standoff missiles and successfully destroying 85–95 percent of their targets. B-52Gs operating from the King Abdullah Air Base at Jeddah, Saudi Arabia, RAF Fairford in the United Kingdom, Morón Air Base, Spain, and the island of Diego Garcia in the British Indian Ocean Territory flew bombing missions over Iraq, initially at low altitude. After the first three nights, the B-52s moved to high-altitude missions instead, which reduced their effectiveness and psychological impact compared to the low-altitude role initially played.
The conventional strikes were carried out by three bombers, which dropped up to 153 M117 bombs over an area of . The bombings demoralized the defending Iraqi troops, many of whom surrendered in the wake of the strikes. In 1999, the science and technology magazine Popular Mechanics described the B-52's role in the conflict: "The Buff's value was made clear during the Gulf War and Desert Fox. The B-52 turned out the lights in Baghdad." During Operation Desert Storm, B-52s flew about 1,620 sorties and delivered 40% of the weapons dropped by coalition forces.
During the conflict, several claims of Iraqi air-to-air successes were made, including one by an Iraqi pilot, Khudai Hijab, who allegedly fired a Vympel R-27R missile from his MiG-29 and damaged a B-52G on the opening night of the Gulf War. However, the USAF disputes this claim, stating the bomber was actually hit by friendly fire: an AGM-88 High-speed Anti-Radiation Missile (HARM) that homed on the fire-control radar of the B-52's tail gun; the jet was subsequently nicknamed In HARM's Way. Shortly following this incident, General George Lee Butler announced that the gunner position on B-52 crews would be eliminated and the gun turrets permanently deactivated, commencing on 1 October 1991.
Since the mid-1990s, the B-52H has been the only variant remaining in military service; it is currently stationed at:
Minot Air Force Base, North Dakota – 5th Bomb Wing
Barksdale Air Force Base, Louisiana – 2nd Bomb Wing (active Air Force) and 307th Bomb Wing (Air Force Reserve Command)
One B-52H is assigned to Edwards Air Force Base and is used by Air Force Materiel Command at the USAF Flight Test Center.
One additional B-52H has been used by NASA at Dryden Flight Research Center (now Armstrong Flight Research Center), California as part of the Heavy-lift Airborne Launch program.
From 2 to 3 September 1996, two B-52Hs conducted a mission as part of Operation Desert Strike. The B-52s struck Baghdad power stations and communications facilities with 13 AGM-86C conventional air-launched cruise missiles (CALCM) during a 34-hour round-trip mission from Andersen Air Force Base, Guam, the longest distance ever flown for a combat mission.
On 24 March 1999, when Operation Allied Force began, B-52 bombers bombarded Serb targets throughout the Federal Republic of Yugoslavia, including during the Battle of Kosare.
The B-52 contributed to Operation Enduring Freedom in 2001 (Afghanistan/Southwest Asia), providing the ability to loiter high above the battlefield and provide Close Air Support (CAS) through the use of precision-guided munitions, a mission which previously would have been restricted to fighter and ground attack aircraft. In late 2001, ten B-52s dropped a third of the bomb tonnage in Afghanistan. B-52s also played a role in Operation Iraqi Freedom, which commenced on 20 March 2003 (Iraq/Southwest Asia). On the night of 21 March 2003, B-52Hs launched at least 100 AGM-86C CALCMs at targets within Iraq.
B-52 and maritime operations
The B-52 can be employed in ocean surveillance, anti-ship and mine-laying operations. For example, a pair of B-52s, in two hours, can monitor of the ocean surface. During the 2018 Baltops exercise, B-52s conducted mine-laying missions off the coast of Sweden, simulating a counter-amphibious invasion mission in the Baltic.
In the 1970s, the U.S. Navy worried that combined attacks from Soviet bombers, submarines, and warships could overwhelm its defenses and sink its aircraft carriers. After the Falklands War, US planners feared the damage that could be created by -range missiles carried by Tupolev Tu-22M "Backfire" bombers and -range missiles carried by Soviet surface ships. New US Navy maritime strategy in the early 1980s called for the aggressive use of carriers and surface action groups against the Soviet navy. To help protect the carrier battle groups, some B-52Gs were modified to fire Harpoon anti-ship missiles. These bombers were based in Guam and Maine in the later 1970s to support both the Atlantic and Pacific fleets. In case of war, B-52s would coordinate with tanker support and surveillance aircraft. B-52Gs could strike Soviet Navy targets on the flanks of the US carrier battle groups, leaving them free to concentrate on offensive strikes against Soviet surface combatants. Mines laid by B-52s could establish minefields in significant enemy chokepoints (mainly the Kuril Islands and the GIUK gap). These minefields would force the Soviet fleet to disperse, making individual ships more vulnerable to Harpoon attacks.
From the 1980s, B-52Hs were modified to use a wide range of cruise missiles, laser- and satellite-guided bombs, and unguided munitions. B-52 bomber crews honed sea-skimming flight profiles that would allow them to penetrate stiff enemy defenses and attack Soviet ships.
Recent expansion and modernization of the People's Liberation Army Navy of China has caused the USAF to re-implement strategies for finding and attacking ships. The B-52 fleet has been certified to use the Quickstrike family of naval mines using JDAM-ER guided wing kits. This weapon provides the ability to lay minefields over wide areas, in a single pass, with extreme accuracy, at a range of over . In addition, with a view to enhancing B-52 maritime patrol and strike performance, an AN/ASQ-236 Dragon's Eye underwing pod has also been certified for use by B-52H bombers. Dragon's Eye contains an advanced electronically scanned array radar that will allow B-52s to quickly scan vast Pacific Ocean areas. This radar will complement the Litening infrared targeting pod already used by B-52s for inspecting ships. In 2019, Boeing selected the Raytheon AN/APG-82(V)1 radar to replace the B-52's mechanically scanned AN/APQ-166 attack radar.
21st century service
In August 2007, a B-52H ferrying AGM-129 ACM cruise missiles from Minot Air Force Base to Barksdale Air Force Base for dismantling was mistakenly loaded with six missiles with their nuclear warheads. The weapons did not leave USAF custody and were secured at Barksdale.
Four of 18 B-52Hs from Barksdale Air Force Base were retired and were in the "boneyard" of 309th AMARG at Davis-Monthan Air Force Base as of 8 September 2008.
In February 2015, hull 61-0007 Ghost Rider became the first stored B-52 to return to service after six years in storage at Davis-Monthan Air Force Base.
In May 2019, a second aircraft was resurrected from long-term storage in Davis-Monthan. The B-52, nicknamed "Wise Guy", had been at AMARG since 2008. It flew to Barksdale Air Force Base on 13 May 2019. It was completed in four months by a team of 13–20 maintainers from the 307th Maintenance Squadron.
B-52s are periodically refurbished at USAF maintenance depots such as Tinker Air Force Base, Oklahoma. Even while the USAF works on the new Long Range Strike Bomber, it intends to keep the B-52H in service until 2050, which is 95 years after the B-52 first entered service, an unprecedented length of service for any aircraft, civilian or military.
The USAF continues to rely on the B-52 because it remains an effective and economical heavy bomber in the absence of sophisticated air defenses, particularly in the type of missions that have been conducted since the end of the Cold War against nations with limited defensive capabilities. The B-52 has also continued in service because there has been no reliable replacement. It has the capacity to "loiter" for extended periods and can deliver precision standoff and direct-fire munitions from a distance, in addition to direct bombing. It has been a valuable asset in supporting ground operations during conflicts such as Operation Iraqi Freedom. The B-52 had the highest mission capable rate of the three types of heavy bombers operated by the USAF in the 2000–2001 period: the B-1 averaged a 53.7% ready rate, the B-2 Spirit achieved 30.3%, and the B-52 averaged 80.5%. The B-52's cost per hour of flight is higher than the B-1B's, but lower than the B-2's.
The Long Range Strike Bomber program is intended to yield a stealthy successor for the B-52 and B-1 that would begin service in the 2020s; it is intended to produce 80 to 100 aircraft. Two competitors, Northrop Grumman and a joint team of Boeing and Lockheed Martin, submitted proposals in 2014; Northrop Grumman was awarded a contract in October 2015.
On 12 November 2015, B-52s began freedom of navigation operations in the South China Sea in response to Chinese artificial islands in the region. Chinese forces, claiming jurisdiction within a 12-mile exclusion zone around the islands, ordered the bombers to leave the area, but they refused, not recognizing the claimed jurisdiction. On 10 January 2016, a B-52 overflew parts of South Korea, escorted by South Korean F-15Ks and U.S. F-16s, in response to the supposed test of a hydrogen bomb by North Korea.
On 9 April 2016, an undisclosed number of B-52s arrived at Al Udeid Air Base in Qatar as part of Operation Inherent Resolve, the military intervention against ISIL. The B-52s took over heavy bombing after the B-1 Lancers that had been conducting airstrikes rotated out of the region in January 2016. In April 2016, B-52s also arrived in Afghanistan to take part in the war there, beginning operations in July and demonstrating the type's flexibility and precision in carrying out close air support missions.
According to a statement by the U.S. military, an undisclosed number of B-52s participated in the U.S. strikes on pro-government forces in eastern Syria on 7 February 2018.
A number of B-52s were deployed in airstrikes against the Taliban during the 2021 Taliban offensive.
In 2022, the US Air Force used a B-52 as a platform to test a Hypersonic Air-breathing Weapon Concept (HAWC) missile.
In late October 2022, ABC News reported that the USAF intended to deploy six B-52s at RAAF Tindal in Australia in the near future, which would include building facilities to handle the aircraft.
On 3 November 2024, CENTCOM confirmed an undisclosed number of B-52s from Minot Air Force Base's 5th Bomb Wing arrived in the Middle East.
On 8 December 2024, CENTCOM announced that B-52s, alongside undisclosed numbers of F-15E fighter aircraft and A-10 attack aircraft, had participated in a number of airstrikes against over seventy-five Islamic State targets within Syria, following the ousting of the al-Assad government in the country in the days prior.
Variants
The B-52 went through several design changes and variants over its 10 years of production.
XB-52
YB-52
B-52A
NB-52A
B-52B/RB-52B
NB-52B
B-52C
RB-52C
B-52D
B-52E
JB-52E
NB-52E
B-52F
B-52G
B-52H
B-52J
Operators
United States
The United States Air Force operates 72 of the original 744 B-52 aircraft as of 2022.
Air Combat Command
53rd Wing – Eglin Air Force Base, Florida
49th Test and Evaluation Squadron (Barksdale)
57th Wing – Nellis Air Force Base, Nevada
340th Weapons Squadron (Barksdale)
Air Force Global Strike Command
2d Bomb Wing – Barksdale Air Force Base, Louisiana
11th Bomb Squadron
20th Bomb Squadron
96th Bomb Squadron
5th Bomb Wing – Minot Air Force Base, North Dakota
23d Bomb Squadron
69th Bomb Squadron
Air Force Materiel Command
412th Test Wing – Edwards Air Force Base, California
419th Flight Test Squadron
Air Force Reserve Command
307th Bomb Wing – Barksdale Air Force Base, Louisiana
93d Bomb Squadron
343d Bomb Squadron
NASA
Dryden Flight Research Center
1 modified ex-USAF NB-52B (52-8) "Mothership" Launch Aircraft operated from 1966 to 2004. It was then put on display at the North entrance to Edwards Air Force Base.
1 modified ex-USAF B-52H (61-25) Heavy Lift Launch Aircraft operated from 2001 to 2008. On 9 May 2008, that aircraft was flown for the last time to Sheppard Air Force Base, Texas, where it became a GB-52H maintenance trainer, never to fly again.
Notable accidents
List of incidents resulting in loss of life, severe injuries, or loss of aircraft.
In 1956, there were three crashes in eight months, all at Castle Air Force Base.
The fourth crash occurred 42 days later on 10 January 1957 in New Brunswick, Canada.
On 29 March 1957, B-52C (54-2676), retained by Boeing and used for tests as a JB-52C, crashed during a Boeing test flight from Wichita, Kansas. Two of the four crew on board were killed.
On 11 February 1958, B-52D (56-0610) crashed short of the runway at Ellsworth AFB, South Dakota, due to total loss of power during final approach. Two of the eight crewmembers on board were killed, in addition to three ground personnel. The crash was attributed to frozen fuel lines that clogged fuel filters. It was previously unknown that jet fuel absorbs water vapor from the atmosphere. After this accident, over two hundred previous aircraft losses listed as "cause unknown" were attributed to frozen fuel lines.
On 8 September 1958, two B-52Ds (56‑0661 and 56‑0681) from the 92d Bombardment Wing collided in midair near Fairchild AFB. All thirteen crew members on the two aircraft were killed.
On 23 June 1959, B-52D (56‑0591), nicknamed "Tommy's Tigator", operating out of Larson AFB, crashed in the Ochoco National Forest near Burns, Oregon. The aircraft was operated by Boeing personnel during a test flight and crashed after turbulence-induced failure in the horizontal stabilizer at a low elevation. All five Boeing personnel were killed.
On 15 October 1959, B-52F (57‑0036) from the 4228th Strategic Wing at Columbus AFB, Mississippi, carrying two nuclear weapons collided in midair with a KC-135 tanker (57-1513) near Hardinsburg, Kentucky during a mid-air refueling. Four of the eight crew members on the bomber and all four crew on the tanker were killed. One of the nuclear bombs was damaged by fire, but both weapons were recovered.
On 15 December 1960, B-52D (55‑0098) from the 4170th Strategic Wing collided with a KC-135 during mid-air refueling. The refueling probe from the KC-135 pierced the skin on the wing of the B-52. Upon landing at Larson AFB, the starboard wing failed, and the aircraft caught fire during the landing roll. The runway at Larson was damaged. All crew members were evacuated. The KC-135 landed at Fairchild AFB.
On 19 January 1961, B-52B (53-0390), call sign "Felon 22", from the 95th Bombardment Wing out of Biggs AFB, El Paso, Texas, crashed just north of Monticello, Utah, after a turbulence-induced structural failure in which the tail snapped off at altitude. Only the copilot survived, after ejecting; the other seven crewmen died.
On 24 January 1961, B-52G (58‑0187) from the 4241st Strategic Wing broke up in midair and crashed on approach to Seymour Johnson AFB near Goldsboro, North Carolina, dropping two nuclear bombs in the process without detonation. The aircraft suffered a fuel leak at altitude due to fatigue failure of the starboard wing. A loss of control resulted when the flaps were applied during the emergency approach to Seymour Johnson AFB. Three of the eight crew members were killed.
On 14 March 1961, B-52F (57‑0166) of the 4134th Strategic Wing operating out of Mather AFB, California, carrying two nuclear weapons experienced an uncontrolled decompression, necessitating a descent to to lower the cabin altitude. Due to increased fuel consumption at the lower altitude and being unable to rendezvous with a tanker in time, the aircraft ran out of fuel. The crew ejected safely, while the now-unmanned bomber crashed west of Yuba City, California.
On 7 April 1961, B-52B (53-0380), nicknamed "Ciudad Juarez", from the 95th Bombardment Wing out of Biggs AFB, was accidentally shot down by an AIM-9 Sidewinder launched from an F-100A Super Sabre (53-1662) of the New Mexico Air National Guard during a practice intercept maneuver. The missile struck the engine pylon on the B-52, resulting in separation of the wing. The aircraft crashed on Mount Taylor, New Mexico, with three of the eight crew on board killed. An electrical fault in the firing circuit caused the inadvertent launch of the missile.
On 24 January 1963, B-52C (53-0406) with nine crew members on board lost its vertical stabilizer due to buffeting stresses during turbulence at low altitude and crashed on Elephant Mountain in Piscataquis County, Maine, United States, six miles (9.7 km) from Greenville. Of the nine-man crew, only the pilot and the navigator survived the accident.
On 13 January 1964, the vertical stabilizer broke off B-52D (55‑0060), callsign "Buzz 14", causing a crash on Savage Mountain in western Maryland. Excessive turbulence resulted in structural failure in a winter storm. The two MK53 nuclear bombs being ferried were found "relatively intact". Four of the crew of five ejected but two of them died due to exposure from the winter cold.
On 18 June 1965, two B-52Fs (57-0047 and 57-0179) collided in midair during a refueling maneuver at above the South China Sea. The head-on collision took place just northwest of the Luzon Peninsula, Philippines, in the night sky above Super Typhoon Dinah, a category 5 storm with maximum winds of and waves reported as high as . Both aircraft were from the same squadron (441st Bombardment Squadron) of the 7th Bombardment Wing, Carswell AFB, Texas, and were assigned to the 3960th Strategic Wing operating out of Andersen AFB, Guam. Eight of the twelve crew members aboard the two aircraft were killed. The rescue of four crew members who had managed to eject, only to parachute into one of the largest typhoons of the 20th century, remains one of the most remarkable survival stories in the history of aviation. The collision occurred on the B-52's first combat mission: the two jets were part of a 30-plane deployment on the inaugural Operation Arc Light mission to a military target about northwest of Saigon, South Vietnam.
On 17 January 1966, a fatal collision occurred between a B-52G (58‑0256) from 68th Bombardment Wing out of Seymour Johnson AFB and a KC-135 Stratotanker (61-0273) over Palomares, Almería, Spain, killing all four on the tanker and three of the seven on the B-52G. The two unexploded B-28 FI 1.45-megaton-range nuclear bombs on the B-52 were eventually recovered; the conventional explosives of two more bombs detonated on impact, with serious dispersion of both plutonium and uranium, but without triggering a nuclear explosion. After the crash, of contaminated soil was sent to the United States. In 2006, an agreement was made between the United States and Spain to investigate and clean the pollution still remaining as a result of the accident.
On 16 October 1984, a B-52G out of Fairchild AFB, Spokane, Washington, crashed on Hunts Mesa, in the Monument Valley Navajo Tribal Park. Five of the seven crew members were able to eject and survived the crash. Sergeant David Felix and Colonel William Ivy were killed.
On 24 June 1994, B-52H Czar 52, 61-0026, crashed at Fairchild AFB, Washington, during practice for an airshow. All four crew members died in the accident.
On 21 July 2008, a B-52H, Raidr 21, 60-0053, deployed from Barksdale AFB, Louisiana, to Andersen AFB, Guam, crashed approximately off the coast of Guam. All six crew members were killed (five standard crew members and a flight surgeon).
Aircraft on display
Specifications (B-52H)
Notable appearances in media
A B-52 carrying nuclear weapons was a key part of Stanley Kubrick's 1964 black comedy film Dr. Strangelove or: How I Learned to Stop Worrying and Love the Bomb. A 1960s hairstyle, the beehive, is also called a B-52 for its resemblance to the aircraft's distinctive nose. The popular band the B-52's was subsequently named after this hairstyle.
| Technology | Specific aircraft | null |
18933196 | https://en.wikipedia.org/wiki/Bismuth | Bismuth | Bismuth is a chemical element with the symbol Bi and atomic number 83. It is a post-transition metal and one of the pnictogens, with chemical properties resembling its lighter group 15 siblings arsenic and antimony. Elemental bismuth occurs naturally, and its sulfide and oxide forms are important commercial ores. The free element is 86% as dense as lead. It is a brittle metal with a silvery-white color when freshly produced. Surface oxidation generally gives samples of the metal a somewhat rosy cast. Further oxidation under heat can give bismuth a vividly iridescent appearance due to thin-film interference. Bismuth is both the most diamagnetic element and one of the least thermally conductive metals known.
Bismuth used to be considered the element with the highest atomic mass whose nuclei do not spontaneously decay. However, in 2003 it was discovered to be extremely weakly radioactive. The metal's only primordial isotope, bismuth-209, undergoes alpha decay with a half-life about a billion times the estimated age of the universe.
Bismuth metal has been known since ancient times. Before modern analytical methods bismuth's metallurgical similarities to lead and tin often led it to be confused with those metals. The etymology of "bismuth" is uncertain. The name may come from mid-sixteenth century Neo-Latin translations of the German words or , meaning 'white mass', which were rendered as or .
Bismuth compounds account for about half the global production of bismuth. They are used in cosmetics; pigments; and a few pharmaceuticals, notably bismuth subsalicylate, used to treat diarrhea. Bismuth's unusual propensity to expand as it solidifies is responsible for some of its uses, as in the casting of printing type. Bismuth, when in its elemental form, has unusually low toxicity for a heavy metal. As the toxicity of lead and the cost of its environmental remediation became more apparent during the 20th century, suitable bismuth alloys have gained popularity as replacements for lead. Presently, around a third of global bismuth production is dedicated to needs formerly met by lead.
History and etymology
Bismuth metal has been known since ancient times and it was one of the first 10 metals to have been discovered. The name bismuth dates to around 1665 and is of uncertain etymology. The name possibly comes from obsolete German , , (early 16th century), perhaps related to Old High German ("white"). The Neo-Latin (coined by Georgius Agricola, who Latinized many German mining and technical words) is from the German , itself perhaps from , meaning "white mass".
The element was confused in early times with tin and lead because of its resemblance to those elements. Because bismuth has been known since ancient times, no one person is credited with its discovery. Agricola (1546) states that bismuth is a distinct metal in a family of metals including tin and lead. This was based on observation of the metals and their physical properties.
Miners in the age of alchemy also gave bismuth the name , or "silver being made" in the sense of silver still in the process of being formed within the Earth.
Bismuth was also known to the Incas and used (along with the usual copper and tin) in a special bronze alloy for knives.
Beginning with Johann Heinrich Pott in 1738, Carl Wilhelm Scheele, and Torbern Olof Bergman, the distinctness of lead and bismuth became clear, and Claude François Geoffroy demonstrated in 1753 that this metal is distinct from lead and tin.
Characteristics
Physical characteristics
Bismuth is a brittle metal with a dark, silver-pink hue, often with an iridescent oxide tarnish showing many colors from yellow to blue. The spiral, stair-stepped structure of bismuth crystals is the result of a higher growth rate around the outside edges than on the inside edges. The variations in the thickness of the oxide layer that forms on the surface of the crystal cause different wavelengths of light to interfere upon reflection, thus displaying a rainbow of colors. When burned in oxygen, bismuth burns with a blue flame and its oxide forms yellow fumes. Its toxicity is much lower than that of its neighbors in the periodic table, such as lead and antimony.
No other metal is verified to be more naturally diamagnetic than bismuth. (Superdiamagnetism is a different physical phenomenon.) Of any metal, it has one of the lowest values of thermal conductivity (after manganese, neptunium and plutonium) and the highest Hall coefficient. It has a high electrical resistivity. When deposited in sufficiently thin layers on a substrate, bismuth is a semiconductor, despite being a post-transition metal. Elemental bismuth is denser in the liquid phase than the solid, a characteristic it shares with germanium, silicon, gallium, and water. Bismuth expands 3.32% on solidification; therefore, it was long a component of low-melting typesetting alloys, where it compensated for the contraction of the other alloying components to form almost isostatic bismuth-lead eutectic alloys.
Though virtually unseen in nature, high-purity bismuth can form distinctive, colorful hopper crystals. It is relatively nontoxic and has a low melting point just above , so crystals may be grown using a household stove, although the resulting crystals will tend to be of lower quality than lab-grown crystals.
At ambient conditions, bismuth shares the same layered structure as the metallic forms of arsenic and antimony, crystallizing in the rhombohedral lattice. When compressed at room temperature, this Bi–I structure changes first to the monoclinic Bi-II at 2.55 GPa, then to the tetragonal Bi-III at 2.7 GPa, and finally to the body-centered cubic Bi-V at 7.7 GPa. The corresponding transitions can be monitored via changes in electrical conductivity; they are rather reproducible and abrupt and are therefore used for calibration of high-pressure equipment.
Chemical characteristics
Bismuth is stable to both dry and moist air at ordinary temperatures. When red-hot, it reacts with water to make bismuth(III) oxide.
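At red heat, the overall reaction with steam can be written as:
2 Bi + 3 H2O -> Bi2O3 + 3 H2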
It reacts with fluorine to form bismuth(V) fluoride at or bismuth(III) fluoride at lower temperatures (typically from Bi melts); with other halogens it yields only bismuth(III) halides. The trihalides are corrosive and easily react with moisture, forming oxyhalides with the formula BiOX (X = F, Cl, Br, I).
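These halogenation reactions, and the hydrolysis of the trihalides to oxyhalides, can be summarized as:
2 Bi + 5 F2 -> 2 BiF5
2 Bi + 3 X2 -> 2 BiX3 (X = F, Cl, Br, I)
BiX3 + H2O -> BiOX + 2 HX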
Bismuth dissolves in concentrated sulfuric acid to make bismuth(III) sulfate and sulfur dioxide.
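The reaction with hot concentrated sulfuric acid can be written as:
6 H2SO4 + 2 Bi -> Bi2(SO4)3 + 3 SO2 + 6 H2O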
It reacts with nitric acid to make bismuth(III) nitrate (which decomposes into nitrogen dioxide when heated).
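The corresponding reaction with concentrated nitric acid is:
Bi + 6 HNO3 -> Bi(NO3)3 + 3 NO2 + 3 H2O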
It also dissolves in hydrochloric acid, but only with oxygen present.
Isotopes
The only primordial isotope of bismuth, bismuth-209, was regarded as the heaviest stable nuclide, but it had long been suspected to be unstable on theoretical grounds. This was finally demonstrated in 2003, when researchers at the Institut d'astrophysique spatiale in Orsay, France, measured the alpha (α) decay half-life of Bi to be (3 Bq/Mg), over times longer than the estimated age of the universe. Due to its extraordinarily long half-life, for all known medical and industrial applications, bismuth can be treated as stable. The radioactivity is of academic interest because bismuth is one of a few elements whose radioactivity was suspected and theoretically predicted before being detected in the laboratory. Bismuth has the longest known α-decay half-life, though tellurium-128 has a double beta decay half-life of over .
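As a rough consistency check on the quoted activity (taking the published half-life of about 2×10^19 years, a figure not given above and used here only for illustration), the specific activity follows from A = (ln 2 / T½) × (N_A / M):
T½ ≈ 2×10^19 yr ≈ 6.3×10^26 s
λ = ln 2 / T½ ≈ 1.1×10^-27 s^-1
atoms per megagram = (10^6 g / 209 g·mol^-1) × 6.022×10^23 mol^-1 ≈ 2.9×10^27
A = λ × N ≈ 3 Bq/Mg, in line with the value quoted above.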
Six isotopes of bismuth with short half-lives (210–215 inclusive) occur in the natural radioactive decay chains of actinium, radium, thorium, and neptunium; and more have been synthesized. (Though all primordial Np has long since decayed, it is continually regenerated by (n,2n) knockout reactions on natural U.)
Commercially, bismuth-213 can be produced by bombarding radium with bremsstrahlung photons from a linear particle accelerator. In 1997, an antibody conjugate with bismuth-213, which has a 45-minute half-life and α-decays, was used to treat leukemia patients. This isotope has also been tried in cancer treatment, for example, in the targeted alpha therapy (TAT) program.
Chemical compounds
Chemically, bismuth resembles arsenic and antimony, but is much less toxic. In almost all known compounds, bismuth has oxidation state +3; a few have states +5 or −3.
The trioxide and trisulfide can both be made from the elements, although the trioxide is extremely corrosive at high temperatures. The pentoxide is not stable at room temperature, and evolves oxygen gas if heated. Both oxides form complex anions, and NaBiO3 is a strong oxidising agent. The trisulfide is common in bismuth ore.
Similarly, bismuth forms all possible trihalides, but the only pentahalide is BiF5. All are Lewis acids. Bismuth also forms several halides in which it is formally in the +1 oxidation state (BiI); these are complex salts with unusually structured polyatomic cations and anions.
In strongly acidic aqueous solution, the Bi ion solvates to form . As pH increases, the cations polymerize, ultimately forming the octahedral bismuthyl complex , often abbreviated BiO+. Although bismuth oxychloride and bismuth oxynitrate have stoichiometries suggesting the ion, they are double salts instead. Bismuth nitrate (not oxynitrate) is one of the few aqueous-insoluble nitrate salts.
Bismuth forms very few stable bismuthides, intermetallic compounds in which it attains oxidation state −3. The hydride spontaneously decomposes at room temperature and stabilizes only below . Sodium bismuthide has interest as a topological Dirac insulator.
Occurrence and production
The reported abundance of bismuth in the Earth's crust varies significantly by source, from 180 ppb (similar to that of silver) to 8 ppb (twice as common as gold). The most important ores of bismuth are bismuthinite and bismite. Native bismuth is known from Australia, Bolivia, and China.
According to the United States Geological Survey (USGS), 10,200 tonnes of bismuth were produced worldwide by mining and 17,100 tonnes by refining in 2016. Since then, USGS does not provide mining data for bismuth, considering them unreliable. Globally, bismuth is mostly produced by refining, as a byproduct of extraction of other metals such as lead, copper, tin, molybdenum and tungsten, though the refining-to-mining ratio depends on the country.
Bismuth travels in crude lead bullion (which can contain up to 10% bismuth) through several stages of refining, until it is removed by the Kroll-Betterton process, which separates the impurities as slag, or by the electrolytic Betts process. Bismuth behaves similarly during the refining of another of its major source metals, copper. The raw bismuth metal from both processes still contains considerable amounts of other metals, foremost lead. By reacting the molten mixture with chlorine gas, the metals are converted to their chlorides while bismuth remains unchanged. Impurities can also be removed by various other methods, for example with fluxes and treatments, yielding high-purity bismuth metal (over 99% Bi).
Price
The price for pure bismuth metal was relatively stable through most of the 20th century, except for a spike in the 1970s. Bismuth has always been produced mainly as a byproduct of lead refining, and thus the price usually reflected the cost of recovery and the balance between production and demand.
Before World War II, demand for bismuth was small and mainly pharmaceutical—bismuth compounds were used to treat such conditions as digestive disorders, sexually transmitted diseases and burns. Minor amounts of bismuth metal were consumed in fusible alloys for fire sprinkler systems and fuse wire. During World War II bismuth was considered a strategic material, used for solders, fusible alloys, medications and atomic research. To stabilize the market, the producers set the price at $1.25 per pound ($2.75 /kg) during the war and at $2.25 per pound ($4.96 /kg) from 1950 until 1964.
In the early 1970s, the price rose rapidly due to increasing demand for bismuth as a metallurgical additive to aluminium, iron and steel. This was followed by a decline owing to increased world production, stabilized consumption, and the recessions of 1980 and 1981–1982. In 1984, the price began to climb as consumption increased worldwide, especially in the United States and Japan. In the early 1990s, research began on the evaluation of bismuth as a nontoxic replacement for lead in ceramic glazes, fishing sinkers, food-processing equipment, free-machining brasses for plumbing applications, lubricating greases, and shot for waterfowl hunting. Growth in these areas remained slow during the middle 1990s, in spite of the backing of lead replacement by the United States federal government, but intensified around 2005. This resulted in a rapid and continuing increase in price.
Recycling
Most bismuth is produced as a byproduct of other metal-extraction processes including the smelting of lead, and also of tungsten and copper. Its sustainability is dependent on increased recycling, which is problematic.
It was once believed that bismuth could be practically recycled from the soldered joints in electronic equipment. Recent efficiencies in solder application in electronics mean there is substantially less solder deposited, and thus less to recycle. While recovering the silver from silver-bearing solder may remain economic, recovering bismuth is substantially less so.
Dispersed bismuth is used in certain stomach medicines (bismuth subsalicylate), paints (bismuth vanadate), pearlescent cosmetics (bismuth oxychloride), and bismuth-containing bullets. Recycling bismuth from these uses is impractical.
Applications
Bismuth has few commercial applications, and those applications that use it generally require small quantities relative to other raw materials. In the United States, for example, 733 tonnes of bismuth were consumed in 2016, of which 70% went into chemicals (including pharmaceuticals, pigments, and cosmetics) and 11% into bismuth alloys.
In the early 1990s, researchers began to evaluate bismuth as a nontoxic replacement for lead in various applications.
Medicines
Bismuth is an ingredient in some pharmaceuticals, although the use of some of these substances is declining.
Bismuth subsalicylate is used to treat diarrhea; it is the active ingredient in such "pink bismuth" preparations as Pepto-Bismol, as well as the 2004 reformulation of Kaopectate. It is also used to treat some other gastrointestinal diseases, such as shigellosis and cadmium poisoning. The mechanism of action of this substance is still not well documented, although an oligodynamic effect (the toxic effect of small doses of heavy-metal ions on microbes) may be involved in at least some cases. Salicylic acid from hydrolysis of the compound is antimicrobial for toxigenic E. coli, an important pathogen in traveler's diarrhea.
A combination of bismuth subsalicylate and bismuth subcitrate is used to treat the bacteria causing peptic ulcers.
Bibrocathol is an organic bismuth-containing compound used to treat eye infections.
Bismuth subgallate, the active ingredient in Devrom, is used as an internal deodorant to treat malodor from flatulence and feces.
Bismuth compounds (including sodium bismuth tartrate) were formerly used to treat syphilis. Arsenic combined with either bismuth or mercury was a mainstay of syphilis treatment from the 1920s until the advent of penicillin in 1943.
"Milk of bismuth" (an aqueous suspension of bismuth hydroxide and bismuth subcarbonate) was marketed as an alimentary cure-all in the early 20th century, and has been used to treat gastrointestinal disorders.
Bismuth subnitrate (Bi5O(OH)9(NO3)4) and bismuth subcarbonate (Bi2O2(CO3)) are also used in medicine.
Cosmetics and pigments
Bismuth oxychloride (BiOCl) is sometimes used in cosmetics, as a pigment in paint for eye shadows, hair sprays and nail polishes. This compound is found as the mineral bismoclite and in crystal form contains layers of atoms (see figure above) that refract light chromatically, resulting in an iridescent appearance similar to nacre of pearl. It was used as a cosmetic in ancient Egypt and in many places since. Bismuth white (also "Spanish white") can refer to either bismuth oxychloride or bismuth oxynitrate (BiONO3), when used as a white pigment. Bismuth vanadate is used as a light-stable non-reactive paint pigment (particularly for artists' paints), often as a replacement for the more toxic cadmium sulfide yellow and orange-yellow pigments. The most common variety in artists' paints is a lemon yellow, visually indistinguishable from its cadmium-containing alternative.
Metal and alloys
Bismuth is used in alloys with other metals such as tin and lead. Wood's metal, an alloy of bismuth, lead, tin, and cadmium, is used in automatic fire sprinkler systems. Bismuth forms the largest part (50%) of Rose's metal, a fusible alloy that also contains 25–28% lead and 22–25% tin. It was also used to make bismuth bronze, which was used during the Bronze Age and has been found in Inca knives at Machu Picchu.
Lead replacement
The density difference between lead (11.32 g/cm3) and bismuth (9.78 g/cm3) is small enough that bismuth can substitute for lead in many ballistics and weighting applications. For example, it can replace lead as a dense material in fishing sinkers. It has been used as a replacement for lead in shot, bullets, and less-lethal riot gun ammunition. The Netherlands, Denmark, England, Wales, the United States, and many other countries now prohibit the use of lead shot for hunting wetland birds, because the birds are prone to lead poisoning after mistakenly ingesting lead shot in place of the small stones and grit they swallow to aid digestion; some countries, such as the Netherlands, prohibit lead shot for all hunting. Bismuth-tin alloy shot is one alternative that provides similar ballistic performance to lead.
Bismuth, as a dense element of high atomic weight, is used in bismuth-impregnated latex shields that block X-rays during medical examinations such as CT scans, mainly because it is considered non-toxic.
The European Union's Restriction of Hazardous Substances Directive (RoHS), aimed at reducing lead use, has broadened bismuth's use in electronics as a component of low-melting-point solders that replace traditional tin-lead solders. Its low toxicity is especially important for solders used in food-processing equipment and copper water pipes, although it can also be used in other applications, including the automobile industry in the European Union.
Bismuth has been evaluated as a replacement for lead in free-machining brasses for plumbing applications, although it does not equal the performance of leaded steels.
Other metal uses and specialty alloys
Many bismuth alloys have low melting points and are found in specialty applications such as solders. Many automatic sprinklers, electric fuses, and safety devices in fire detection and suppression systems contain the eutectic In19.1-Cd5.3-Pb22.6-Sn8.3-Bi44.7 alloy, which melts at a conveniently low temperature that is unlikely to be reached in normal living conditions. Other low-melting alloys, such as Bi-Cd-Pb-Sn alloy, are also used in the automotive and aviation industries. Before a thin-walled metal part is deformed, it is filled with a melt or covered with a thin layer of the alloy to reduce the chance of breaking; the alloy is then removed by submerging the part in boiling water.
Bismuth is used to make free-machining steels and free-machining aluminium alloys for precision machining. It has a similar effect to lead, improving chip breaking during machining. The shrinkage of lead on solidification and the expansion of bismuth compensate for each other, so lead and bismuth are often used in similar quantities. Similarly, alloys containing comparable parts of bismuth and lead exhibit a very small change (on the order of 0.01%) upon melting, solidification, or aging. Such alloys are used in high-precision casting, e.g. in dentistry, to create models and molds. Bismuth is also used as an alloying agent in the production of malleable irons and as a thermocouple material.
Bismuth is also used in aluminium-silicon cast alloys to refine silicon morphology, although it has a poisoning effect on strontium modification. Some bismuth alloys, such as Bi35-Pb37-Sn25, are combined with non-sticking materials such as mica, glass, and enamels because they easily wet these materials, allowing joints to be made to other parts. Adding bismuth to caesium enhances the quantum yield of caesium cathodes. Sintering bismuth and manganese powders produces a permanent magnet and magnetostrictive material, which is used in ultrasonic generators and receivers working in the 10–100 kHz range and in magnetic and holographic memory devices.
Other uses as compounds
Bismuth is included in BSCCO (bismuth strontium calcium copper oxide), which is a group of similar superconducting compounds discovered in 1988 that exhibit the highest superconducting transition temperatures.
Bismuth telluride is a semiconductor and an excellent thermoelectric material. Bi2Te3 diodes are used in mobile refrigerators, CPU coolers, and as detectors in infrared spectrophotometers.
Bismuth oxide, in its delta form, is a solid electrolyte for oxygen. This form normally breaks down below a high-temperature threshold, but can be electrodeposited well below this temperature in a highly alkaline solution.
Bismuth germanate is a scintillator, widely used in X-ray and gamma ray detectors.
Bismuth vanadate is an opaque yellow pigment used by some artists' oil, acrylic, and watercolor paint companies, primarily as a replacement for the more toxic cadmium sulfide yellows in the greenish-yellow (lemon) to orange-toned yellow range. It performs practically identically to the cadmium pigments in terms of resistance to degradation from UV exposure, opacity, tinting strength, and lack of reactivity when mixed with other pigments. The variety most commonly used by artists' paint makers is lemon in color. In addition to replacing several cadmium yellows, it also serves as a non-toxic visual replacement for the older chromate pigments made with zinc, lead, and strontium. If a green pigment and barium sulfate (for increased transparency) are added, it can also serve as a replacement for barium chromate, which has a more greenish cast than the other chromates. In comparison with the lead chromates, it does not blacken on exposure to hydrogen sulfide in the air (a process accelerated by UV exposure) and has a notably brighter color, especially compared with the lemon lead chromate, which is the most translucent, dull, and fastest to blacken owing to the higher percentage of lead sulfate required to produce that shade. Bismuth vanadate is also used, on a limited basis due to its cost, as a vehicle paint pigment.
A catalyst for making acrylic fibers.
As an electrocatalyst in the conversion of CO2 to CO.
Ingredient in lubricating greases.
In crackling microstars (dragon's eggs) in pyrotechnics, as the oxide, subcarbonate or subnitrate.
As a catalyst for the fluorination of arylboronic pinacol esters through a Bi(III)/Bi(V) catalytic cycle, mimicking transition metals in electrophilic fluorination.
Toxicology and ecotoxicology
Emacs
Emacs, originally named EMACS (an acronym for "Editor Macros"), is a family of text editors that are characterized by their extensibility. The manual for the most widely used variant, GNU Emacs, describes it as "the extensible, customizable, self-documenting, real-time display editor". Development of the first Emacs began in the mid-1970s, and work on GNU Emacs, directly descended from the original, is ongoing.
Emacs has over 10,000 built-in commands and its user interface allows the user to combine these commands into macros to automate work. Implementations of Emacs typically feature a dialect of the Lisp programming language, allowing users and developers to write new commands and applications for the editor. Extensions have been written to, among other things, manage files, remote access, e-mail, outlines, multimedia, Git integration, RSS feeds, and collaborative editing, as well as implementations of ELIZA, Pong, Conway's Life, Snake, Dunnet, and Tetris.
The original EMACS was written in 1976 by David A. Moon and Guy L. Steele Jr. as a set of macros for the TECO editor. It was inspired by the ideas of the TECO-macro editors TECMAC and TMACS.
The most popular, and most ported, version of Emacs is GNU Emacs, which was created by Richard Stallman for the GNU Project. XEmacs is a variant that branched from GNU Emacs in 1991. GNU Emacs and XEmacs use similar Lisp dialects and are, for the most part, compatible with each other. XEmacs development is inactive.
GNU Emacs is, along with vi, one of the two main contenders in the traditional editor wars of Unix culture. GNU Emacs is among the oldest free and open source projects still under development.
History
Emacs development began during the 1970s at the MIT AI Lab, whose PDP-6 and PDP-10 computers used the Incompatible Timesharing System (ITS) operating system that featured a default line editor known as Tape Editor and Corrector (TECO). Unlike most modern text editors, TECO used separate modes in which the user would either add text, edit existing text, or display the document. One could not place characters directly into a document by typing them into TECO, but would instead enter a character ('i') in the TECO command language telling it to switch to input mode, enter the required characters, during which time the edited text was not displayed on the screen, and finally enter a character (<esc>) to switch the editor back to command mode. (A similar technique was used to allow overtyping.) This behavior is similar to that of the program ed.
By the 1970s, TECO was already an old program, initially released in 1962. Richard Stallman visited the Stanford AI Lab in 1976 and saw the lab's E editor, written by Fred Wright. He was impressed by the editor's intuitive WYSIWYG (What You See Is What You Get) behavior, which has since become the default behavior of most modern text editors. He returned to MIT where Carl Mikkelsen, a hacker at the AI Lab, had added to TECO a combined display/editing mode called Control-R that allowed the screen display to be updated each time the user entered a keystroke. Stallman reimplemented this mode to run efficiently and then added a macro feature to the TECO display-editing mode that allowed the user to redefine any keystroke to run a TECO program.
E had another feature that TECO lacked: random-access editing. TECO was a page-sequential editor that was designed for editing paper tape on the PDP-1 at a time when computer memory was generally small due to cost, and it was a feature of TECO that allowed editing on only one page at a time sequentially in the order of the pages in the file. Instead of adopting E's approach of structuring the file for page-random access on disk, Stallman modified TECO to handle large buffers more efficiently and changed its file-management method to read, edit, and write the entire file as a single buffer. Almost all modern editors use this approach.
The new version of TECO quickly became popular at the AI Lab and soon accumulated a large collection of custom macros whose names often ended in MAC or MACS, which stood for macro. Two years later, Guy Steele took on the project of unifying the diverse macros into a single set. Steele and Stallman's finished implementation included facilities for extending and documenting the new macro set. The resulting system was called EMACS, which stood for Editing MACroS or, alternatively, E with MACroS. Stallman picked the name Emacs "because <E> was not in use as an abbreviation on ITS at the time." An apocryphal hacker koan alleges that the program was named after Emack & Bolio's, a popular Boston ice cream store. The first operational EMACS system existed in late 1976.
Stallman saw a problem in too much customization and de facto forking, and set certain conditions for use of the editor.
The original Emacs, like TECO, ran only on the PDP-10 running ITS. Its behavior was sufficiently different from that of TECO that it could be considered a text editor in its own right, and it quickly became the standard editing program on ITS. Mike McMahon ported Emacs from ITS to the TENEX and TOPS-20 operating systems. Other contributors to early versions of Emacs include Kent Pitman, Earl Killian, and Eugene Ciccarelli. By 1979, Emacs was the main editor used in MIT's AI lab and its Laboratory for Computer Science.
Implementations
Early implementations
In the following years, programmers wrote a variety of Emacs-like editors for other computer systems. These included EINE (EINE Is Not EMACS) and ZWEI (ZWEI Was EINE Initially), which were written for the Lisp machine by Mike McMahon and Daniel Weinreb, and Sine (Sine Is Not Eine), which was written by Owen Theodore Anderson. Weinreb's EINE was the first Emacs written in Lisp. In 1978, Bernard Greenberg wrote Multics Emacs almost entirely in Multics Lisp at Honeywell's Cambridge Information Systems Lab. Multics Emacs was later maintained by Richard Soley, who went on to develop the NILE Emacs-like editor for the NIL Project, and by Barry Margolin. Many versions of Emacs, including GNU Emacs, would later adopt Lisp as an extension language.
James Gosling, who would later invent NeWS and the Java programming language, wrote Gosling Emacs in 1981. The first Emacs-like editor to run on Unix, Gosling Emacs was written in C and used Mocklisp, a language with Lisp-like syntax, as an extension language.
Early advertisements for Computer Corporation of America's CCA EMACS, written by Steve Zimmerman, appeared in 1984. When GNU Emacs was released in 1985, comparisons between the two noted that CCA EMACS cost $2,400 while GNU Emacs was free.
GNU Emacs
Richard Stallman began work on GNU Emacs in 1984 to produce a free software alternative to the proprietary Gosling Emacs. GNU Emacs was initially based on Gosling Emacs, but Stallman's replacement of its Mocklisp interpreter with a true Lisp interpreter required that nearly all of its code be rewritten. This became the first program released by the nascent GNU Project. GNU Emacs is written in C and provides Emacs Lisp, also implemented in C, as an extension language. Version 13, the first public release, was made on March 20, 1985. The first widely distributed version of GNU Emacs was version 15.34, released later in 1985. Early versions of GNU Emacs were numbered as 1.x.x, with the initial digit denoting the version of the C core. The 1 was dropped after version 1.12, as it was thought that the major number would never change, and thus the numbering skipped from 1 to 13. In September 2014, it was announced on the GNU emacs-devel mailing list that GNU Emacs would adopt a rapid release strategy and version numbers would increment more quickly in the future.
GNU Emacs offered more features than Gosling Emacs, in particular a full-featured Lisp as its extension language, and soon replaced Gosling Emacs as the de facto Unix Emacs editor. Markus Hess exploited a security flaw in GNU Emacs' email subsystem in his 1986 cracking spree in which he gained superuser access to Unix computers.
Most of GNU Emacs functionality is implemented through a scripting language called Emacs Lisp. Because about 70% of GNU Emacs is written in the Emacs Lisp extension language, one only needs to port the C core which implements the Emacs Lisp interpreter. This makes porting Emacs to a new platform considerably less difficult than porting an equivalent project consisting of native code only.
GNU Emacs development was relatively closed until 1999 and was used as an example of the Cathedral development style in The Cathedral and the Bazaar. The project has since adopted a public development mailing list and anonymous CVS access. Development took place in a single CVS trunk until 2008 and was then switched to the Bazaar DVCS. On November 11, 2014, development was moved to Git.
Richard Stallman has remained the principal maintainer of GNU Emacs, but he has stepped back from the role at times. Stefan Monnier and Chong Yidong were maintainers from 2008 to 2015. John Wiegley was named maintainer in 2015 after a meeting with Stallman at MIT. As of early 2014, GNU Emacs has had 579 individual committers throughout its history.
XEmacs
Lucid Emacs, based on an early alpha version of GNU Emacs 19, was developed beginning in 1991 by Jamie Zawinski and others at Lucid Inc. One of the best-known early forks in free software development occurred when the codebases of the two Emacs versions diverged and the separate development teams ceased efforts to merge them back into a single program. Lucid Emacs has since been renamed XEmacs. Its development is currently inactive, with the most recent stable version 21.4.22 released in January 2009 (while a beta was released in 2013), while GNU Emacs has implemented many formerly XEmacs-only features.
Other forks of GNU Emacs
Other notable forks include:
Aquamacs – based on GNU Emacs (Aquamacs 3.2 is based on GNU Emacs version 24 and Aquamacs 3.3 is based on GNU Emacs version 25) which focuses on integrating with the Apple Macintosh user interface
Meadow – a Japanese version for Microsoft Windows
Various Emacs editors
In the past, projects aimed at producing small versions of Emacs proliferated. GNU Emacs was initially targeted at computers with a 32-bit flat address space and at least 1 MiB of RAM. Such computers were high end workstations and minicomputers in the 1980s, and this left a need for smaller reimplementations that would run on common personal computer hardware. Today's computers have more than enough power and capacity to eliminate these restrictions, but small clones have more recently been designed to fit on software installation disks or for use on less capable hardware.
Other projects aim to implement Emacs in a different dialect of Lisp or a different programming language altogether. Although not all are still actively maintained, these clones include:
MicroEMACS, which was originally written by Dave Conroy and further developed by Daniel Lawrence and which exists in many variations.
mg, originally called MicroGNUEmacs and, later, mg2a, a public-domain offshoot of MicroEMACS intended to more closely resemble GNU Emacs. Now installed by default on OpenBSD.
JOVE (Jonathan's Own Version of Emacs), Jonathan Payne's non-programmable Emacs implementation for UNIX-like systems.
MINCE (MINCE Is Not Complete Emacs), a version for CP/M and later DOS, from Mark of the Unicorn. MINCE evolved into Final Word, which eventually became the Borland Sprint word processor.
Perfect Writer, a CP/M implementation derived from MINCE that was included circa 1982 as the default word processor with the very earliest releases of the Kaypro II and Kaypro IV. It was later provided with the Kaypro 10 as an alternative to WordStar.
Freemacs, a DOS version that uses an extension language based on text macro expansion and fits within the original 64 KiB flat memory limit.
Zmacs, for the MIT Lisp Machine and its descendants, implemented in ZetaLisp.
Epsilon, an Emacs clone by Lugaru Software. Versions for DOS, Windows, Linux, FreeBSD, Mac OS X and OS/2 are bundled in the release. It uses a non-Lisp extension language with C syntax and used a very early concurrent command shell buffer implementation under the single-tasking MS-DOS.
PceEmacs is the Emacs-based editor for SWI-Prolog.
Hemlock, originally written in Spice Lisp, then Common Lisp. A part of CMU Common Lisp. Influenced by Zmacs. Later forked by the Lucid Common Lisp (as Helix), LispWorks, and Clozure CL projects. There is also a Portable Hemlock project, which aims to provide a Hemlock that runs on several Common Lisp implementations.
edwin, an Emacs-like text editor included with MIT/GNU Scheme.
Editors with Emacs emulation
The Cocoa text system uses some of the same terminology and understands many Emacs navigation bindings. This is possible because the native UI uses the Command key (equivalent to Super) instead of the Control key.
Eclipse (IDE) provides a set of Emacs keybindings.
Epsilon (text editor) Defaults to Emacs emulation and supports a vi mode.
GNOME Builder has an emulation mode for Emacs.
GNU Readline is a line editor that understands the standard Emacs navigation keybindings. It also has a vi emulation mode.
IntelliJ IDEA provides a set of Emacs keybindings.
JED has an emulation mode for Emacs.
Joe's Own Editor emulates Emacs keybindings when invoked as jmacs.
MATLAB provides Emacs keybindings for its editor.
Multi-Edit provides Emacs keybindings for its editor.
KornShell has an Emacs line editing mode that predates GNU Readline.
Visual Studio Code has multiple extensions available to emulate Emacs keybindings.
Oracle SQL Developer can save and load alternative keyboard-shortcut layouts. One of the built-in layouts provides Emacs-like keybindings, using different commands where necessary to approximate the corresponding behavior.
Features
Emacs is primarily a text editor and is designed for manipulating pieces of text, although it is capable of formatting and printing documents like a word processor by interfacing with external programs such as LaTeX, Ghostscript or a web browser. Emacs provides commands to manipulate and differentially display semantic units of text such as words, sentences, paragraphs and source code constructs such as functions. It also features keyboard macros for performing user-defined batches of editing commands.
GNU Emacs is a real-time display editor, as its edits are displayed onscreen as they occur. This is standard behavior for modern text editors, but EMACS was among the earliest to implement it. The alternative is having to issue a distinct command to display text (e.g. before or after modifying it). This was common in earlier (or merely simpler) line and context editors, such as QED (BTS, CTSS, Multics), ed (Unix), ED (CP/M), and Edlin (DOS).
General architecture
Almost all of the functionality in Emacs, including basic editing operations such as the insertion of characters into a file, is achieved through functions written in a dialect of the Lisp programming language. The dialect used in GNU Emacs is known as Emacs Lisp (Elisp), and was developed expressly to port Emacs to GNU and Unix. The Emacs Lisp layer sits atop a stable core of basic services and platform abstraction written in the C programming language, which enables GNU Emacs to be ported to a wide variety of operating systems and architectures without modifying the implementation semantics of the Lisp system where most of the editor lives. In this Lisp environment, variables and functions can be modified with no need to rebuild or restart Emacs, with even newly redefined versions of core editor features being asynchronously compiled and loaded into the live environment to replace existing definitions. Modern GNU Emacs features both bytecode and native code compilation for Emacs Lisp.
All configuration is stored in variables, classes, and data structures, and changed by simply updating these live. The use of a Lisp dialect in this case is a key advantage, as Lisp syntax consists of so-called symbolic expressions (or sexprs), which can act as both evaluatable code expressions and as a data serialisation format akin to, but simpler and more general than, well known ones such as XML, JSON, and YAML. In this way there is little difference in practice between customising existing features and writing new ones, both of which are accomplished in the same basic way. This is operatively different from most modern extensible editors, for instance such as VS Code, in which separate languages are used to implement the interface and features of the editor and to encode its user-defined configuration and options. The goal of Emacs' open design is to transparently expose Emacs' internals to the Emacs user during normal use in the same way that they would be exposed to the Emacs developer working on the git tree, and to collapse as much as possible of the distinction between using Emacs and programming Emacs, while still providing a stable, practical, and responsive editing environment for novice users.
Interactive data
The main text editing data structure is the buffer, a memory region containing data (usually text) with associated attributes. The most important of these are:
The point: the editing cursor;
The mark: a settable location which, along with the point, enables selection of
The region: a conceptually contiguous collection of text to which editing commands will be applied;
The name and inode of the file the buffer is visiting (if any);
The default directory, where any OS-level commands will be executed from by default;
The buffer's modes, including a major mode and possibly several minor modes;
The buffer encoding, the method by which Emacs represents buffer data to the user;
and a variety of buffer local variables and Emacs Lisp state.
Modes, in particular, are an important concept in Emacs, providing a mechanism to disaggregate Emacs' functionality into sets of behaviours and keybinds relevant to specific buffers' data. Major modes provide a general package of functions and commands relevant to a buffer's data and the way users might be interacting with it (e.g. editing source code in a specific language, editing hex, viewing the filesystem, interacting with git, etc.), and minor modes define subsidiary collections of functionality applicable across many major modes (such as auto-save-mode). Minor modes can be toggled on or off both locally to each buffer as well as globally across all buffers, while major modes can only be toggled per-buffer. Any other data relevant to a buffer but not bundled into a mode can be handled by simply focussing that buffer and live modifying the relevant data directly.
Any interaction with the editor (like key presses or clicking a mouse button) is realized by evaluating Emacs Lisp code, typically a command, which is a function explicitly designed for interactive use. Keys can be arbitrarily redefined and commands can also be accessed by name; some commands evaluate arbitrary Emacs Lisp code provided by the user in various ways (e.g. a family of eval- functions, operating on the buffer, region, or individual expression). Even the simplest user inputs (such as printable characters) are effectuated as Emacs Lisp functions, such as self-insert-command, bound by default to most keyboard keys in a typical text editing buffer, which parameterises itself with the locale-defined character associated with the key used to call it.
For example, pressing the f key in a buffer that accepts text input evaluates the code (self-insert-command 1 ?f), which inserts one copy of the character constant ?f at point. The 1, in this case, is determined by what Emacs terms the universal argument: all Emacs command code accepts a numeric value which, in its simplest usage, indicates repetition of an action, but in more complex cases (where repetition doesn't make sense) can yield other behaviours. These arguments may be supplied via command prefixes such as C-u 4 (or, more compactly, M-4, which expands to the same prefix). When no prefix is supplied, the universal argument is 1: every command implicitly runs once, but may be called multiple times, or in a different way, when supplied with such a prefix. Such arguments may also be non-positive where it makes sense for them to be so; it is up to the function accepting the argument to determine, according to its own semantics, what a given number means to it. One common usage is for functions to perform actions in reverse simply by checking the sign of the universal argument, such as a sort command that sorts in forward order by default and in reverse when called with a negative argument, using the absolute value of its argument as the sorting key (e.g. -7 sorting in reverse by column index (or delimiter) 7), or undo/redo, which are simply negatives of each other (traversing forward and backward through a recursive history of diffs by some number of steps at a time).
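As a rough illustration of how a command can consult the universal argument, the following Emacs Lisp sketch defines a hypothetical command (demo-dashes is not a built-in) that inserts dashes when given a positive argument and deletes characters when given a negative one:

    (defun demo-dashes (n)
      "Insert N dashes; with a negative argument, delete that many characters."
      (interactive "p")                 ; "p" receives the numeric prefix argument, defaulting to 1
      (if (>= n 0)
          (insert (make-string n ?-))   ; e.g. C-u 4 inserts "----"
        (delete-char n)))               ; a negative argument deletes before point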
Command language
Because of its relatively large vocabulary of commands, Emacs features a long-established command language to concisely express the keystrokes necessary to perform an action. This command language recognises the following shift and modifier keys: Control, Alt, Shift, Meta, Super, and Hyper. Not all of these may be present on an IBM-style keyboard, though they can usually be configured as desired. They are represented in command language by the respective prefixes C-, A-, S-, M-, s-, and H-. Keys whose names are only printable with more than one character are enclosed in angle brackets. Thus, a keyboard shortcut such as Ctrl+Alt+Shift+F9 (check dependent formulas and calculate all cells in all open workbooks in Excel) would be rendered in Emacs command language as C-A-S-<f9>, while an Emacs shortcut for incremental file search by filename-matching regexp would be expressed as M-s f C-M-s. Command language is also used to express the actions needed to invoke commands with no assigned shortcut: for example, the command scratch-buffer (which initialises a buffer in memory for temporary text storage and manipulation), when invoked by the user, will be reported back as M-x scra <return>, with Emacs scanning the namespace of contextually available commands to return the shortest sequence of keystrokes that uniquely identifies it.
Dynamic display
Because Emacs predates modern standard terminology for graphical user interfaces, it uses somewhat divergent names for familiar interface elements. Buffers, the data that Emacs users interact with, are displayed to the user inside windows, which are tiled portions of the terminal screen or of the GUI window, which Emacs calls a frame; in modern terminology, an Emacs frame would be a window and an Emacs window would be a split. Depending on configuration, windows can include their own scroll bars, line numbers, sometimes a 'header line' (typically to ease navigation), and a mode line at the bottom (usually displaying the buffer name, active modes, and point position, among other things). The bottom of every frame is used for output messages (the 'echo area') and text input for commands (the 'minibuffer').
In general, Emacs display elements (windows, frames, etc.) do not belong to any specific data or process. Buffers are not associated with windows, and multiple windows can be opened onto the same buffer, for example to track different parts of a long text side-by-side without scrolling back and forth, and multiple buffers can share the same text, for example to take advantage of different major modes in a mixed-language file. Similarly, Emacs instances are not associated with particular frames, and multiple frames can be opened displaying a single running Emacs process, e.g. a frame per screen in a multi-monitor setup, or a terminal frame connected via ssh from a remote system and a graphical frame displaying the same Emacs process via the local system's monitor.
Just as buffers don't require windows, running Emacs processes do not require any frames, and one common usage pattern is to deploy Emacs as an editing server: running it as a headless daemon and connecting to it via a frame-spawning client. This server can then be made available in any situation where an editor is required, simply by declaring the client program to be the user's EDITOR or VISUAL variable. Such a server continues to run in the background, managing any child processes, accumulating stdin from open pipes, ports, or fifos, performing periodic or pre-programmed actions, and remembering buffer undo history, saved text snippets, command history, and other user state between editing sessions. In this mode of operation, Emacs overlaps the functionality of programs like screen and tmux.
Because of its separation of display concerns from editing functionality, Emacs can display roughly similarly on any device more complex than a dumb terminal, including providing typical graphical WIMP elements on sufficiently featureful text terminals - though graphical frames are the preferred mode of display, providing a strict superset of the features of text terminal frames.
Customizability and extensibility
User actions can be recorded into macros and replayed to automate complex, repetitive tasks. This is often done on an ad-hoc basis, with each macro discarded after use, although macros can be saved and invoked later.
Because of the uniformity of Emacs' features' definition in terms of Emacs Lisp, what counts as a "user action" for the purposes of macro-automation is flexible: macros may include, e.g., keypresses, commands, mouse clicks, other macros, and anything that can be effectuated via these. Macros can thus be recursive, and can be defined and invoked inside of macros.
At startup, Emacs executes the Emacs Lisp script ~/.emacs (recent versions also look for ~/.emacs.el, ~/.emacs.d/init.el, and ~/.config/emacs/init.el). An early-init.el file, if present, is read first, and it can be used to configure or short-circuit core Emacs features before they load, such as the graphical display system or package manager. Emacs then executes the first init file that it finds, ignoring the rest. This personal customization file can be arbitrarily long and complex, but typical content includes:
Setting global variables or invoking functions to customize Emacs behaviour (for example, see the sketch after this list)
Key bindings to override standard ones and to add shortcuts for commands that the user finds convenient but that don't have a key binding by default (an example appears in the sketch after this list)
Loading, enabling and initializing extensions (Emacs comes with many extensions, but only a few are loaded by default.)
Configuring event hooks to run arbitrary code at specific times, for example to automatically recompile source code after saving a buffer
Executing arbitrary files, usually to split an overly long configuration file into manageable and homogeneous parts (personal scripts of this kind are traditionally kept in the user's Emacs configuration directory)
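A minimal sketch of such an init file is shown below; the specific settings, keybinding, extension, and hook are illustrative choices rather than Emacs defaults, and the extra-config.el file name is hypothetical.

    ;; Set global variables or call functions to customize behaviour.
    (setq inhibit-startup-screen t)                 ; skip the welcome screen
    (setq-default indent-tabs-mode nil)             ; indent with spaces rather than tabs

    ;; Override or add key bindings.
    (global-set-key (kbd "C-x C-b") 'ibuffer)       ; use ibuffer for the buffer list

    ;; Load, enable, and initialize a bundled extension.
    (require 'ido)
    (ido-mode 1)

    ;; Configure an event hook.
    (add-hook 'before-save-hook 'delete-trailing-whitespace)

    ;; Split the configuration by executing another file, if present.
    (load "~/.emacs.d/extra-config.el" t)           ; hypothetical personal script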
The customize extension allows the user to set configuration properties such as the color scheme interactively, from within Emacs, in a more user-friendly way than by setting variables in the init file: it offers search, descriptions and help text, multiple choice inputs, reverting to defaults, modification of the running Emacs instance without reloading, and other conveniences similar to the preferences functionality of other programs. The customized values are saved automatically in the init file (or another designated file).
Themes, affecting the choice of fonts and colours, are defined as Emacs Lisp files and chosen through the customize extension.
Modes, which support editing a range of programming languages (e.g., emacs-lisp-mode, c-mode, java-mode, ESS for R) by changing fonts to highlight the code and by modifying keybindings (forward-function vs. forward-page). Other modes include ones that support editing spreadsheets (dismal) and structured text.
Self-documenting
The first Emacs contained a help library that included documentation for every command, variable and internal function. Because of this, Emacs proponents described the software as self-documenting in that it presents the user with information on its normal features and its current state. Each function includes a documentation string that is displayed to the user on request, a practice that subsequently spread to programming languages including Lisp, Java, Perl, and Python. This help system can take users to the actual code for each function, whether from a built-in library or an added third-party library.
Emacs also has a built-in tutorial. Emacs displays instructions for performing simple editing commands and invoking the tutorial when it is launched with no file to edit. The tutorial is by Stuart Cracraft and Richard Stallman.
Culture
Church of Emacs
The Church of Emacs, formed by Richard Stallman, is a parody religion created for Emacs users. While it refers to vi as the editor of the beast (vi-vi-vi being 6-6-6 in Roman numerals), it does not oppose the use of vi; rather, it calls proprietary software anathema. ("Using a free version of vi is not a sin but a penance.") The Church of Emacs has its own newsgroup, alt.religion.emacs, that has posts purporting to support this parody religion. Supporters of vi have created an opposing Cult of vi.
Stallman has jokingly referred to himself as St IGNUcius, a saint in the Church of Emacs. This is a reference to Ignatius of Antioch, an early Church father venerated in Christianity.
Terminology
The word emacs is sometimes pluralized as emacsen, by phonetic analogy with boxen and VAXen, referring to different varieties of Emacs.
File format
A file format is a standard way that information is encoded for storage in a computer file. It specifies how bits are used to encode information in a digital storage medium. File formats may be either proprietary or free.
Some file formats are designed for very particular types of data: PNG files, for example, store bitmapped images using lossless data compression. Other file formats, however, are designed for storage of several different types of data: the Ogg format can act as a container for different types of multimedia including any combination of audio and video, with or without text (such as subtitles), and metadata. A text file can contain any stream of characters, including possible control characters, and is encoded in one of various character encoding schemes. Some file formats, such as HTML, scalable vector graphics, and the source code of computer software are text files with defined syntaxes that allow them to be used for specific purposes.
Specifications
File formats often have a published specification describing the encoding method and enabling testing of the intended functionality of programs that read and write the format. Not all formats have freely available specification documents, partly because some developers view their specification documents as trade secrets, and partly because other developers never author a formal specification document, letting the precedent set by existing programs that use the format define it through how those programs use it.
If the developer of a format does not publish free specifications, another developer looking to utilize that kind of file must either reverse engineer the file to find out how to read it or acquire the specification document from the format's developers for a fee and by signing a non-disclosure agreement. The latter approach is possible only when a formal specification document exists. Both strategies require significant time, money, or both; therefore, file formats with publicly available specifications tend to be supported by more programs.
Patents
Patent law, rather than copyright, is more often used to protect a file format. Although patents for file formats are not directly permitted under US law, some formats encode data using patented algorithms. For example, prior to 2004, using compression with the GIF file format required the use of a patented algorithm, and though the patent owner did not initially enforce their patent, they later began collecting royalty fees. This has resulted in a significant decrease in the use of GIFs, and is partly responsible for the development of the alternative PNG format. However, the GIF patent expired in the US in mid-2003, and worldwide in mid-2004.
Identifying file type
Different operating systems have traditionally taken different approaches to determining a particular file's format, with each approach having its own advantages and disadvantages. Most modern operating systems and individual applications need to use all of the following approaches to read "foreign" file formats, if not work with them completely.
Filename extension
One popular method used by many operating systems, including Windows, macOS, CP/M, DOS, VMS, and VM/CMS, is to determine the format of a file based on the end of its name, more specifically the letters following the final period. This portion of the filename is known as the filename extension. For example, HTML documents are identified by names that end with .html (or .htm), and GIF images by .gif. In the original FAT file system, file names were limited to an eight-character identifier and a three-character extension, known as an 8.3 filename. There are a limited number of three-letter extensions, which can cause a given extension to be used by more than one program. Many formats still use three-character extensions even though modern operating systems and application programs no longer have this limitation. Since there is no standard list of extensions, more than one format can use the same extension, which can confuse both the operating system and users.
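A minimal sketch of extension-based identification in Python; the mapping below is purely illustrative, since real systems consult much larger registries:

    from pathlib import Path

    # Illustrative extension-to-format table.
    EXTENSION_MAP = {
        ".html": "HTML document",
        ".htm": "HTML document",
        ".gif": "GIF image",
        ".txt": "plain text",
    }

    def guess_by_extension(filename: str) -> str:
        ext = Path(filename).suffix.lower()      # the letters following the final period
        return EXTENSION_MAP.get(ext, "unknown format")

    print(guess_by_extension("logo.gif"))        # GIF image
    print(guess_by_extension("page.html"))       # HTML document
    print(guess_by_extension("README"))          # unknown format (no extension)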
One artifact of this approach is that the system can easily be tricked into treating a file as a different format simply by renaming it; an HTML file can, for instance, be treated as plain text by renaming it from a name ending in .html to one ending in .txt. Although this strategy was useful to expert users who could easily understand and manipulate this information, it was often confusing to less technical users, who could accidentally make a file unusable (or "lose" it) by renaming it incorrectly.
This led most versions of Windows and Mac OS to hide the extension when listing files. This prevents the user from accidentally changing the file type, and allows expert users to turn this feature off and display the extensions.
Hiding the extension, however, can create the appearance of two or more identical filenames in the same folder. For example, a company logo may be needed both in a print-oriented format (for publishing) and in .png format (for web sites). With the extensions visible, the two files have distinct names; with the extensions hidden, both appear under the same name, which can lead to confusion.
Hiding extensions can also pose a security risk. For example, a malicious user could create an executable program with an innocent-looking name ending in ".jpg.exe". The ".exe" would be hidden and an unsuspecting user would see a name ending in ".jpg", which would appear to be a JPEG image, usually unable to harm the machine. However, the operating system would still see the ".exe" extension and run the program, which would then be able to cause harm to the computer. The same is true with files with only one extension: as it is not shown to the user, no information about the file can be deduced without explicitly investigating the file. To further trick users, it is possible to store an icon inside the program, in which case some operating systems' icon assignment for the executable file would be overridden with an icon commonly used to represent JPEG images, making the program look like an image. Extensions can also be spoofed: some Microsoft Word macro viruses create a Word file in template format and save it with a .doc extension. Since Word generally ignores extensions and looks at the format of the file, these would open as templates, execute, and spread the virus. This represents a practical problem for Windows systems where extension-hiding is turned on by default.
Internal metadata
A second way to identify a file format is to use information regarding the format stored inside the file itself, either information meant for this purpose or binary strings that happen to always be in specific locations in files of some formats. Since the easiest place to locate them is at the beginning, that area is usually called a file header when it is greater than a few bytes, or a magic number if it is just a few bytes long.
File header
The metadata contained in a file header are usually stored at the start of the file, but might be present in other areas too, often including the end, depending on the file format or the type of data contained. Character-based (text) files usually have character-based headers, whereas binary formats usually have binary headers, although this is not a rule. Text-based file headers usually take up more space, but being human-readable, they can easily be examined by using simple software such as a text editor or a hexadecimal editor.
As well as identifying the file format, file headers may contain metadata about the file and its contents. For example, most image files store information about image format, size, resolution and color space, and optionally authoring information such as who made the image, when and where it was made, what camera model and photographic settings were used (Exif), and so on. Such metadata may be used by software reading or interpreting the file during the loading process and afterwards.
File headers may be used by an operating system to quickly gather information about a file without loading it all into memory, but doing so uses more of a computer's resources than reading directly from the directory information. For instance, when a graphic file manager has to display the contents of a folder, it must read the headers of many files before it can display the appropriate icons, but these will be located in different places on the storage medium thus taking longer to access. A folder containing many files with complex metadata such as thumbnail information may require considerable time before it can be displayed.
If a header is binary hard-coded such that the header itself needs complex interpretation in order to be recognized, especially for metadata content protection's sake, there is a risk that the file format can be misinterpreted. It may even have been badly written at the source. This can result in corrupt metadata which, in extremely bad cases, might even render the file unreadable.
A more complex example of file headers are those used for wrapper (or container) file formats.
Magic number
One way to incorporate file type metadata, often associated with Unix and its derivatives, is to store a "magic number" inside the file itself. Originally, this term was used for a specific set of 2-byte identifiers at the beginnings of files, but since any binary sequence can be regarded as a number, any feature of a file format which uniquely distinguishes it can be used for identification. GIF images, for instance, always begin with the ASCII representation of either GIF87a or GIF89a, depending upon the standard to which they adhere. Many file types, especially plain-text files, are harder to spot by this method. HTML files, for example, might begin with the string <html> (which is not case sensitive), or an appropriate document type definition that starts with <!DOCTYPE html, or, for XHTML, the XML identifier, which begins with <?xml. The files can also begin with HTML comments, random text, or several empty lines, but still be usable HTML.
The magic number approach offers better guarantees that the format will be identified correctly, and can often determine more precise information about the file. Since reasonably reliable "magic number" tests can be fairly complex, and each file must effectively be tested against every possibility in the magic database, this approach is relatively inefficient, especially for displaying large lists of files (in contrast, file name and metadata-based methods need to check only one piece of data, and match it against a sorted index). Also, data must be read from the file itself, increasing latency as opposed to metadata stored in the directory. Where file types do not lend themselves to recognition in this way, the system must fall back to metadata. It is, however, the best way for a program to check if the file it has been told to process is of the correct format: while the file's name or metadata may be altered independently of its content, failing a well-designed magic number test is a pretty sure sign that the file is either corrupt or of the wrong type. On the other hand, a valid magic number does not guarantee that the file is not corrupt or is of a correct type.
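A sketch of content sniffing by magic number in Python; the signatures shown (PNG, GIF, PDF) are well-known published values, and the function simply falls back when nothing matches:

    MAGIC_SIGNATURES = [
        (b"\x89PNG\r\n\x1a\n", "PNG image"),
        (b"GIF87a", "GIF image (1987 standard)"),
        (b"GIF89a", "GIF image (1989 standard)"),
        (b"%PDF-", "PDF document"),
    ]

    def sniff(path: str) -> str:
        with open(path, "rb") as f:
            head = f.read(16)                    # a short prefix suffices for these checks
        for signature, description in MAGIC_SIGNATURES:
            if head.startswith(signature):
                return description
        return "unrecognized: fall back to extension or other metadata"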
So-called shebang lines in script files are a special case of magic numbers. Here, the magic number is human-readable text that identifies a specific command interpreter and options to be passed to the command interpreter.
Another operating system using magic numbers is AmigaOS, where magic numbers were called "Magic Cookies" and were adopted as a standard system to recognize executables in Hunk executable file format and also to let single programs, tools and utilities deal automatically with their saved data files, or any other kind of file types when saving and loading data. This system was then enhanced with the Amiga standard Datatype recognition system. Another method was the FourCC method, originating in OSType on Macintosh, later adapted by Interchange File Format (IFF) and derivatives.
External metadata
A final way of storing the format of a file is to explicitly store information about the format in the file system, rather than within the file itself.
This approach keeps the metadata separate from both the main data and the name, but is also less portable than either filename extensions or "magic numbers", since the format has to be converted from filesystem to filesystem. While this is also true to an extent with filename extensions— for instance, for compatibility with MS-DOS's three character limit— most forms of storage have a roughly equivalent definition of a file's data and name, but may have varying or no representation of further metadata.
Zip files and other archive files offer one way of handling this metadata. A utility program collects multiple files together along with metadata about each file and the folders/directories they came from, all within one new file (e.g. a zip file with extension .zip). The new file is also compressed and possibly encrypted, but now is transmissible as a single file across operating systems by FTP or as an e-mail attachment. At the destination, the single file has to be unzipped by a compatible utility to be useful.
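A short sketch using Python's standard zipfile module; the file names are hypothetical:

    import zipfile

    # Bundle files (with their names, paths, and timestamps) into one portable archive.
    with zipfile.ZipFile("bundle.zip", "w", compression=zipfile.ZIP_DEFLATED) as zf:
        zf.write("report.txt")                   # hypothetical input files
        zf.write("images/logo.gif")

    # At the destination, a compatible utility reads the stored metadata back out.
    with zipfile.ZipFile("bundle.zip") as zf:
        for info in zf.infolist():
            print(info.filename, info.file_size, info.date_time)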
Mac OS type-codes
The Mac OS' Hierarchical File System stores codes for creator and type as part of the directory entry for each file. These codes are referred to as OSTypes. They could be any 4-byte sequence, but were often selected so that the ASCII representation formed a sequence of meaningful characters, such as an abbreviation of the application's name or the developer's initials. For instance, a HyperCard "stack" file has a creator of WILD (from HyperCard's previous name, "WildCard") and a type of STAK. The BBEdit text editor has a creator code of R*ch, referring to its original programmer, Rich Siegel. The type code specifies the format of the file, while the creator code specifies the default program to open it with when double-clicked by the user. For example, the user could have several text files all with the type code of TEXT, but each opening in a different program, due to having differing creator codes. This feature was intended so that, for example, human-readable plain-text files could be opened in a general-purpose text editor, while programming or HTML code files would open in a specialized editor or IDE. However, this feature was often the source of user confusion, as which program would launch when the files were double-clicked was often unpredictable. RISC OS uses a similar system, consisting of a 12-bit number which can be looked up in a table of descriptions; for example, the hexadecimal type number assigned to PostScript files is aliased to a human-readable name.
macOS uniform type identifiers (UTIs)
A Uniform Type Identifier (UTI) is a method used in macOS for uniquely identifying "typed" classes of entities, such as file formats. It was developed by Apple as a replacement for OSType (type & creator codes).
The UTI is a Core Foundation string, which uses a reverse-DNS naming scheme. Some common and standard types use a domain called public (e.g. public.png for a Portable Network Graphics image), while other domains can be used for third-party types (e.g. com.adobe.pdf for Portable Document Format). UTIs can be defined within a hierarchical structure, known as a conformance hierarchy. Thus, public.png conforms to a supertype of public.image, which itself conforms to a supertype of public.data. A UTI can exist in multiple hierarchies, which provides great flexibility.
In addition to file formats, UTIs can also be used for other entities which can exist in macOS, including:
Pasteboard data
Folders (directories)
Translatable types (as handled by the Translation Manager)
Bundles
Frameworks
Streaming data
Aliases and symlinks
VSAM Catalog
In IBM OS/VS through z/OS, the VSAM catalog (prior to ICF catalogs) and the VSAM Volume Record in the VSAM Volume Data Set (VVDS) (with ICF catalogs) identify the type of VSAM dataset.
VTOC
In IBM OS/360 through z/OS, a format 1 or 7 Data Set Control Block (DSCB) in the Volume Table of Contents (VTOC) identifies the Dataset Organization (DSORG) of the dataset it describes.
OS/2 extended attributes
The HPFS, FAT12, and FAT16 (but not FAT32) filesystems allow the storage of "extended attributes" with files. These comprise an arbitrary set of triplets with a name, a coded type for the value, and a value, where the names are unique and values can be up to 64 KB long. There are standardized meanings for certain types and names (under OS/2). One such is that the ".TYPE" extended attribute is used to determine the file type. Its value comprises a list of one or more file types associated with the file, each of which is a string, such as "Plain Text" or "HTML document". Thus a file may have several types.
The NTFS filesystem also allows storage of OS/2 extended attributes, as one of the file forks, but this feature is merely present to support the OS/2 subsystem (not present in XP), so the Win32 subsystem treats this information as an opaque block of data and does not use it. Instead, it relies on other file forks to store meta-information in Win32-specific formats. OS/2 extended attributes can still be read and written by Win32 programs, but the data must be entirely parsed by applications.
POSIX extended attributes
On Unix and Unix-like systems, the ext2, ext3, ext4, ReiserFS version 3, XFS, JFS, FFS, and HFS+ filesystems allow the storage of extended attributes with files. These include an arbitrary list of "name=value" strings, where the names are unique and a value can be accessed through its related name.
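As a small, Linux-only sketch, Python's os module exposes these attributes directly; the attribute name used here is an arbitrary example in the unprivileged "user." namespace, and notes.txt is a hypothetical file:

    import os

    # Attach a "name=value" pair to a file, then read it back
    # (requires a Linux filesystem with extended-attribute support).
    os.setxattr("notes.txt", "user.mime_type", b"text/plain")
    print(os.getxattr("notes.txt", "user.mime_type"))    # b'text/plain'
    print(os.listxattr("notes.txt"))                     # ['user.mime_type']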
PRONOM unique identifiers (PUIDs)
The PRONOM Persistent Unique Identifier (PUID) is an extensible scheme of persistent, unique, and unambiguous identifiers for file formats, which has been developed by The National Archives of the UK as part of its PRONOM technical registry service. PUIDs can be expressed as Uniform Resource Identifiers using the info:pronom/ namespace. Although not yet widely used outside of the UK government and some digital preservation programs, the PUID scheme does provide greater granularity than most alternative schemes.
MIME types
MIME types are widely used in many Internet-related applications, and increasingly elsewhere, although their usage for on-disc type information is rare. These consist of a standardised system of identifiers (managed by IANA) consisting of a type and a sub-type, separated by a slash, for instance text/html or image/gif. These were originally intended as a way of identifying what type of file was attached to an e-mail, independent of the source and target operating systems. MIME types identify files on BeOS, AmigaOS 4.0 and MorphOS, as well as store unique application signatures for application launching. In AmigaOS and MorphOS, the MIME type system works in parallel with the Amiga-specific Datatype system.
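For illustration, Python's standard mimetypes module maps filenames to registered MIME types (by extension rather than by content):

    import mimetypes

    print(mimetypes.guess_type("index.html"))     # ('text/html', None)
    print(mimetypes.guess_type("photo.gif"))      # ('image/gif', None)
    print(mimetypes.guess_type("archive.tar.gz")) # ('application/x-tar', 'gzip')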
There are problems with the MIME types though; several organizations and people have created their own MIME types without registering them properly with IANA, which makes the use of this standard awkward in some cases.
File format identifiers (FFIDs)
File format identifiers (FFIDs) are another, not widely used, way to identify file formats according to their origin and their file category. They were created for the Description Explorer suite of software. An FFID is composed of several groups of digits: the first part indicates the organization that originated or maintains the format (this number represents a value in a company/standards-organization database), the following two digits categorize the type of file in hexadecimal, and the final part is composed of the usual filename extension of the file or the international standard number of the file, padded left with zeros. For example, in the FFID for the PNG file specification, the category digits indicate an image file, the final part gives the standard number, and the first part indicates the International Organization for Standardization (ISO) as the origin.
File content based format identification
Another, less popular way to identify a file format is to examine the file contents for distinguishable patterns among file types. The contents of a file are a sequence of bytes, and a byte has 256 possible values (0–255). Thus, counting the occurrences of byte values, often referred to as the byte frequency distribution, gives distinguishable patterns that can identify file types. Many content-based file type identification schemes use a byte frequency distribution to build representative models for each file type and apply statistical and data mining techniques to identify file types.
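A minimal sketch of this idea is shown below: it builds the 256-bin byte frequency histogram for a file and a simple distance function for comparing that fingerprint against models built from known file types. The sampling size and the Manhattan distance are illustrative assumptions, not features of any particular identification scheme.

```python
from collections import Counter

def byte_frequency_distribution(path, sample_size=1 << 20):
    """Return the normalized frequency of each byte value (0-255) in a file.

    The 256-bin histogram acts as a content-based fingerprint that can be
    compared against representative models built from known file types.
    """
    with open(path, "rb") as f:
        data = f.read(sample_size)   # sampling only the head is a common shortcut
    counts = Counter(data)           # iterating bytes yields integers 0-255
    total = len(data) or 1
    return [counts.get(b, 0) / total for b in range(256)]

def distance(a, b):
    """Manhattan distance between two fingerprints (smaller = more similar)."""
    return sum(abs(x - y) for x, y in zip(a, b))
```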
File structure
There are several ways to structure data in a file; the most common ones are described below.
Unstructured formats (raw memory dumps)
Earlier file formats used raw data formats that consisted of directly dumping the memory images of one or more structures into the file.
This has several drawbacks. Unless the memory images also have reserved spaces for future extensions, extending and improving this type of structured file is very difficult. It also creates files that might be specific to one platform or programming language (for example a structure containing a Pascal string is not recognized as such in C). On the other hand, developing tools for reading and writing these types of files is very simple.
The limitations of the unstructured formats led to the development of other types of file formats that could be easily extended and be backward compatible at the same time.
Chunk-based formats
In this kind of file structure, each piece of data is embedded in a container that somehow identifies the data. The container's scope can be identified by start- and end-markers of some kind, by an explicit length field somewhere, or by fixed requirements of the file format's definition.
Throughout the 1970s, many programs used formats of this general kind, for example word processors such as troff, Script, and Scribe, and database export formats such as CSV. Electronic Arts and Commodore-Amiga also adopted this approach in 1985 with IFF (Interchange File Format).
A container is sometimes called a "chunk", although "chunk" may also imply that each piece is small, and/or that chunks do not contain other chunks; many formats do not impose those requirements.
The information that identifies a particular "chunk" may be called many different things, with common terms including "field name", "identifier", "label", and "tag". The identifiers are often human-readable, and classify parts of the data: for example, as a "surname", "address", "rectangle", "font name", etc. These are not the same thing as identifiers in the sense of a database key or serial number (although an identifier may also serve as such a key).
With this type of file structure, tools that do not recognize a particular chunk identifier simply skip it. Depending on the actual meaning of the skipped data, this may or may not be useful (CSS explicitly defines such behavior).
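To make this concrete, here is a minimal sketch of a chunk reader. It assumes a hypothetical IFF/PNG-like layout in which every chunk starts with a 4-byte ASCII identifier followed by a 4-byte big-endian payload length; the identifiers in KNOWN are illustrative, and real formats add details (padding, checksums, nesting) that are omitted here.

```python
import io
import struct

KNOWN = {b"HEAD", b"DATA"}   # hypothetical identifiers this reader understands

def read_chunks(stream):
    """Yield (identifier, payload) for known chunks; silently skip the rest."""
    while True:
        header = stream.read(8)
        if len(header) < 8:                      # end of stream
            return
        ident, length = struct.unpack(">4sI", header)
        payload = stream.read(length)
        if ident in KNOWN:
            yield ident, payload
        # Unknown chunks are skipped: the explicit length field lets the
        # reader step over data it does not understand.

# Example: a stream with one unknown chunk sandwiched between two known ones.
blob = (struct.pack(">4sI", b"HEAD", 4) + b"\x00\x01\x02\x03"
        + struct.pack(">4sI", b"JUNK", 2) + b"\xff\xff"
        + struct.pack(">4sI", b"DATA", 3) + b"abc")
for ident, payload in read_chunks(io.BytesIO(blob)):
    print(ident, payload)
```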
This concept has been used again and again by RIFF (Microsoft-IBM equivalent of IFF), PNG, JPEG storage, DER (Distinguished Encoding Rules) encoded streams and files (which were originally described in CCITT X.409:1984 and therefore predate IFF), and Structured Data Exchange Format (SDXF).
Indeed, any data format must identify the significance of its component parts, and embedded boundary-markers are an obvious way to do so:
MIME headers do this with a colon-separated label at the start of each logical line. MIME headers cannot contain other MIME headers, though the data content of some headers has sub-parts that can be extracted by other conventions.
CSV and similar files often do this using a header record with field names and using commas to mark the field boundaries. Like MIME, CSV has no provision for structures with more than one level (a short sketch contrasting CSV with JSON appears after this list).
XML and its kin can be loosely considered a kind of chunk-based format, since data elements are identified by markup that is akin to chunk identifiers. However, it has formal advantages such as schemas and validation, as well as the ability to represent more complex structures such as trees, DAGs, and charts. If XML is considered a "chunk" format, then SGML and its predecessor IBM GML are among the earliest examples of such formats.
JSON is similar to XML but without schemas, cross-references, or a definition for the meaning of repeated field names, and is often convenient for programmers.
YAML is similar to JSON, but uses indentation to separate data chunks and aims to be more human-readable than JSON or XML.
Protocol Buffers are in turn similar to JSON, notably replacing boundary-markers in the data with field numbers, which are mapped to/from names by some external mechanism.
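As a concrete illustration of these marker styles, the following sketch serializes the same made-up record first as CSV, with a header record and commas marking field boundaries, and then as JSON, where every value carries its field name and nesting is possible.

```python
import csv
import io
import json

# The same record expressed in two of the marker styles discussed above.
record = {"surname": "Doe", "address": "1 Main St", "font_name": "Helvetica"}

# CSV: a single header record names the fields, commas mark the boundaries,
# and there is no provision for nesting.
buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=record.keys())
writer.writeheader()
writer.writerow(record)
print(buf.getvalue())

# JSON: every value is labeled with its field name, and values may themselves
# be objects or lists, so deeper structures are representable.
print(json.dumps({"person": record, "tags": ["example", "nested"]}, indent=2))
```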
Directory-based formats
This is another extensible format that closely resembles a file system (OLE documents are actual filesystems), where the file is composed of 'directory entries' that record the location of the data within the file itself, along with its signatures (and in certain cases its type). Good examples of this type of file structure are disk images, executables, OLE documents, TIFF, and libraries.
Some file formats like ODT and DOCX, being PKZIP-based, are both chunked and carry a directory.
The structure of a directory-based file format lends itself to modification more easily than unstructured or chunk-based formats. The nature of this type of format also allows users to carefully construct files that cause reader software to do things the authors of the format never intended, as in the zip bomb. Directory-based file formats also use values that point at other areas in the file; if some later value points back at data that was read earlier, it can result in an infinite loop for any reader software that assumes the input file is valid and blindly follows the loop.
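A defensive reader can avoid such loops by remembering which offsets it has already visited. The sketch below assumes a hypothetical layout in which each directory entry is 8 bytes, a payload offset followed by the offset of the next entry; the layout is purely illustrative, but the visited-set idea applies to any offset-based format.

```python
import struct

def walk_entries(data, start):
    """Follow a chain of 8-byte directory entries, refusing to revisit offsets.

    Each entry (illustrative layout) is a 4-byte payload offset followed by a
    4-byte offset of the next entry, both big-endian; an offset of 0 ends the
    chain. Revisiting an offset or pointing out of range raises an error
    instead of looping forever.
    """
    seen = set()
    offset = start
    while offset != 0:
        if offset in seen or offset + 8 > len(data):
            raise ValueError("malformed file: offset loop or out-of-range entry")
        seen.add(offset)
        payload_off, next_off = struct.unpack_from(">II", data, offset)
        yield payload_off
        offset = next_off
```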
| Technology | File formats | null |
18933632 | https://en.wikipedia.org/wiki/Metadata | Metadata | Metadata (or metainformation) is "data that provides information about other data", but not the content of the data itself, such as the text of a message or the image itself. There are many distinct types of metadata, including:
Descriptive metadata – the descriptive information about a resource. It is used for discovery and identification. It includes elements such as title, abstract, author, and keywords.
Structural metadata – metadata about containers of data; it indicates how compound objects are put together, for example, how pages are ordered to form chapters. It describes the types, versions, relationships, and other characteristics of digital materials.
Administrative metadata – the information to help manage a resource, such as resource type, permissions, and when and how it was created.
Reference metadata – the information about the contents and quality of statistical data.
Statistical metadata – also called process data, may describe processes that collect, process, or produce statistical data.
Legal metadata – provides information about the creator, copyright holder, and public licensing, if provided.
Metadata is not strictly bound to one of these categories, as it can describe a piece of data in many other ways.
History
Metadata has various purposes. It can help users find relevant information and discover resources. It can also help organize electronic resources, provide digital identification, and archive and preserve resources. Metadata allows users to access resources by "allowing resources to be found by relevant criteria, identifying resources, bringing similar resources together, distinguishing dissimilar resources, and giving location information". Metadata of telecommunication activities including Internet traffic is very widely collected by various national governmental organizations. This data is used for the purposes of traffic analysis and can be used for mass surveillance.
Metadata was traditionally used in the card catalogs of libraries until the 1980s when libraries converted their catalog data to digital databases. In the 2000s, as data and information were increasingly stored digitally, this digital data was described using metadata standards.
The first description of "meta data" for computer systems is purportedly noted by MIT's Center for International Studies experts David Griffel and Stuart McIntosh in 1967: "In summary then, we have statements in an object language about subject descriptions of data and token codes for the data. We also have statements in a meta language describing the data relationships and transformations, and ought/is relations between norm and data."
Unique metadata standards exist for different disciplines (e.g., museum collections, digital audio files, websites, etc.). Describing the contents and context of data or data files increases its usefulness. For example, a web page may include metadata specifying what software language the page is written in (e.g., HTML), what tools were used to create it, what subjects the page is about, and where to find more information about the subject. This metadata can automatically improve the reader's experience and make it easier for users to find the web page online. A CD may include metadata providing information about the musicians, singers, and songwriters whose work appears on the disc.
In many countries, government organizations routinely store metadata about emails, telephone calls, web pages, video traffic, IP connections, and cell phone locations.
Definition
Metadata means "data about data". Metadata is defined as the data providing information about one or more aspects of the data; it is used to summarize basic information about data that can make tracking and working with specific data easier. Some examples include:
Means of creation of the data
Purpose of the data
Time and date of creation
Creator or author of the data
Location on a computer network where the data was created
Standards used
Data quality
Source of the data
Process used to create the data
For example, a digital image may include metadata that describes the size of the image, its color depth, resolution, when it was created, the shutter speed, and other data. A text document's metadata may contain information about how long the document is, who the author is, when the document was written, and a short summary of the document. Metadata within web pages can also contain descriptions of page content, as well as key words linked to the content. These links are often called "Metatags", which were used as the primary factor in determining order for a web search until the late 1990s. The reliance on metatags in web searches was decreased in the late 1990s because of "keyword stuffing", whereby metatags were being largely misused to trick search engines into thinking some websites had more relevance in the search than they really did.
Metadata can be stored and managed in a database, often called a metadata registry or metadata repository. However, without context and a point of reference, it might be impossible to identify metadata just by looking at it. For example: by itself, a database containing several numbers, all 13 digits long, could be the result of calculations or a list of numbers to plug into an equation; without any other context, the numbers themselves can be perceived as the data. But if given the context that this database is a log of a book collection, those 13-digit numbers may now be identified as ISBNs, information that refers to the book but is not itself the information within the book. The term "metadata" was coined in 1968 by Philip Bagley, in his book "Extension of Programming Language Concepts" where it is clear that he uses the term in the ISO 11179 "traditional" sense, which is "structural metadata" i.e. "data about the containers of data"; rather than the alternative sense "content about individual instances of data content" or metacontent, the type of data usually found in library catalogs. Since then the fields of information management, information science, information technology, librarianship, and GIS have widely adopted the term. In these fields, the word metadata is defined as "data about data". While this is the generally accepted definition, various disciplines have adopted their own more specific explanations and uses of the term.
Slate reported in 2013 that the United States government's interpretation of "metadata" could be broad, and might include message content such as the subject lines of emails.
Types
While the metadata application is manifold, covering a large variety of fields, there are specialized and well-accepted models to specify types of metadata. Bretherton & Singley (1994) distinguish between two distinct classes: structural/control metadata and guide metadata. Structural metadata describes the structure of database objects such as tables, columns, keys and indexes. Guide metadata helps humans find specific items and is usually expressed as a set of keywords in a natural language. According to Ralph Kimball, metadata can be divided into three categories: technical metadata (or internal metadata), business metadata (or external metadata), and process metadata.
NISO distinguishes three types of metadata: descriptive, structural, and administrative. Descriptive metadata is typically used for discovery and identification, as information to search and locate an object, such as title, authors, subjects, keywords, and publisher. Structural metadata describes how the components of an object are organized. An example of structural metadata would be how pages are ordered to form chapters of a book. Finally, administrative metadata gives information to help manage the source. Administrative metadata refers to the technical information, such as file type, or when and how the file was created. Two sub-types of administrative metadata are rights management metadata and preservation metadata. Rights management metadata explains intellectual property rights, while preservation metadata contains information to preserve and save a resource.
Statistical data repositories have their own requirements for metadata in order to describe not only the source and quality of the data but also what statistical processes were used to create the data, which is of particular importance to the statistical community in order to both validate and improve the process of statistical data production.
An additional type of metadata beginning to be more developed is accessibility metadata. Accessibility metadata is not a new concept to libraries; however, advances in universal design have raised its profile. Projects like Cloud4All and GPII identified the lack of common terminologies and models to describe the needs and preferences of users and information that fits those needs as a major gap in providing universal access solutions. Those types of information are accessibility metadata. Schema.org has incorporated several accessibility properties based on IMS Global Access for All Information Model Data Element Specification. The Wiki page WebSchemas/Accessibility lists several properties and their values. While the efforts to describe and standardize the varied accessibility needs of information seekers are beginning to become more robust, their adoption into established metadata schemas has not been as developed. For example, while Dublin Core (DC)'s "audience" and MARC 21's "reading level" could be used to identify resources suitable for users with dyslexia and DC's "format" could be used to identify resources available in braille, audio, or large print formats, there is more work to be done.
Structures
Metadata (metacontent) or, more correctly, the vocabularies used to assemble metadata (metacontent) statements, is typically structured according to a standardized concept using a well-defined metadata scheme, including metadata standards and metadata models. Tools such as controlled vocabularies, taxonomies, thesauri, data dictionaries, and metadata registries can be used to apply further standardization to the metadata. Structural metadata commonality is also of paramount importance in data model development and in database design.
Syntax
Metadata (metacontent) syntax refers to the rules created to structure the fields or elements of metadata (metacontent). A single metadata scheme may be expressed in a number of different markup or programming languages, each of which requires a different syntax. For example, Dublin Core may be expressed in plain text, HTML, XML, and RDF.
A common example of (guide) metacontent is the bibliographic classification, the subject, the Dewey Decimal class number. There is always an implied statement in any "classification" of some object. To classify an object as, for example, Dewey class number 514 (Topology) (i.e. books having the number 514 on their spine) the implied statement is: "<book><subject heading><514>". This is a subject-predicate-object triple, or more importantly, a class-attribute-value triple. The first 2 elements of the triple (class, attribute) are pieces of some structural metadata having a defined semantic. The third element is a value, preferably from some controlled vocabulary, some reference (master) data. The combination of the metadata and master data elements results in a statement which is a metacontent statement i.e. "metacontent = metadata + master data". All of these elements can be thought of as "vocabulary". Both metadata and master data are vocabularies that can be assembled into metacontent statements. There are many sources of these vocabularies, both meta and master data: UML, EDIFACT, XSD, Dewey/UDC/LoC, SKOS, ISO-25964, Pantone, Linnaean Binomial Nomenclature, etc. Using controlled vocabularies for the components of metacontent statements, whether for indexing or finding, is endorsed by ISO 25964: "If both the indexer and the searcher are guided to choose the same term for the same concept, then relevant documents will be retrieved." This is particularly relevant when considering search engines of the internet, such as Google. The process indexes pages and then matches text strings using its complex algorithm; there is no intelligence or "inferencing" occurring, just the illusion thereof.
Hierarchical, linear, and planar schemata
Metadata schemata can be hierarchical in nature where relationships exist between metadata elements and elements are nested so that parent-child relationships exist between the elements.
An example of a hierarchical metadata schema is the IEEE LOM schema, in which metadata elements may belong to a parent metadata element.
Metadata schemata can also be one-dimensional, or linear, where each element is completely discrete from other elements and classified according to one dimension only.
An example of a linear metadata schema is the Dublin Core schema, which is one-dimensional.
Metadata schemata are often 2 dimensional, or planar, where each element is completely discrete from other elements but classified according to 2 orthogonal dimensions.
Granularity
The degree to which the data or metadata is structured is referred to as "granularity". "Granularity" refers to how much detail is provided. Metadata with a high granularity allows for deeper, more detailed, and more structured information and enables a greater level of technical manipulation. A lower level of granularity means that metadata can be created for considerably lower costs but will not provide as detailed information. The major impact of granularity is not only on creation and capture, but even more on maintenance costs. As soon as the metadata structures become outdated, so too does access to the referred data. Hence granularity must take into account the effort to create the metadata as well as the effort to maintain it.
Hypermapping
In all cases where the metadata schemata exceed the planar depiction, some type of hypermapping is required to enable display and view of metadata according to chosen aspect and to serve special views. Hypermapping frequently applies to layering of geographical and geological information overlays.
Standards
International standards apply to metadata. Much work is being accomplished in the national and international standards communities, especially ANSI (American National Standards Institute) and ISO (International Organization for Standardization) to reach a consensus on standardizing metadata and registries. The core metadata registry standard is ISO/IEC 11179 Metadata Registries (MDR), the framework for the standard is described in ISO/IEC 11179-1:2004. A new edition of Part 1 is in its final stage for publication in 2015 or early 2016. It has been revised to align with the current edition of Part 3, ISO/IEC 11179-3:2013 which extends the MDR to support the registration of Concept Systems.
(see ISO/IEC 11179). This standard specifies a schema for recording both the meaning and technical structure of the data for unambiguous usage by humans and computers. ISO/IEC 11179 standard refers to metadata as information objects about data, or "data about data". In ISO/IEC 11179 Part-3, the information objects are data about Data Elements, Value Domains, and other reusable semantic and representational information objects that describe the meaning and technical details of a data item. This standard also prescribes the details for a metadata registry, and for registering and administering the information objects within a Metadata Registry. ISO/IEC 11179 Part 3 also has provisions for describing compound structures that are derivations of other data elements, for example through calculations, collections of one or more data elements, or other forms of derived data. While this standard describes itself originally as a "data element" registry, its purpose is to support describing and registering metadata content independently of any particular application, lending the descriptions to being discovered and reused by humans or computers in developing new applications, databases, or for analysis of data collected in accordance with the registered metadata content. This standard has become the general basis for other kinds of metadata registries, reusing and extending the registration and administration portion of the standard.
The Geospatial community has a tradition of specialized geospatial metadata standards, particularly building on traditions of map- and image-libraries and catalogs. Formal metadata is usually essential for geospatial data, as common text-processing approaches are not applicable.
The Dublin Core metadata terms are a set of vocabulary terms that can be used to describe resources for the purposes of discovery. The original set of 15 classic metadata terms, known as the Dublin Core Metadata Element Set are endorsed in the following standards documents:
IETF RFC 5013
ISO Standard 15836-2009
NISO Standard Z39.85.
The W3C Data Catalog Vocabulary (DCAT) is an RDF vocabulary that supplements Dublin Core with classes for Dataset, Data Service, Catalog, and Catalog Record. DCAT also uses elements from FOAF, PROV-O, and OWL-Time. DCAT provides an RDF model to support the typical structure of a catalog that contains records, each describing a dataset or service.
Although not a standard, Microformat (also mentioned in the section metadata on the internet below) is a web-based approach to semantic markup which seeks to re-use existing HTML/XHTML tags to convey metadata. Microformat follows XHTML and HTML standards but is not a standard in itself. One advocate of microformats, Tantek Çelik, characterized a problem with alternative approaches:
Use
File metadata
Most common types of computer files can embed metadata, including documents (e.g. Microsoft Office files, OpenDocument files, PDF), images (e.g. JPEG, PNG), video files (e.g. AVI, MP4), and audio files (e.g. WAV, MP3).
Metadata may be added to files by users, but some metadata is often automatically added to files by authoring applications or by devices used to produce the files, without user intervention.
While metadata in files is useful for finding them, it can be a privacy hazard when the files are shared. Using metadata removal tools to clean files before sharing them can mitigate this risk.
Photographs
Metadata may be written into a digital photo file that will identify who owns it, copyright and contact information, what brand or model of camera created the file, along with exposure information (shutter speed, f-stop, etc.) and descriptive information, such as keywords about the photo, making the file or image searchable on a computer and/or the Internet. Some metadata is created by the camera, such as color space, color channels, exposure time, and aperture (EXIF), while some is input by the photographer and/or software after downloading to a computer. Most digital cameras write metadata about the model number, shutter speed, etc., and some allow it to be edited; this functionality has been available on most Nikon DSLRs since the Nikon D3, on most new Canon cameras since the Canon EOS 7D, and on most Pentax DSLRs since the Pentax K-3. Metadata can be used to make organizing in post-production easier with the use of key-wording. Filters can be used to analyze a specific set of photographs and create selections on criteria like rating or capture time. On devices with geolocation capabilities like GPS (smartphones in particular), the location the photo was taken from may also be included.
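As a brief illustration, the sketch below reads the camera-written EXIF block from an image file. It assumes the Pillow imaging library is installed; "photo.jpg" is a placeholder path, and cameras vary in which tags they actually write.

```python
from PIL import Image, ExifTags

# A minimal sketch, assuming Pillow is installed and the image contains EXIF
# data written by a camera; "photo.jpg" is a placeholder path.
img = Image.open("photo.jpg")
exif = img.getexif()

for tag_id, value in exif.items():
    # Map numeric EXIF tag IDs to human-readable names where known.
    name = ExifTags.TAGS.get(tag_id, tag_id)
    print(f"{name}: {value}")
```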
Photographic Metadata Standards are governed by organizations that develop the following standards. They include, but are not limited to:
IPTC Information Interchange Model IIM (International Press Telecommunications Council)
IPTC Core Schema for XMP
XMP – Extensible Metadata Platform (an ISO standard)
Exif – Exchangeable image file format, Maintained by CIPA (Camera & Imaging Products Association) and published by JEITA (Japan Electronics and Information Technology Industries Association)
Dublin Core (Dublin Core Metadata Initiative – DCMI)
PLUS (Picture Licensing Universal System)
VRA Core (Visual Resource Association)
JPEG or JPG – Joint Photographic Experts Group
Video
Metadata is particularly useful in video, where information about its contents (such as transcripts of conversations and text descriptions of its scenes) is not directly understandable by a computer, but where an efficient search of the content is desirable. This is particularly useful in video applications such as Automatic Number Plate Recognition and Vehicle Recognition Identification software, wherein license plate data is saved and used to create reports and alerts. There are two sources from which video metadata is derived: (1) operationally gathered metadata, that is, information about the content produced, such as the type of equipment, software, date, and location; (2) human-authored metadata, to improve search engine visibility, discoverability, and audience engagement, and to provide advertising opportunities to video publishers. Avid's MetaSync and Adobe's Bridge are examples of professional video editing software with access to metadata.
Telecommunications
Information on the times, origins and destinations of phone calls, electronic messages, instant messages, and other modes of telecommunication, as opposed to message content, is another form of metadata. Bulk collection of this call detail record metadata by intelligence agencies has proven controversial after disclosures by Edward Snowden of the fact that certain Intelligence agencies such as the NSA had been (and perhaps still are) keeping online metadata on millions of internet users for up to a year, regardless of whether or not they [ever] were persons of interest to the agency.
Geospatial metadata
Geospatial metadata relates to Geographic Information Systems (GIS) files, maps, images, and other data that is location-based. Metadata is used in GIS to document the characteristics and attributes of geographic data, such as database files and data that is developed within a GIS. It includes details like who developed the data, when it was collected, how it was processed, and what formats it is available in, thereby delivering the context needed to use the data effectively.
Creation
Metadata can be created either by automated information processing or by manual work. Elementary metadata captured by computers can include information about when an object was created, who created it, when it was last updated, file size, and file extension. In this context an object refers to any of the following:
A physical item such as a book, CD, DVD, a paper map, chair, table, flower pot, etc.
An electronic file such as a digital image, digital photo, electronic document, program file, database table, etc.
A metadata engine collects, stores and analyzes information about data and metadata in use within a domain.
Data virtualization
Data virtualization emerged in the 2000s as the new software technology to complete the virtualization "stack" in the enterprise. Metadata is used in data virtualization servers, which are enterprise infrastructure components alongside database and application servers. Metadata in these servers is saved in a persistent repository and describes business objects in various enterprise systems and applications. Structural metadata commonality is also important to support data virtualization.
Statistics and census services
Standardization and harmonization work has brought advantages to industry efforts to build metadata systems in the statistical community. Several metadata guidelines and standards such as the European Statistics Code of Practice and ISO 17369:2013 (Statistical Data and Metadata Exchange or SDMX) provide key principles for how businesses, government bodies, and other entities should manage statistical data and metadata. Entities such as Eurostat, European System of Central Banks, and the U.S. Environmental Protection Agency have implemented these and other such standards and guidelines with the goal of improving "efficiency when managing statistical business processes".
Library and information science
Metadata has been used in various ways as a means of cataloging items in libraries in both digital and analog formats. Such data helps classify, aggregate, identify, and locate a particular book, DVD, magazine, or any object a library might hold in its collection. Until the 1980s, many library catalogs used 3x5 inch cards in file drawers to display a book's title, author, subject matter, and an abbreviated alpha-numeric string (call number) which indicated the physical location of the book within the library's shelves. The Dewey Decimal System employed by libraries for the classification of library materials by subject is an early example of metadata usage. The early paper catalog had information regarding whichever item was described on said card: title, author, subject, and a number as to where to find said item. Beginning in the 1980s and 1990s, many libraries replaced these paper file cards with computer databases. These computer databases make it much easier and faster for users to do keyword searches. Another form of older metadata collection is the use by the US Census Bureau of what is known as the "Long Form". The Long Form asks questions that are used to create demographic data to find patterns of distribution. Libraries employ metadata in library catalogues, most commonly as part of an Integrated Library Management System. Metadata is obtained by cataloging resources such as books, periodicals, DVDs, web pages or digital images. This data is stored in the integrated library management system, ILMS, using the MARC metadata standard. The purpose is to direct patrons to the physical or electronic location of items or areas they seek as well as to provide a description of the item/s in question.
More recent and specialized instances of library metadata include the establishment of digital libraries including e-print repositories and digital image libraries. While often based on library principles, the focus on non-librarian use, especially in providing metadata, means they do not follow traditional or common cataloging approaches. Given the custom nature of included materials, metadata fields are often specially created e.g. taxonomic classification fields, location fields, keywords, or copyright statement. Standard file information such as file size and format are usually automatically included. Library operation has for decades been a key topic in efforts toward international standardization. Standards for metadata in digital libraries include Dublin Core, METS, MODS, DDI, DOI, URN, PREMIS schema, EML, and OAI-PMH. Leading libraries in the world give hints on their metadata standards strategies. The use and creation of metadata in library and information science also include scientific publications:
Science
Metadata for scientific publications is often created by journal publishers and citation databases such as PubMed and Web of Science. The data contained within manuscripts or accompanying them as supplementary material is less often subject to metadata creation, though they may be submitted to e.g. biomedical databases after publication. The original authors and database curators then become responsible for metadata creation, with the assistance of automated processes. Comprehensive metadata for all experimental data is the foundation of the FAIR Guiding Principles, or the standards for ensuring research data are findable, accessible, interoperable, and reusable.
Such metadata can then be utilized, complemented, and made accessible in useful ways. OpenAlex is a free online index of over 200 million scientific documents that integrates and provides metadata such as sources, citations, author information, scientific fields, and research topics. Its API and open source website can be used for metascience, scientometrics, and novel tools that query this semantic web of papers. Another project under development, Scholia, uses the metadata of scientific publications for various visualizations and aggregation features such as providing a simple user interface summarizing literature about a specific feature of the SARS-CoV-2 virus using Wikidata's "main subject" property.
In research labor, transparent metadata about authors' contributions to works has been proposed, e.g. the role played in the production of the paper, the level of contribution, and the responsibilities.
Moreover, various metadata about scientific outputs can be created or complemented – for instance, some organizations attempt to track and link citations of papers as 'Supporting', 'Mentioning' or 'Contrasting' the study. Other examples include developments of alternative metrics – which, beyond providing help for assessment and findability, also aggregate many of the public discussions about a scientific paper on social media such as Reddit, citations on Wikipedia, and reports about the study in the news media – and a call for showing whether or not the original findings are confirmed or could get reproduced.
Museums
Metadata in a museum context is the information that trained cultural documentation specialists, such as archivists, librarians, museum registrars and curators, create to index, structure, describe, identify, or otherwise specify works of art, architecture, cultural objects and their images. Descriptive metadata is most commonly used in museum contexts for object identification and resource recovery purposes.
Usage
Metadata is developed and applied within collecting institutions and museums in order to:
Facilitate resource discovery and execute search queries.
Create digital archives that store information relating to various aspects of museum collections and cultural objects, and serve archival and managerial purposes.
Provide public audiences access to cultural objects through publishing digital content online.
Standards
Many museums and cultural heritage centers recognize that given the diversity of artworks and cultural objects, no single model or standard suffices to describe and catalog cultural works. For example, a sculpted Indigenous artifact could be classified as an artwork, an archaeological artifact, or an Indigenous heritage item. The early stages of standardization in archiving, description and cataloging within the museum community began in the late 1990s with the development of standards such as Categories for the Description of Works of Art (CDWA), Spectrum, CIDOC Conceptual Reference Model (CRM), Cataloging Cultural Objects (CCO) and the CDWA Lite XML schema. These standards use HTML and XML markup languages for machine processing, publication and implementation. The Anglo-American Cataloguing Rules (AACR), originally developed for characterizing books, have also been applied to cultural objects, works of art and architecture. Standards, such as the CCO, are integrated within a Museum's Collections Management System (CMS), a database through which museums are able to manage their collections, acquisitions, loans and conservation. Scholars and professionals in the field note that the "quickly evolving landscape of standards and technologies" creates challenges for cultural documentarians, specifically non-technically trained professionals. Most collecting institutions and museums use a relational database to categorize cultural works and their images. Relational databases and metadata work to document and describe the complex relationships amongst cultural objects and multi-faceted works of art, as well as between objects and places, people, and artistic movements. Relational database structures are also beneficial within collecting institutions and museums because they allow for archivists to make a clear distinction between cultural objects and their images; an unclear distinction could lead to confusing and inaccurate searches.
Cultural objects
An object's materiality, function, and purpose, as well as the size (e.g., measurements, such as height, width, weight), storage requirements (e.g., climate-controlled environment), and focus of the museum and collection, influence the descriptive depth of the data attributed to the object by cultural documentarians. The established institutional cataloging practices, goals, and expertise of cultural documentarians and database structure also influence the information ascribed to cultural objects and the ways in which cultural objects are categorized. Additionally, museums often employ standardized commercial collection management software that prescribes and limits the ways in which archivists can describe artworks and cultural objects. As well, collecting institutions and museums use Controlled Vocabularies to describe cultural objects and artworks in their collections. Getty Vocabularies and the Library of Congress Controlled Vocabularies are reputable within the museum community and are recommended by CCO standards. Museums are encouraged to use controlled vocabularies that are contextual and relevant to their collections and enhance the functionality of their digital information systems. Controlled Vocabularies are beneficial within databases because they provide a high level of consistency, improving resource retrieval. Metadata structures, including controlled vocabularies, reflect the ontologies of the systems from which they were created. Often the processes through which cultural objects are described and categorized through metadata in museums do not reflect the perspectives of the maker communities.
Online content
Metadata has been instrumental in the creation of digital information systems and archives within museums and has made it easier for museums to publish digital content online. This has enabled audiences who might not have had access to cultural objects due to geographic or economic barriers to have access to them. In the 2000s, as more museums have adopted archival standards and created intricate databases, discussions about Linked Data between museum databases have come up in the museum, archival, and library science communities. Collection Management Systems (CMS) and Digital Asset Management tools can be local or shared systems. Digital Humanities scholars note many benefits of interoperability between museum databases and collections, while also acknowledging the difficulties of achieving such interoperability.
Law
United States
Problems involving metadata in litigation in the United States are becoming widespread. Courts have looked at various questions involving metadata, including the discoverability of metadata by parties. The Federal Rules of Civil Procedure have specific rules for discovery of electronically stored information, and subsequent case law applying those rules has elucidated the litigant's duty to produce metadata when litigating in federal court. In October 2009, the Arizona Supreme Court ruled that metadata records are public record. Document metadata have proven particularly important in legal environments in which litigation has requested metadata, which can include sensitive information detrimental to a certain party in court. Using metadata removal tools to "clean" or redact documents can mitigate the risks of unwittingly sending sensitive data. This process partially (see data remanence) protects law firms from potentially damaging leaking of sensitive data through electronic discovery.
Opinion polls have shown that 45% of Americans are "not at all confident" in the ability of social media sites to ensure their personal data is secure and 40% say that social media sites should not be able to store any information on individuals. 76% of Americans say that they are not confident that the information advertising agencies collect on them is secure and 50% say that online advertising agencies should not be allowed to record any of their information at all.
Australia
In Australia, the need to strengthen national security has resulted in the introduction of a new metadata storage law. This new law means that both security and policing agencies will be allowed to access up to 2 years of an individual's metadata, with the aim of making it easier to stop any terrorist attacks and serious crimes from happening.
Legislation
Legislative metadata has been the subject of some discussion in law.gov forums such as workshops held by the Legal Information Institute at the Cornell Law School on 22 and 23 March 2010. The documentation for these forums is titled, "Suggested metadata practices for legislation and regulations".
A handful of key points have been outlined by these discussions, section headings of which are listed as follows:
General Considerations
Document Structure
Document Contents
Metadata (elements of)
Layering
Point-in-time versus post-hoc
Healthcare
Australian medical research pioneered the definition of metadata for applications in health care. That approach offers the first recognized attempt to adhere to international standards in medical sciences instead of defining a proprietary standard under the World Health Organization (WHO) umbrella. The medical community, however, has not yet accepted the need to follow metadata standards, despite research that supported these standards.
Biomedical researches
Research studies in the fields of biomedicine and molecular biology frequently yield large quantities of data, including results of genome or meta-genome sequencing, proteomics data, and even notes or plans created during the course of research itself. Each data type involves its own variety of metadata and the processes necessary to produce these metadata. General metadata standards, such as ISA-Tab, allow researchers to create and exchange experimental metadata in consistent formats. Specific experimental approaches frequently have their own metadata standards and systems: metadata standards for mass spectrometry include mzML and SPLASH, while XML-based standards such as PDBML and SRA XML serve as standards for macromolecular structure and sequencing data, respectively.
The products of biomedical research are generally realized as peer-reviewed manuscripts, and these publications are yet another source of data.
Data warehousing
A data warehouse (DW) is a repository of an organization's electronically stored data. Data warehouses are designed to manage and store the data. Data warehouses differ from business intelligence (BI) systems because BI systems are designed to use data to create reports and analyze the information, to provide strategic guidance to management. Metadata is an important tool in how data is stored in data warehouses. The purpose of a data warehouse is to house standardized, structured, consistent, integrated, correct, "cleaned" and timely data, extracted from various operational systems in an organization. The extracted data are integrated in the data warehouse environment to provide an enterprise-wide perspective. Data are structured in a way to serve the reporting and analytic requirements. The design of structural metadata commonality using a data modeling method such as entity-relationship model diagramming is important in any data warehouse development effort. They detail metadata on each piece of data in the data warehouse. An essential component of a data warehouse/business intelligence system is the metadata and tools to manage and retrieve the metadata. Ralph Kimball describes metadata as the DNA of the data warehouse as metadata defines the elements of the data warehouse and how they work together.
Kimball et al. refer to three main categories of metadata: technical metadata, business metadata, and process metadata. Technical metadata is primarily definitional, while business metadata and process metadata are primarily descriptive. The categories sometimes overlap.
Technical metadata defines the objects and processes in a DW/BI system, as seen from a technical point of view. The technical metadata includes the system metadata, which defines the data structures such as tables, fields, data types, indexes, and partitions in the relational engine, as well as databases, dimensions, measures, and data mining models. Technical metadata defines the data model and the way it is displayed for the users, with the reports, schedules, distribution lists, and user security rights.
Business metadata is content from the data warehouse described in more user-friendly terms. The business metadata tells you what data you have, where they come from, what they mean and what their relationship is to other data in the data warehouse. Business metadata may also serve as documentation for the DW/BI system. Users who browse the data warehouse are primarily viewing the business metadata.
Process metadata is used to describe the results of various operations in the data warehouse. Within the ETL process, all key data from tasks is logged on execution. This includes start time, end time, CPU seconds used, disk reads, disk writes, and rows processed. When troubleshooting the ETL or query process, this sort of data becomes valuable. Process metadata is the fact measurement when building and using a DW/BI system. Some organizations make a living out of collecting and selling this sort of data to companies – in that case, the process metadata becomes the business metadata for the fact and dimension tables. Collecting process metadata is in the interest of business people who can use the data to identify the users of their products, which products they are using, and what level of service they are receiving.
Internet
The HTML format used to define web pages allows for the inclusion of a variety of types of metadata, from basic descriptive text, dates and keywords to further advanced metadata schemes such as the Dublin Core, e-GMS, and AGLS standards. Pages and files can also be geotagged with coordinates, categorized or tagged, including collaboratively such as with folksonomies.
When media has identifiers set or when such can be generated, information such as file tags and descriptions can be pulled or scraped from the Internet – for example about movies. Various online databases are aggregated and provide metadata for various data. The collaboratively built Wikidata has identifiers not just for media but also abstract concepts, various objects, and other entities, that can be looked up by humans and machines to retrieve useful information and to link knowledge in other knowledge bases and databases.
Metadata may be included in the page's header or in a separate file. Microformats allow metadata to be added to on-page data in a way that regular web users do not see, but computers, web crawlers and search engines can readily access. Many search engines are cautious about using metadata in their ranking algorithms because of exploitation of metadata and the practice of search engine optimization, SEO, to improve rankings. See the Meta element article for further discussion. This cautious attitude may be justified because, according to Doctorow, people do not exercise care and diligence when creating their own metadata, and metadata is part of a competitive environment in which it is used to promote the metadata creators' own purposes. Studies show that search engines respond to web pages with metadata implementations, and Google has an announcement on its site showing the meta tags that its search engine understands. Enterprise search startup Swiftype recognizes metadata as a relevance signal that webmasters can implement for their website-specific search engine, even releasing their own extension, known as Meta Tags 2.
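For illustration, the sketch below uses only Python's standard-library HTML parser to collect name/content pairs from meta elements in a page's header; the sample markup is made up, and real pages may also expose metadata through other mechanisms (link elements, microformats, JSON-LD) not handled here.

```python
from html.parser import HTMLParser

class MetaTagCollector(HTMLParser):
    """Collect name/content pairs from <meta> elements."""

    def __init__(self):
        super().__init__()
        self.meta = {}

    def handle_starttag(self, tag, attrs):
        if tag == "meta":
            attrs = dict(attrs)
            if "name" in attrs and "content" in attrs:
                self.meta[attrs["name"]] = attrs["content"]

# Illustrative markup only; a real page would be fetched over HTTP.
page = ('<html><head><meta name="description" content="A sample page">'
        '<meta name="keywords" content="metadata,example"></head></html>')
collector = MetaTagCollector()
collector.feed(page)
print(collector.meta)
```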
Broadcast industry
In the broadcast industry, metadata is linked to audio and video broadcast media to:
identify the media: clip or playlist names, duration, timecode, etc.
describe the content: notes regarding the quality of the video content, rating, description (for example, during a sports event, keywords like goal or red card will be associated with some clips)
classify media: metadata allows producers to sort the media or to easily and quickly find video content (a TV news program could urgently need some archive content for a subject). For example, the BBC has a large subject classification system, Lonclass, a customized version of the more general-purpose Universal Decimal Classification.
This metadata can be linked to the video media thanks to the video servers. Most major broadcast sporting events like the FIFA World Cup or the Olympic Games use this metadata to distribute their video content to TV stations through keywords. It is often the host broadcaster who is in charge of organizing metadata through its International Broadcast Centre and its video servers. This metadata is recorded with the images and entered by metadata operators (loggers) who associate, live, the metadata available in metadata grids through software (such as Multicam (LSM) or IPDirector, used during the FIFA World Cup or Olympic Games).
Geography
Metadata that describes geographic objects in electronic storage or format (such as datasets, maps, features, or documents with a geospatial component) has a history dating back to at least 1994. This class of metadata is described more fully on the geospatial metadata article.
Ecology and environment
Ecological and environmental metadata is intended to document the "who, what, when, where, why, and how" of data collection for a particular study. This typically means which organization or institution collected the data, what type of data, which date(s) the data was collected, the rationale for the data collection, and the methodology used for the data collection. Metadata should be generated in a format commonly used by the most relevant science community, such as Darwin Core, Ecological Metadata Language, or Dublin Core. Metadata editing tools exist to facilitate metadata generation (e.g. Metavist, Mercury, Morpho). Metadata should describe the provenance of the data (where they originated, as well as any transformations the data underwent) and how to give credit for (cite) the data products.
Digital music
When first released in 1982, Compact Discs only contained a Table Of Contents (TOC) with the number of tracks on the disc and their length in samples. Fourteen years later in 1996, a revision of the CD Red Book standard added CD-Text to carry additional metadata. But CD-Text was not widely adopted. Shortly thereafter, it became common for personal computers to retrieve metadata from external sources (e.g. CDDB, Gracenote) based on the TOC.
Digital audio formats such as digital audio files superseded music formats such as cassette tapes and CDs in the 2000s. Digital audio files could be labeled with more information than could be contained in just the file name. That descriptive information is called the audio tag or audio metadata in general. Computer programs specializing in adding or modifying this information are called tag editors. Metadata can be used to name, describe, catalog, and indicate ownership or copyright for a digital audio file, and its presence makes it much easier to locate a specific audio file within a group, typically through use of a search engine that accesses the metadata. As different digital audio formats were developed, attempts were made to standardize a specific location within the digital files where this information could be stored.
As a result, almost all digital audio formats, including MP3, broadcast WAV, and AIFF files, have similar standardized locations that can be populated with metadata. The metadata for compressed and uncompressed digital music is often encoded in the ID3 tag. Libraries such as TagLib support the MP3, Ogg Vorbis, FLAC, MPC, Speex, WavPack, TrueAudio, WAV, AIFF, MP4, and ASF file formats.
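As a small illustration of such a standardized location, the sketch below reads an ID3v1 tag, a fixed 128-byte block appended to the end of an MP3 file. ID3v2, the richer variable-length structure most tag editors write today, is not handled here, and "song.mp3" is a placeholder path.

```python
def read_id3v1(path):
    """Return basic ID3v1 fields from the last 128 bytes of a file, or None."""
    with open(path, "rb") as f:
        f.seek(-128, 2)                 # 128 bytes before end of file
        block = f.read(128)
    if block[:3] != b"TAG":
        return None                     # no ID3v1 tag present

    def text(raw):
        # Fields are fixed-width, NUL-padded byte strings.
        return raw.split(b"\x00")[0].decode("latin-1", "replace").strip()

    return {
        "title":  text(block[3:33]),
        "artist": text(block[33:63]),
        "album":  text(block[63:93]),
        "year":   text(block[93:97]),
    }

print(read_id3v1("song.mp3"))
```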
Cloud applications
With the availability of cloud applications, which include those to add metadata to content, metadata is increasingly available over the Internet.
Administration and management
Storage
Metadata can be stored either internally, in the same file or structure as the data (this is also called embedded metadata), or externally, in a separate file or field from the described data. A data repository typically stores the metadata detached from the data but can be designed to support embedded metadata approaches. Each option has advantages and disadvantages:
Internal storage means metadata always travels as part of the data they describe; thus, metadata is always available with the data, and can be manipulated locally. This method creates redundancy (precluding normalization), and does not allow managing all of a system's metadata in one place. It arguably increases consistency, since the metadata is readily changed whenever the data is changed.
External storage allows collocating metadata for all the contents, for example in a database, for more efficient searching and management. Redundancy can be avoided by normalizing the metadata's organization. In this approach, metadata can be united with the content when information is transferred, for example in Streaming media; or can be referenced (for example, as a web link) from the transferred content. On the downside, the division of the metadata from the data content, especially in standalone files that refer to their source metadata elsewhere, increases the opportunities for misalignments between the two, as changes to either may not be reflected in the other.
Metadata can be stored in either human-readable or binary form. Storing metadata in a human-readable format such as XML can be useful because users can understand and edit it without specialized tools. However, text-based formats are rarely optimized for storage capacity, communication time, or processing speed. A binary metadata format enables efficiency in all these respects, but requires special software to convert the binary information into human-readable content.
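The trade-off can be seen in a small sketch: the same made-up image metadata record is serialized once as human-readable JSON and once as a compact binary record whose field layout is an illustrative assumption rather than any standard.

```python
import json
import struct

# The same metadata record in a human-readable and a binary encoding;
# the binary field layout (>HHBI) is illustrative, not a standard format.
record = {"width": 1920, "height": 1080, "bit_depth": 8, "created": 1700000000}

as_text = json.dumps(record).encode("utf-8")
as_binary = struct.pack(">HHBI", record["width"], record["height"],
                        record["bit_depth"], record["created"])

print(len(as_text), "bytes as JSON  ", as_text)
print(len(as_binary), "bytes as binary", as_binary.hex())
```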
Database management
Each relational database system has its own mechanisms for storing metadata. Examples of relational-database metadata include:
Tables of all tables in a database, their names, sizes, and number of rows in each table.
Tables of columns in each database, what tables they are used in, and the type of data stored in each column.
In database terminology, this set of metadata is referred to as the catalog. The SQL standard specifies a uniform means to access the catalog, called the information schema, but not all databases implement it, even if they implement other aspects of the SQL standard. For an example of database-specific metadata access methods, see Oracle metadata. Programmatic access to metadata is possible using APIs such as JDBC or SchemaCrawler.
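As a small illustration of programmatic catalog access, the sketch below uses Python's built-in sqlite3 module; SQLite does not implement the SQL-standard information schema, so its own sqlite_master table and the table_info PRAGMA stand in for it here.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE books (isbn TEXT PRIMARY KEY, title TEXT, pages INTEGER)")

# Metadata about tables: names and the SQL that defined them.
for name, sql in conn.execute("SELECT name, sql FROM sqlite_master WHERE type='table'"):
    print(name, "->", sql)

# Metadata about columns in a table: position, name, and declared type.
for cid, name, col_type, *_ in conn.execute("PRAGMA table_info(books)"):
    print(cid, name, col_type)

conn.close()
```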
Popular culture
One of the first satirical examinations of the concept of metadata as we understand it today is American science fiction author Hal Draper's short story, "MS Fnd in a Lbry" (1961). Here, the knowledge of all Mankind is condensed into an object the size of a desk drawer; however, the magnitude of the metadata (e.g. a catalog of catalogs of..., as well as indexes and histories) eventually leads to dire yet humorous consequences for the human race. As a cautionary tale, the story prefigures the modern consequences of allowing metadata to become more important than the real data it is concerned with, and the risks inherent in that eventuality.
| Technology | Basics_4 | null |
18934432 | https://en.wikipedia.org/wiki/Cryptography | Cryptography | Cryptography, or cryptology (from Ancient Greek kryptós, "hidden, secret", and graphein, "to write", or -logia, "study", respectively), is the practice and study of techniques for secure communication in the presence of adversarial behavior. More generally, cryptography is about constructing and analyzing protocols that prevent third parties or the public from reading private messages. Modern cryptography exists at the intersection of the disciplines of mathematics, computer science, information security, electrical engineering, digital signal processing, physics, and others. Core concepts related to information security (data confidentiality, data integrity, authentication, and non-repudiation) are also central to cryptography. Practical applications of cryptography include electronic commerce, chip-based payment cards, digital currencies, computer passwords, and military communications.
Cryptography prior to the modern age was effectively synonymous with encryption, converting readable information (plaintext) to unintelligible nonsense text (ciphertext), which can only be read by reversing the process (decryption). The sender of an encrypted (coded) message shares the decryption (decoding) technique only with the intended recipients to preclude access from adversaries. The cryptography literature often uses the names "Alice" (or "A") for the sender, "Bob" (or "B") for the intended recipient, and "Eve" (or "E") for the eavesdropping adversary. Since the development of rotor cipher machines in World War I and the advent of computers in World War II, cryptography methods have become increasingly complex and their applications more varied.
Modern cryptography is heavily based on mathematical theory and computer science practice; cryptographic algorithms are designed around computational hardness assumptions, making such algorithms hard to break in actual practice by any adversary. While it is theoretically possible to break into a well-designed system, it is infeasible in actual practice to do so. Such schemes, if well designed, are therefore termed "computationally secure". Theoretical advances (e.g., improvements in integer factorization algorithms) and faster computing technology require these designs to be continually reevaluated and, if necessary, adapted. Information-theoretically secure schemes that cannot be broken even with unlimited computing power, such as the one-time pad, are much more difficult to use in practice than the best theoretically breakable but computationally secure schemes.
The growth of cryptographic technology has raised a number of legal issues in the Information Age. Cryptography's potential for use as a tool for espionage and sedition has led many governments to classify it as a weapon and to limit or even prohibit its use and export. In some jurisdictions where the use of cryptography is legal, laws permit investigators to compel the disclosure of encryption keys for documents relevant to an investigation. Cryptography also plays a major role in digital rights management and copyright infringement disputes with regard to digital media.
Terminology
The first use of the term "cryptograph" (as opposed to "cryptogram") dates back to the 19th century—originating from "The Gold-Bug", a story by Edgar Allan Poe.
Until modern times, cryptography referred almost exclusively to "encryption", which is the process of converting ordinary information (called plaintext) into an unintelligible form (called ciphertext). Decryption is the reverse, in other words, moving from the unintelligible ciphertext back to plaintext. A cipher (or cypher) is a pair of algorithms that carry out the encryption and the reversing decryption. The detailed operation of a cipher is controlled both by the algorithm and, in each instance, by a "key". The key is a secret (ideally known only to the communicants), usually a string of characters (ideally short so it can be remembered by the user), which is needed to decrypt the ciphertext. In formal mathematical terms, a "cryptosystem" is the ordered list of elements of finite possible plaintexts, finite possible cyphertexts, finite possible keys, and the encryption and decryption algorithms that correspond to each key. Keys are important both formally and in actual practice, as ciphers without variable keys can be trivially broken with only the knowledge of the cipher used and are therefore useless (or even counter-productive) for most purposes. Historically, ciphers were often used directly for encryption or decryption without additional procedures such as authentication or integrity checks.
There are two main types of cryptosystems: symmetric and asymmetric. In symmetric systems, the only ones known until the 1970s, the same secret key encrypts and decrypts a message. Data manipulation in symmetric systems is significantly faster than in asymmetric systems. Asymmetric systems use a "public key" to encrypt a message and a related "private key" to decrypt it. The advantage of asymmetric systems is that the public key can be freely published, allowing parties to establish secure communication without having a shared secret key. In practice, asymmetric systems are used to first exchange a secret key, and then secure communication proceeds via a more efficient symmetric system using that key. Examples of asymmetric systems include Diffie–Hellman key exchange, RSA (Rivest–Shamir–Adleman), ECC (Elliptic Curve Cryptography), and Post-quantum cryptography. Secure symmetric algorithms include the commonly used AES (Advanced Encryption Standard) which replaced the older DES (Data Encryption Standard). Insecure symmetric algorithms include children's language tangling schemes such as Pig Latin or other cant, and all historical cryptographic schemes, however seriously intended, prior to the invention of the one-time pad early in the 20th century.
In colloquial use, the term "code" is often used to mean any method of encryption or concealment of meaning. However, in cryptography, code has a more specific meaning: the replacement of a unit of plaintext (i.e., a meaningful word or phrase) with a code word (for example, "wallaby" replaces "attack at dawn"). A cypher, in contrast, is a scheme for changing or substituting an element below such a level (a letter, a syllable, or a pair of letters, etc.) to produce a cyphertext.
Cryptanalysis is the term used for the study of methods for obtaining the meaning of encrypted information without access to the key normally required to do so; i.e., it is the study of how to "crack" encryption algorithms or their implementations.
Some use the terms "cryptography" and "cryptology" interchangeably in English, while others (including US military practice generally) use "cryptography" to refer specifically to the use and practice of cryptographic techniques and "cryptology" to refer to the combined study of cryptography and cryptanalysis. English is more flexible than several other languages, in which "cryptology" (done by cryptologists) is always used in the second sense above. Steganography is sometimes also included within cryptology.
The study of characteristics of languages that have some application in cryptography or cryptology (e.g. frequency data, letter combinations, universal patterns, etc.) is called cryptolinguistics. Cryptolinguistics is especially used in military intelligence applications for deciphering foreign communications.
History
Before the modern era, cryptography focused on message confidentiality (i.e., encryption)—conversion of messages from a comprehensible form into an incomprehensible one and back again at the other end, rendering it unreadable by interceptors or eavesdroppers without secret knowledge (namely the key needed for decryption of that message). Encryption attempted to ensure secrecy in communications, such as those of spies, military leaders, and diplomats. In recent decades, the field has expanded beyond confidentiality concerns to include techniques for message integrity checking, sender/receiver identity authentication, digital signatures, interactive proofs and secure computation, among others.
Classic cryptography
The main classical cipher types are transposition ciphers, which rearrange the order of letters in a message (e.g., 'hello world' becomes 'ehlol owrdl' in a trivially simple rearrangement scheme), and substitution ciphers, which systematically replace letters or groups of letters with other letters or groups of letters (e.g., 'fly at once' becomes 'gmz bu podf' by replacing each letter with the one following it in the Latin alphabet). Simple versions of either have never offered much confidentiality from enterprising opponents. An early substitution cipher was the Caesar cipher, in which each letter in the plaintext was replaced by a letter some fixed number of positions further down the alphabet. Suetonius reports that Julius Caesar used it with a shift of three to communicate with his generals. Atbash is an example of an early Hebrew cipher. The earliest known use of cryptography is some carved ciphertext on stone in Egypt, but this may have been done for the amusement of literate observers rather than as a way of concealing information.
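A minimal sketch of such a shift substitution in Python, reproducing the 'fly at once' example above with a shift of one (illustrative only; a cipher of this kind offers no real security):

import string

ALPHABET = string.ascii_lowercase

def shift_cipher(text, shift):
    # Replace each letter with the one `shift` positions further down the
    # alphabet, wrapping around at 'z'; other characters pass through unchanged.
    result = []
    for ch in text.lower():
        if ch in ALPHABET:
            result.append(ALPHABET[(ALPHABET.index(ch) + shift) % 26])
        else:
            result.append(ch)
    return "".join(result)

ciphertext = shift_cipher("fly at once", 1)
print(ciphertext)               # gmz bu podf
print(shift_cipher(ciphertext, -1))  # fly at once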
The Greeks of Classical times are said to have known of ciphers (e.g., the scytale transposition cipher claimed to have been used by the Spartan military). Steganography (i.e., hiding even the existence of a message so as to keep it confidential) was also first developed in ancient times. An early example, from Herodotus, was a message tattooed on a slave's shaved head and concealed under the regrown hair. Other steganography methods involve 'hiding in plain sight,' such as using a music cipher to disguise an encrypted message within a regular piece of sheet music. More modern examples of steganography include the use of invisible ink, microdots, and digital watermarks to conceal information.
In India, the 2000-year-old Kama Sutra of Vātsyāyana speaks of two different kinds of ciphers called Kautiliyam and Mulavediya. In the Kautiliyam, the cipher letter substitutions are based on phonetic relations, such as vowels becoming consonants. In the Mulavediya, the cipher alphabet consists of pairing letters and using the reciprocal ones.
In Sassanid Persia, there were two secret scripts, according to the Muslim author Ibn al-Nadim: the šāh-dabīrīya (literally "King's script") which was used for official correspondence, and the rāz-saharīya which was used to communicate secret messages with other countries.
David Kahn notes in The Codebreakers that modern cryptology originated among the Arabs, the first people to systematically document cryptanalytic methods. Al-Khalil (717–786) wrote the Book of Cryptographic Messages, which contains the first use of permutations and combinations to list all possible Arabic words with and without vowels.
Ciphertexts produced by a classical cipher (and some modern ciphers) will reveal statistical information about the plaintext, and that information can often be used to break the cipher. After the discovery of frequency analysis, nearly all such ciphers could be broken by an informed attacker. Such classical ciphers still enjoy popularity today, though mostly as puzzles (see cryptogram). The Arab mathematician and polymath Al-Kindi wrote a book on cryptography entitled Risalah fi Istikhraj al-Mu'amma (Manuscript for the Deciphering Cryptographic Messages), which described the first known use of frequency analysis cryptanalysis techniques.
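The core of frequency analysis can be sketched in a few lines of Python: count how often each letter appears in the ciphertext and compare the ranking against the typical letter frequencies of the suspected plaintext language (the ciphertext string below is only a placeholder):

from collections import Counter

ciphertext = "gmz bu podf gmz bu podf"  # placeholder ciphertext for illustration

# Count letter occurrences, ignoring spaces and punctuation.
counts = Counter(ch for ch in ciphertext.lower() if ch.isalpha())

# The most common ciphertext letters are candidate substitutes for the most
# common letters of the target language (e.g. 'e', 't', 'a' in English).
for letter, count in counts.most_common(5):
    print(letter, count)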
Language letter frequencies may offer little help for some extended historical encryption techniques such as homophonic cipher that tend to flatten the frequency distribution. For those ciphers, language letter group (or n-gram) frequencies may provide an attack.
Essentially all ciphers remained vulnerable to cryptanalysis using the frequency analysis technique until the development of the polyalphabetic cipher, most clearly by Leon Battista Alberti around the year 1467, though there is some indication that it was already known to Al-Kindi. Alberti's innovation was to use different ciphers (i.e., substitution alphabets) for various parts of a message (perhaps for each successive plaintext letter at the limit). He also invented what was probably the first automatic cipher device, a wheel that implemented a partial realization of his invention. In the Vigenère cipher, a polyalphabetic cipher, encryption uses a key word, which controls letter substitution depending on which letter of the key word is used. In the mid-19th century Charles Babbage showed that the Vigenère cipher was vulnerable to Kasiski examination, but this was first published about ten years later by Friedrich Kasiski.
Although frequency analysis can be a powerful and general technique against many ciphers, encryption has still often been effective in practice, as many a would-be cryptanalyst was unaware of the technique. Breaking a message without using frequency analysis essentially required knowledge of the cipher used and perhaps of the key involved, thus making espionage, bribery, burglary, defection, etc., more attractive approaches to the cryptanalytically uninformed. It was finally explicitly recognized in the 19th century that secrecy of a cipher's algorithm is neither a sensible nor a practical safeguard of message security; in fact, it was further realized that any adequate cryptographic scheme (including ciphers) should remain secure even if the adversary fully understands the cipher algorithm itself. Security of the key used should alone be sufficient for a good cipher to maintain confidentiality under an attack. This fundamental principle was first explicitly stated in 1883 by Auguste Kerckhoffs and is generally called Kerckhoffs's Principle; alternatively and more bluntly, it was restated by Claude Shannon, the inventor of information theory and the fundamentals of theoretical cryptography, as Shannon's Maxim—'the enemy knows the system'.
Different physical devices and aids have been used to assist with ciphers. One of the earliest may have been the scytale of ancient Greece, a rod supposedly used by the Spartans as an aid for a transposition cipher. In medieval times, other aids were invented such as the cipher grille, which was also used for a kind of steganography. With the invention of polyalphabetic ciphers came more sophisticated aids such as Alberti's own cipher disk, Johannes Trithemius' tabula recta scheme, and Thomas Jefferson's wheel cypher (not publicly known, and reinvented independently by Bazeries around 1900). Many mechanical encryption/decryption devices were invented early in the 20th century, and several patented, among them rotor machines—famously including the Enigma machine used by the German government and military from the late 1920s and during World War II. The ciphers implemented by better quality examples of these machine designs brought about a substantial increase in cryptanalytic difficulty after WWI.
Early computer-era cryptography
Cryptanalysis of the new mechanical ciphering devices proved to be both difficult and laborious. In the United Kingdom, cryptanalytic efforts at Bletchley Park during WWII spurred the development of more efficient means for carrying out repetitive tasks, such as military code breaking (decryption). This culminated in the development of the Colossus, the world's first fully electronic, digital, programmable computer, which assisted in the decryption of ciphers generated by the German Army's Lorenz SZ40/42 machine.
Extensive open academic research into cryptography is relatively recent, beginning in the mid-1970s. In the early 1970s IBM personnel designed the Data Encryption Standard (DES) algorithm that became the first federal government cryptography standard in the United States. In 1976 Whitfield Diffie and Martin Hellman published the Diffie–Hellman key exchange algorithm. In 1977 the RSA algorithm was published in Martin Gardner's Scientific American column. Since then, cryptography has become a widely used tool in communications, computer networks, and computer security generally.
Some modern cryptographic techniques can only keep their keys secret if certain mathematical problems are intractable, such as the integer factorization or the discrete logarithm problems, so there are deep connections with abstract mathematics. There are very few cryptosystems that are proven to be unconditionally secure. The one-time pad is one, and was proven to be so by Claude Shannon. There are a few important algorithms that have been proven secure under certain assumptions. For example, the infeasibility of factoring extremely large integers is the basis for believing that RSA and some other systems are secure, but even so, proof of unbreakability is unavailable since the underlying mathematical problem remains open. In practice, these are widely used, and are believed unbreakable in practice by most competent observers. There are systems similar to RSA, such as one by Michael O. Rabin, that are provably secure provided factoring n = pq is impossible, but they are quite unusable in practice. The discrete logarithm problem is the basis for believing some other cryptosystems are secure, and again, there are related, less practical systems that are provably secure relative to the solvability or insolvability of the discrete logarithm problem.
As well as being aware of cryptographic history, cryptographic algorithm and system designers must also sensibly consider probable future developments while working on their designs. For instance, continuous improvements in computer processing power have increased the scope of brute-force attacks, so the key lengths specified for new systems must keep advancing accordingly. The potential impact of quantum computing is already being considered by some cryptographic system designers developing post-quantum cryptography. The announced imminence of small implementations of these machines may be making the need for preemptive caution rather more than merely speculative.
Modern cryptography
Claude Shannon's two papers, his 1948 paper on information theory, and especially his 1949 paper on cryptography, laid the foundations of modern cryptography and provided a mathematical basis for future cryptography. His 1949 paper has been noted as having provided a "solid theoretical basis for cryptography and for cryptanalysis", and as having turned cryptography from an "art to a science". As a result of his contributions and work, he has been described as the "founding father of modern cryptography".
Prior to the early 20th century, cryptography was mainly concerned with linguistic and lexicographic patterns. Since then cryptography has broadened in scope, and now makes extensive use of mathematical subdisciplines, including information theory, computational complexity, statistics, combinatorics, abstract algebra, number theory, and finite mathematics. Cryptography is also a branch of engineering, but an unusual one since it deals with active, intelligent, and malevolent opposition; other kinds of engineering (e.g., civil or chemical engineering) need deal only with neutral natural forces. There is also active research examining the relationship between cryptographic problems and quantum physics.
Just as the development of digital computers and electronics helped in cryptanalysis, it made possible much more complex ciphers. Furthermore, computers allowed for the encryption of any kind of data representable in any binary format, unlike classical ciphers which only encrypted written language texts; this was new and significant. Computer use has thus supplanted linguistic cryptography, both for cipher design and cryptanalysis. Many computer ciphers can be characterized by their operation on binary bit sequences (sometimes in groups or blocks), unlike classical and mechanical schemes, which generally manipulate traditional characters (i.e., letters and digits) directly. However, computers have also assisted cryptanalysis, which has compensated to some extent for increased cipher complexity. Nonetheless, good modern ciphers have stayed ahead of cryptanalysis; it is typically the case that use of a quality cipher is very efficient (i.e., fast and requiring few resources, such as memory or CPU capability), while breaking it requires an effort many orders of magnitude larger, and vastly larger than that required for any classical cipher, making cryptanalysis so inefficient and impractical as to be effectively impossible.
Symmetric-key cryptography
Symmetric-key cryptography refers to encryption methods in which both the sender and receiver share the same key (or, less commonly, in which their keys are different, but related in an easily computable way). This was the only kind of encryption publicly known until June 1976.
Symmetric key ciphers are implemented as either block ciphers or stream ciphers. A block cipher enciphers input in blocks of plaintext as opposed to individual characters, the input form used by a stream cipher.
The Data Encryption Standard (DES) and the Advanced Encryption Standard (AES) are block cipher designs that have been designated cryptography standards by the US government (though DES's designation was finally withdrawn after the AES was adopted). Despite its deprecation as an official standard, DES (especially its still-approved and much more secure triple-DES variant) remains quite popular; it is used across a wide range of applications, from ATM encryption to e-mail privacy and secure remote access. Many other block ciphers have been designed and released, with considerable variation in quality. Many, even some designed by capable practitioners, have been thoroughly broken, such as FEAL.
Stream ciphers, in contrast to the 'block' type, create an arbitrarily long stream of key material, which is combined with the plaintext bit-by-bit or character-by-character, somewhat like the one-time pad. In a stream cipher, the output stream is created based on a hidden internal state that changes as the cipher operates. That internal state is initially set up using the secret key material. RC4 is a widely used stream cipher. Block ciphers can be used as stream ciphers by generating blocks of a keystream (in place of a Pseudorandom number generator) and applying an XOR operation to each bit of the plaintext with each bit of the keystream.
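A toy sketch of the keystream idea follows: the snippet derives a keystream by hashing a secret key together with an incrementing counter (SHA-256 stands in for a real block cipher purely for illustration; this is not a vetted construction and should not be used for actual encryption) and XORs it with the plaintext, so applying the same operation twice recovers the message:

import hashlib

def keystream(key, length):
    # Generate `length` bytes by hashing the key with an incrementing counter,
    # imitating how a block cipher in counter mode produces keystream blocks.
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def xor_with_keystream(key, data):
    ks = keystream(key, len(data))
    return bytes(a ^ b for a, b in zip(data, ks))

key = b"example key (illustrative only)"
ciphertext = xor_with_keystream(key, b"attack at dawn")
print(xor_with_keystream(key, ciphertext))  # b'attack at dawn'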
Cryptographic hash functions are a third type of cryptographic algorithm (discussed in more detail in their own section below): they take a message of any length as input and output a short, fixed-length hash, which can be used in (for example) a digital signature, and for a good hash function an attacker cannot find two messages that produce the same hash. Message authentication codes (MACs) are much like cryptographic hash functions, except that a secret key can be used to authenticate the hash value upon receipt; this additional complication blocks an attack scheme against bare digest algorithms, and so has been thought worth the effort.
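A short sketch of a MAC in practice, using Python's standard hmac module (the key and message are placeholders):

import hmac
import hashlib

key = b"shared secret key"          # placeholder key known to sender and receiver
message = b"wire 100 EUR to Bob"

# The sender computes a tag over the message with the shared key.
tag = hmac.new(key, message, hashlib.sha256).hexdigest()

# The receiver recomputes the tag and compares in constant time; any change to
# the message or use of the wrong key makes verification fail.
received_tag = tag
print(hmac.compare_digest(received_tag,
                          hmac.new(key, message, hashlib.sha256).hexdigest()))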
Public-key cryptography
Symmetric-key cryptosystems use the same key for encryption and decryption of a message, although a message or group of messages can have a different key than others. A significant disadvantage of symmetric ciphers is the key management necessary to use them securely. Each distinct pair of communicating parties must, ideally, share a different key, and perhaps for each ciphertext exchanged as well. The number of keys required increases as the square of the number of network members, which very quickly requires complex key management schemes to keep them all consistent and secret.
In a groundbreaking 1976 paper, Whitfield Diffie and Martin Hellman proposed the notion of public-key (also, more generally, called asymmetric key) cryptography in which two different but mathematically related keys are used—a public key and a private key. A public key system is so constructed that calculation of one key (the 'private key') is computationally infeasible from the other (the 'public key'), even though they are necessarily related. Instead, both keys are generated secretly, as an interrelated pair. The historian David Kahn described public-key cryptography as "the most revolutionary new concept in the field since polyalphabetic substitution emerged in the Renaissance".
In public-key cryptosystems, the public key may be freely distributed, while its paired private key must remain secret. In a public-key encryption system, the public key is used for encryption, while the private or secret key is used for decryption. While Diffie and Hellman could not find such a system, they showed that public-key cryptography was indeed possible by presenting the Diffie–Hellman key exchange protocol, a solution that is now widely used in secure communications to allow two parties to secretly agree on a shared encryption key.
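The mechanics of Diffie–Hellman key exchange can be illustrated with deliberately tiny toy numbers; real deployments use groups thousands of bits long, and the values here are only for demonstration:

# Public parameters agreed on in advance (toy-sized; insecure on purpose).
p = 23   # a prime modulus
g = 5    # a generator of the group

# Each party picks a private exponent and publishes g^x mod p.
alice_private = 6
bob_private = 15
alice_public = pow(g, alice_private, p)
bob_public = pow(g, bob_private, p)

# Each side combines its own private value with the other's public value;
# both arrive at the same shared secret without ever transmitting it.
alice_shared = pow(bob_public, alice_private, p)
bob_shared = pow(alice_public, bob_private, p)
assert alice_shared == bob_shared
print(alice_shared)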
The X.509 standard defines the most commonly used format for public key certificates.
Diffie and Hellman's publication sparked widespread academic efforts in finding a practical public-key encryption system. This race was finally won in 1978 by Ronald Rivest, Adi Shamir, and Len Adleman, whose solution has since become known as the RSA algorithm.
The Diffie–Hellman and RSA algorithms, in addition to being the first publicly known examples of high-quality public-key algorithms, have been among the most widely used. Other asymmetric-key algorithms include the Cramer–Shoup cryptosystem, ElGamal encryption, and various elliptic curve techniques.
A document published in 1997 by the Government Communications Headquarters (GCHQ), a British intelligence organization, revealed that cryptographers at GCHQ had anticipated several academic developments. Reportedly, around 1970, James H. Ellis had conceived the principles of asymmetric key cryptography. In 1973, Clifford Cocks invented a solution that was very similar in design rationale to RSA. In 1974, Malcolm J. Williamson is claimed to have developed the Diffie–Hellman key exchange.
Public-key cryptography is also used for implementing digital signature schemes. A digital signature is reminiscent of an ordinary signature; they both have the characteristic of being easy for a user to produce, but difficult for anyone else to forge. Digital signatures can also be permanently tied to the content of the message being signed; they cannot then be 'moved' from one document to another, for any attempt will be detectable. In digital signature schemes, there are two algorithms: one for signing, in which a secret key is used to process the message (or a hash of the message, or both), and one for verification, in which the matching public key is used with the message to check the validity of the signature. RSA and DSA are two of the most popular digital signature schemes. Digital signatures are central to the operation of public key infrastructures and many network security schemes (e.g., SSL/TLS, many VPNs, etc.).
Public-key algorithms are most often based on the computational complexity of "hard" problems, often from number theory. For example, the hardness of RSA is related to the integer factorization problem, while Diffie–Hellman and DSA are related to the discrete logarithm problem. The security of elliptic curve cryptography is based on number theoretic problems involving elliptic curves. Because of the difficulty of the underlying problems, most public-key algorithms involve operations such as modular multiplication and exponentiation, which are much more computationally expensive than the techniques used in most block ciphers, especially with typical key sizes. As a result, public-key cryptosystems are commonly hybrid cryptosystems, in which a fast high-quality symmetric-key encryption algorithm is used for the message itself, while the relevant symmetric key is sent with the message, but encrypted using a public-key algorithm. Similarly, hybrid signature schemes are often used, in which a cryptographic hash function is computed, and only the resulting hash is digitally signed.
Cryptographic hash functions
Cryptographic hash functions are functions that take a variable-length input and return a fixed-length output, which can be used in, for example, a digital signature. For a hash function to be secure, it must be difficult to compute two inputs that hash to the same value (collision resistance) and to compute an input that hashes to a given output (preimage resistance). MD4 is a long-used hash function that is now broken; MD5, a strengthened variant of MD4, is also widely used but broken in practice. The US National Security Agency developed the Secure Hash Algorithm series of MD5-like hash functions: SHA-0 was a flawed algorithm that the agency withdrew; SHA-1 is widely deployed and more secure than MD5, but cryptanalysts have identified attacks against it; the SHA-2 family improves on SHA-1, but is vulnerable to clashes as of 2011; and the US standards authority thought it "prudent" from a security perspective to develop a new standard to "significantly improve the robustness of NIST's overall hash algorithm toolkit." Thus, a hash function design competition was meant to select a new U.S. national standard, to be called SHA-3, by 2012. The competition ended on October 2, 2012, when the NIST announced that Keccak would be the new SHA-3 hash algorithm. Unlike block and stream ciphers that are invertible, cryptographic hash functions produce a hashed output that cannot be used to retrieve the original input data. Cryptographic hash functions are used to verify the authenticity of data retrieved from an untrusted source or to add a layer of security.
Cryptanalysis
The goal of cryptanalysis is to find some weakness or insecurity in a cryptographic scheme, thus permitting its subversion or evasion.
It is a common misconception that every encryption method can be broken. In connection with his WWII work at Bell Labs, Claude Shannon proved that the one-time pad cipher is unbreakable, provided the key material is truly random, never reused, kept secret from all possible attackers, and of equal or greater length than the message. Most ciphers, apart from the one-time pad, can be broken with enough computational effort by brute force attack, but the amount of effort needed may be exponentially dependent on the key size, as compared to the effort needed to make use of the cipher. In such cases, effective security could be achieved if it is proven that the effort required (i.e., "work factor", in Shannon's terms) is beyond the ability of any adversary. This means it must be shown that no efficient method (as opposed to the time-consuming brute force method) can be found to break the cipher. Since no such proof has been found to date, the one-time-pad remains the only theoretically unbreakable cipher. Although well-implemented one-time-pad encryption cannot be broken, traffic analysis is still possible.
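A minimal sketch of the one-time pad, using the operating system's random source for the key; the scheme is only unbreakable if the key is truly random, at least as long as the message, kept secret, and never reused:

import os

message = b"attack at dawn"

# Key must be truly random, as long as the message, and used exactly once.
key = os.urandom(len(message))

# Encryption and decryption are the same XOR operation.
ciphertext = bytes(m ^ k for m, k in zip(message, key))
recovered = bytes(c ^ k for c, k in zip(ciphertext, key))

assert recovered == message
print(ciphertext.hex())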
There are a wide variety of cryptanalytic attacks, and they can be classified in any of several ways. A common distinction turns on what Eve (an attacker) knows and what capabilities are available. In a ciphertext-only attack, Eve has access only to the ciphertext (good modern cryptosystems are usually effectively immune to ciphertext-only attacks). In a known-plaintext attack, Eve has access to a ciphertext and its corresponding plaintext (or to many such pairs). In a chosen-plaintext attack, Eve may choose a plaintext and learn its corresponding ciphertext (perhaps many times); an example is gardening, used by the British during WWII. In a chosen-ciphertext attack, Eve may be able to choose ciphertexts and learn their corresponding plaintexts. Finally, in a man-in-the-middle attack Eve gets in between Alice (the sender) and Bob (the recipient), accesses and modifies the traffic, and then forwards it to the recipient. Also important, often overwhelmingly so, are mistakes (generally in the design or use of one of the protocols involved).
Cryptanalysis of symmetric-key ciphers typically involves looking for attacks against the block ciphers or stream ciphers that are more efficient than any attack that could be mounted against a perfect cipher. For example, a simple brute force attack against DES requires one known plaintext and 2^55 decryptions, trying approximately half of the possible keys, to reach a point at which chances are better than even that the key sought will have been found. But this may not be enough assurance; a linear cryptanalysis attack against DES requires 2^43 known plaintexts (with their corresponding ciphertexts) and approximately 2^43 DES operations. This is a considerable improvement over brute force attacks.
Public-key algorithms are based on the computational difficulty of various problems. The most famous of these are the difficulty of integer factorization of semiprimes and the difficulty of calculating discrete logarithms, both of which are not yet proven to be solvable in polynomial time (P) using only a classical Turing-complete computer. Much public-key cryptanalysis concerns designing algorithms in P that can solve these problems, or using other technologies, such as quantum computers. For instance, the best-known algorithms for solving the elliptic curve-based version of discrete logarithm are much more time-consuming than the best-known algorithms for factoring, at least for problems of more or less equivalent size. Thus, to achieve an equivalent strength of encryption, techniques that depend upon the difficulty of factoring large composite numbers, such as the RSA cryptosystem, require larger keys than elliptic curve techniques. For this reason, public-key cryptosystems based on elliptic curves have become popular since their invention in the mid-1990s.
While pure cryptanalysis uses weaknesses in the algorithms themselves, other attacks on cryptosystems are based on actual use of the algorithms in real devices, and are called side-channel attacks. If a cryptanalyst has access to, for example, the amount of time the device took to encrypt a number of plaintexts or report an error in a password or PIN character, they may be able to use a timing attack to break a cipher that is otherwise resistant to analysis. An attacker might also study the pattern and length of messages to derive valuable information; this is known as traffic analysis and can be quite useful to an alert adversary. Poor administration of a cryptosystem, such as permitting too short keys, will make any system vulnerable, regardless of other virtues. Social engineering and other attacks against humans (e.g., bribery, extortion, blackmail, espionage, rubber-hose cryptanalysis or torture) are usually employed because they are far more cost-effective and feasible to perform in a reasonable amount of time than pure cryptanalysis.
Cryptographic primitives
Much of the theoretical work in cryptography concerns cryptographic primitives—algorithms with basic cryptographic properties—and their relationship to other cryptographic problems. More complicated cryptographic tools are then built from these basic primitives. These primitives provide fundamental properties, which are used to develop more complex tools called cryptosystems or cryptographic protocols, which guarantee one or more high-level security properties. Note, however, that the distinction between cryptographic primitives and cryptosystems is quite arbitrary; for example, the RSA algorithm is sometimes considered a cryptosystem, and sometimes a primitive. Typical examples of cryptographic primitives include pseudorandom functions, one-way functions, etc.
Cryptosystems
One or more cryptographic primitives are often used to develop a more complex algorithm, called a cryptographic system, or cryptosystem. Cryptosystems (e.g., El-Gamal encryption) are designed to provide particular functionality (e.g., public key encryption) while guaranteeing certain security properties (e.g., chosen-plaintext attack (CPA) security in the random oracle model). Cryptosystems use the properties of the underlying cryptographic primitives to support the system's security properties. As the distinction between primitives and cryptosystems is somewhat arbitrary, a sophisticated cryptosystem can be derived from a combination of several more primitive cryptosystems. In many cases, the cryptosystem's structure involves back and forth communication among two or more parties in space (e.g., between the sender of a secure message and its receiver) or across time (e.g., cryptographically protected backup data). Such cryptosystems are sometimes called cryptographic protocols.
Some widely known cryptosystems include RSA, Schnorr signature, ElGamal encryption, and Pretty Good Privacy (PGP). More complex cryptosystems include electronic cash systems, signcryption systems, etc. Some more 'theoretical' cryptosystems include interactive proof systems, (like zero-knowledge proofs) and systems for secret sharing.
Lightweight cryptography
Lightweight cryptography (LWC) concerns cryptographic algorithms developed for strictly constrained environments. The growth of the Internet of Things (IoT) has spurred research into the development of lightweight algorithms that are better suited for such environments, which impose strict constraints on power consumption and processing power while still requiring security. Algorithms such as PRESENT, AES, and SPECK are examples of the many LWC algorithms that have been developed to achieve the standard set by the National Institute of Standards and Technology.
Applications
Cryptography is widely used on the internet to help protect user data and prevent eavesdropping. To ensure secrecy during transmission, many systems use private-key cryptography to protect transmitted information. With public-key systems, one can maintain secrecy without a master key or a large number of keys. Some disk-encryption tools, such as BitLocker and VeraCrypt, do not primarily rely on public–private key cryptography; VeraCrypt, for example, uses a password hash to generate the single private key, although it can also be configured to operate with public–private key systems. The open-source encryption library OpenSSL, written in C, provides free and open-source encryption software and tools. The most commonly used symmetric cipher is AES, which has hardware acceleration on all x86-based processors that support AES-NI. A close contender is ChaCha20-Poly1305, a stream cipher construction commonly used on mobile devices because many ARM-based processors lack the AES-NI instruction set extension.
Cybersecurity
Cryptography can be used to secure communications by encrypting them. Websites use encryption via HTTPS. "End-to-end" encryption, where only sender and receiver can read messages, is implemented for email in Pretty Good Privacy and for secure messaging in general in WhatsApp, Signal and Telegram.
Operating systems use encryption to keep passwords secret, conceal parts of the system, and ensure that software updates are truly from the system maker. Instead of storing plaintext passwords, computer systems store hashes thereof; then, when a user logs in, the system passes the given password through a cryptographic hash function and compares it to the hashed value on file. In this manner, neither the system nor an attacker has at any point access to the password in plaintext.
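A sketch of the password-checking pattern described above, using a salted, slow hash from Python's standard library (PBKDF2 here stands in for whichever password hashing scheme a real system chooses; the iteration count and strings are illustrative):

import hashlib
import hmac
import os

def hash_password(password, salt=None):
    # A per-user random salt prevents identical passwords from producing
    # identical hashes and defeats precomputed lookup tables.
    salt = salt or os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)
    return salt, digest

def verify_password(password, salt, stored_digest):
    _, candidate = hash_password(password, salt)
    return hmac.compare_digest(candidate, stored_digest)

# At account creation the system stores only (salt, digest), never the password.
salt, digest = hash_password("correct horse battery staple")
print(verify_password("correct horse battery staple", salt, digest))  # True
print(verify_password("wrong guess", salt, digest))                   # False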
Encryption is sometimes used to protect one's entire drive. For example, University College London has implemented BitLocker (a program by Microsoft) to render drive data opaque without users logging in.
Cryptocurrencies and cryptoeconomics
Cryptographic techniques enable cryptocurrency technologies, such as distributed ledger technologies (e.g., blockchains), which finance cryptoeconomics applications such as decentralized finance (DeFi). Key cryptographic techniques that enable cryptocurrencies and cryptoeconomics include, but are not limited to: cryptographic keys, cryptographic hash function, asymmetric (public key) encryption, Multi-Factor Authentication (MFA), End-to-End Encryption (E2EE), and Zero Knowledge Proofs (ZKP).
Legal issues
Prohibitions
Cryptography has long been of interest to intelligence gathering and law enforcement agencies. Secret communications may be criminal or even treasonous. Because of its facilitation of privacy, and the diminution of privacy attendant on its prohibition, cryptography is also of considerable interest to civil rights supporters. Accordingly, there has been a history of controversial legal issues surrounding cryptography, especially since the advent of inexpensive computers has made widespread access to high-quality cryptography possible.
In some countries, even the domestic use of cryptography is, or has been, restricted. Until 1999, France significantly restricted the use of cryptography domestically, though it has since relaxed many of these rules. In China and Iran, a license is still required to use cryptography. Many countries have tight restrictions on the use of cryptography. Among the more restrictive are laws in Belarus, Kazakhstan, Mongolia, Pakistan, Singapore, Tunisia, and Vietnam.
In the United States, cryptography is legal for domestic use, but there has been much conflict over legal issues related to cryptography. One particularly important issue has been the export of cryptography and cryptographic software and hardware. Probably because of the importance of cryptanalysis in World War II and an expectation that cryptography would continue to be important for national security, many Western governments have, at some point, strictly regulated export of cryptography. After World War II, it was illegal in the US to sell or distribute encryption technology overseas; in fact, encryption was designated as auxiliary military equipment and put on the United States Munitions List. Until the development of the personal computer, asymmetric key algorithms (i.e., public key techniques), and the Internet, this was not especially problematic. However, as the Internet grew and computers became more widely available, high-quality encryption techniques became well known around the globe.
Export controls
In the 1990s, there were several challenges to US export regulation of cryptography. After the source code for Philip Zimmermann's Pretty Good Privacy (PGP) encryption program found its way onto the Internet in June 1991, a complaint by RSA Security (then called RSA Data Security, Inc.) resulted in a lengthy criminal investigation of Zimmermann by the US Customs Service and the FBI, though no charges were ever filed. Daniel J. Bernstein, then a graduate student at UC Berkeley, brought a lawsuit against the US government challenging some aspects of the restrictions based on free speech grounds. The 1995 case Bernstein v. United States ultimately resulted in a 1999 decision that printed source code for cryptographic algorithms and systems was protected as free speech by the United States Constitution.
In 1996, thirty-nine countries signed the Wassenaar Arrangement, an arms control treaty that deals with the export of arms and "dual-use" technologies such as cryptography. The treaty stipulated that the use of cryptography with short key-lengths (56-bit for symmetric encryption, 512-bit for RSA) would no longer be export-controlled. Cryptography exports from the US became less strictly regulated as a consequence of a major relaxation in 2000; there are no longer very many restrictions on key sizes in US-exported mass-market software. Since this relaxation in US export restrictions, and because most personal computers connected to the Internet include US-sourced web browsers such as Firefox or Internet Explorer, almost every Internet user worldwide has potential access to quality cryptography via their browsers (e.g., via Transport Layer Security). The Mozilla Thunderbird and Microsoft Outlook E-mail client programs similarly can transmit and receive emails via TLS, and can send and receive email encrypted with S/MIME. Many Internet users do not realize that their basic application software contains such extensive cryptosystems. These browsers and email programs are so ubiquitous that even governments whose intent is to regulate civilian use of cryptography generally do not find it practical to do much to control distribution or use of cryptography of this quality, so even when such laws are in force, actual enforcement is often effectively impossible.
NSA involvement
Another contentious issue connected to cryptography in the United States is the influence of the National Security Agency on cipher development and policy. The NSA was involved with the design of DES during its development at IBM and its consideration by the National Bureau of Standards as a possible Federal Standard for cryptography. DES was designed to be resistant to differential cryptanalysis, a powerful and general cryptanalytic technique known to the NSA and IBM, that became publicly known only when it was rediscovered in the late 1980s. According to Steven Levy, IBM discovered differential cryptanalysis, but kept the technique secret at the NSA's request. The technique became publicly known only when Biham and Shamir re-discovered and announced it some years later. The entire affair illustrates the difficulty of determining what resources and knowledge an attacker might actually have.
Another instance of the NSA's involvement was the 1993 Clipper chip affair, an encryption microchip intended to be part of the Capstone cryptography-control initiative. Clipper was widely criticized by cryptographers for two reasons. The cipher algorithm (called Skipjack) was then classified (declassified in 1998, long after the Clipper initiative lapsed). The classified cipher caused concerns that the NSA had deliberately made the cipher weak to assist its intelligence efforts. The whole initiative was also criticized based on its violation of Kerckhoffs's Principle, as the scheme included a special escrow key held by the government for use by law enforcement (i.e. wiretapping).
Digital rights management
Cryptography is central to digital rights management (DRM), a group of techniques for technologically controlling use of copyrighted material, being widely implemented and deployed at the behest of some copyright holders. In 1998, U.S. President Bill Clinton signed the Digital Millennium Copyright Act (DMCA), which criminalized all production, dissemination, and use of certain cryptanalytic techniques and technology (now known or later discovered); specifically, those that could be used to circumvent DRM technological schemes. This had a noticeable impact on the cryptography research community since an argument can be made that any cryptanalytic research violated the DMCA. Similar statutes have since been enacted in several countries and regions, including the implementation in the EU Copyright Directive. Similar restrictions are called for by treaties signed by World Intellectual Property Organization member-states.
The United States Department of Justice and FBI have not enforced the DMCA as rigorously as had been feared by some, but the law, nonetheless, remains a controversial one. Niels Ferguson, a well-respected cryptography researcher, has publicly stated that he will not release some of his research into an Intel security design for fear of prosecution under the DMCA. Cryptologist Bruce Schneier has argued that the DMCA encourages vendor lock-in, while inhibiting actual measures toward cyber-security. Both Alan Cox (longtime Linux kernel developer) and Edward Felten (and some of his students at Princeton) have encountered problems related to the Act. Dmitry Sklyarov was arrested during a visit to the US from Russia, and jailed for five months pending trial for alleged violations of the DMCA arising from work he had done in Russia, where the work was legal. In 2007, the cryptographic keys responsible for Blu-ray and HD DVD content scrambling were discovered and released onto the Internet. In both cases, the Motion Picture Association of America sent out numerous DMCA takedown notices, and there was a massive Internet backlash triggered by the perceived impact of such notices on fair use and free speech.
Forced disclosure of encryption keys
In the United Kingdom, the Regulation of Investigatory Powers Act gives UK police the powers to force suspects to decrypt files or hand over passwords that protect encryption keys. Failure to comply is an offense in its own right, punishable on conviction by a two-year jail sentence or up to five years in cases involving national security. Successful prosecutions have occurred under the Act; the first, in 2009, resulted in a term of 13 months' imprisonment. Similar forced disclosure laws in Australia, Finland, France, and India compel individual suspects under investigation to hand over encryption keys or passwords during a criminal investigation.
In the United States, the federal criminal case of United States v. Fricosu addressed whether a search warrant can compel a person to reveal an encryption passphrase or password. The Electronic Frontier Foundation (EFF) argued that this is a violation of the protection from self-incrimination given by the Fifth Amendment. In 2012, the court ruled that under the All Writs Act, the defendant was required to produce an unencrypted hard drive for the court.
In many jurisdictions, the legal status of forced disclosure remains unclear.
The 2016 FBI–Apple encryption dispute concerns the ability of courts in the United States to compel manufacturers' assistance in unlocking cell phones whose contents are cryptographically protected.
As a potential counter-measure to forced disclosure some cryptographic software supports plausible deniability, where the encrypted data is indistinguishable from unused random data (for example such as that of a drive which has been securely wiped).
| Technology | Computing and information technology | null |
18934934 | https://en.wikipedia.org/wiki/Read-only%20memory | Read-only memory | Read-only memory (ROM) is a type of non-volatile memory used in computers and other electronic devices. Data stored in ROM cannot be electronically modified after the manufacture of the memory device. Read-only memory is useful for storing software that is rarely changed during the life of the system, also known as firmware. Software applications, such as video games, for programmable devices can be distributed as plug-in cartridges containing ROM.
Strictly speaking, read-only memory refers to hard-wired memory, such as diode matrix or a mask ROM integrated circuit (IC), that cannot be electronically changed after manufacture. Although discrete circuits can be altered in principle, through the addition of bodge wires and the removal or replacement of components, ICs cannot. Correction of errors, or updates to the software, require new devices to be manufactured and to replace the installed device.
Floating-gate ROM semiconductor memory in the form of erasable programmable read-only memory (EPROM), electrically erasable programmable read-only memory (EEPROM) and flash memory can be erased and re-programmed. But usually, this can only be done at relatively slow speeds, may require special equipment to achieve, and is typically only possible a certain number of times.
The term "ROM" is sometimes used to refer to a ROM device containing specific software or a file with software to be stored in a writable ROM device. For example, users modifying or replacing the Android operating system describe files containing a modified or replacement operating system as "custom ROMs" after the type of storage the file used to be written to, and they may distinguish between ROM (where software and data is stored, usually Flash memory) and RAM.
ROM and RAM are essential components of a computer, each serving distinct roles. RAM, or Random Access Memory, is a temporary, volatile storage medium that loses data when the system powers down. In contrast, ROM, being non-volatile, preserves its data even after the computer is switched off.
History
Discrete-component ROM
IBM used capacitor read-only storage (CROS) and transformer read-only storage (TROS) to store microcode for the smaller System/360 models, the 360/85, and the initial two System/370 models (370/155 and 370/165). On some models there was also a writeable control store (WCS) for additional diagnostics and emulation support. The Apollo Guidance Computer used core rope memory, programmed by threading wires through magnetic cores.
Solid-state ROM
The simplest type of solid-state ROM is as old as the semiconductor technology itself. Combinational logic gates can be joined manually to map an n-bit address input onto arbitrary values of an m-bit data output (a look-up table). With the invention of the integrated circuit came mask ROM. Mask ROM consists of a grid of word lines (the address input) and bit lines (the data output), selectively joined with transistor switches, and can represent an arbitrary look-up table with a regular physical layout and predictable propagation delay. Mask ROM is programmed with photomasks in photolithography during semiconductor manufacturing. The mask defines physical features or structures that will be removed from, or added to, the ROM chips, and the presence or absence of these features represents either a 1 or a 0 bit, depending on the ROM design. Thus by design, any attempt to electronically change the data will fail, since the data is defined by the presence or absence of physical features or structures that cannot be electronically changed. For every software program, even for revisions of the same program, the entire mask must be changed, which can be costly.
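Conceptually, a mask ROM is just a fixed look-up table from an n-bit address to an m-bit word; the toy model below, with invented contents, mimics a 3-bit-address, 4-bit-data ROM in Python:

# Contents are fixed at "manufacture" (i.e., when this tuple is written) and can
# afterwards only be read, never changed; the values are invented examples.
ROM = (0b1010, 0b0001, 0b1111, 0b0110, 0b0000, 0b1001, 0b0101, 0b0011)

def read_rom(address):
    # Address decoder: select one word line, output the corresponding data bits.
    return ROM[address & 0b111]

for addr in range(8):
    print(f"address {addr:03b} -> data {read_rom(addr):04b}")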
In mask ROM, the data is physically encoded in the circuit, so it can only be programmed during fabrication. This leads to a number of serious disadvantages:
It is only economical to buy mask ROM in large quantities, since users must contract with a foundry to produce a custom design for every piece, or revision of software.
The turnaround time between completing the design for a mask ROM and receiving the finished product is long, for the same reason.
Mask ROM is impractical for R&D work since designers frequently need to quickly modify the contents of memory as they refine a design.
If a product is shipped with faulty mask ROM, the only way to fix it is to recall the product and physically replace the ROM in every unit shipped. This has happened in the real world with a faulty carbon monoxide detector.
Subsequent developments have addressed these shortcomings. Programmable read-only memory (PROM), invented by Wen Tsing Chow in 1956, allowed users to program its contents exactly once by physically altering its structure with the application of high-voltage pulses. This addressed problems 1 and 2 above, since a company can simply order a large batch of fresh PROM chips and program them with the desired contents at its designers' convenience.
The advent of the metal–oxide–semiconductor field-effect transistor (MOSFET), invented at Bell Labs in 1959, enabled the practical use of metal–oxide–semiconductor (MOS) transistors as memory cell storage elements in semiconductor memory, a function previously served by magnetic cores in computer memory. In 1967, Dawon Kahng and Simon Sze of Bell Labs proposed that the floating gate of a MOS semiconductor device could be used for the cell of a reprogrammable ROM, which led to Dov Frohman of Intel inventing erasable programmable read-only memory (EPROM) in 1971. The 1971 invention of EPROM essentially solved problem 3, since EPROM (unlike PROM) can be repeatedly reset to its unprogrammed state by exposure to strong ultraviolet light.
Electrically erasable programmable read-only memory (EEPROM), developed by Yasuo Tarui, Yutaka Hayashi and Kiyoko Naga at the Electrotechnical Laboratory in 1972, went a long way to solving problem 4, since an EEPROM can be programmed in-place if the containing device provides a means to receive the program contents from an external source (for example, a personal computer via a serial cable). Flash memory, invented by Fujio Masuoka at Toshiba in the early 1980s and commercialized in the late 1980s, is a form of EEPROM that makes very efficient use of chip area and can be erased and reprogrammed thousands of times without damage. It permits erasure and programming of only a specific part of the device, instead of the entire device. This can be done at high speed, hence the name "flash".
All of these technologies improved the flexibility of ROM, but at a significant cost-per-chip, so that in large quantities mask ROM would remain an economical choice for many years. (Decreasing cost of reprogrammable devices had almost eliminated the market for mask ROM by the year 2000.) Rewriteable technologies were envisioned as replacements for mask ROM.
The most recent development is NAND flash, also invented at Toshiba. Its designers explicitly broke from past practice, stating plainly that "the aim of NAND flash is to replace hard disks," rather than the traditional use of ROM as a form of non-volatile primary storage. NAND flash has largely achieved this goal by offering throughput higher than hard disks, lower latency, higher tolerance of physical shock, extreme miniaturization (in the form of USB flash drives and tiny microSD memory cards, for example), and much lower power consumption.
Use for storing programs
Many stored-program computers use a form of non-volatile storage (that is, storage that retains its data when power is removed) to store the initial program that runs when the computer is powered on or otherwise begins execution (a process known as bootstrapping, often abbreviated to "booting" or "booting up"). Likewise, every non-trivial computer needs some form of mutable memory to record changes in its state as it executes.
Forms of read-only memory were employed as non-volatile storage for programs in most early stored-program computers, such as ENIAC after 1948. (Until then it was not a stored-program computer as every program had to be manually wired into the machine, which could take days to weeks.) Read-only memory was simpler to implement since it needed only a mechanism to read stored values, and not to change them in-place, and thus could be implemented with very crude electromechanical devices (see historical examples below). With the advent of integrated circuits in the 1960s, both ROM and its mutable counterpart static RAM were implemented as arrays of transistors in silicon chips; however, a ROM memory cell could be implemented using fewer transistors than an SRAM memory cell, since the latter needs a latch (comprising 5-20 transistors) to retain its contents, while a ROM cell might consist of the absence (logical 0) or presence (logical 1) of one transistor connecting a bit line to a word line. Consequently, ROM could be implemented at a lower cost-per-bit than RAM for many years.
Most home computers of the 1980s stored a BASIC interpreter or operating system in ROM as other forms of non-volatile storage such as magnetic disk drives were too costly. For example, the Commodore 64 included 64 KB of RAM and 20 KB of ROM containing a BASIC interpreter and the KERNAL operating system. Later home or office computers such as the IBM PC XT often included magnetic disk drives, and larger amounts of RAM, allowing them to load their operating systems from disk into RAM, with only a minimal hardware initialization core and bootloader remaining in ROM (known as the BIOS in IBM-compatible computers). This arrangement allowed for a more complex and easily upgradeable operating system.
In modern PCs, "ROM" is used to store the basic bootstrapping firmware for the processor, as well as the various firmware needed to internally control self-contained devices such as graphic cards, hard disk drives, solid-state drives, optical disc drives, TFT screens, etc., in the system. Today, many of these "read-only" memories – especially the BIOS/UEFI – are often replaced with EEPROM or Flash memory (see below), to permit in-place reprogramming should the need for a firmware upgrade arise. However, simple and mature sub-systems (such as the keyboard or some communication controllers in the integrated circuits on the main board, for example) may employ mask ROM or OTP (one-time programmable).
ROM and successor technologies such as flash are prevalent in embedded systems. These are in everything from industrial robots to home appliances and consumer electronics (MP3 players, set-top boxes, etc.) all of which are designed for specific functions, but are based on general-purpose microprocessors. With software usually tightly coupled to hardware, program changes are rarely needed in such devices (which typically lack hard disks for reasons of cost, size, or power consumption). As of 2008, most products use Flash rather than mask ROM, and many provide some means for connecting to a PC for firmware updates; for example, a digital audio player might be updated to support a new file format. Some hobbyists have taken advantage of this flexibility to reprogram consumer products for new purposes; for example, the iPodLinux and OpenWrt projects have enabled users to run full-featured Linux distributions on their MP3 players and wireless routers, respectively.
ROM is also useful for the binary storage of cryptographic data, as it makes the data difficult to replace, which may be desirable in order to enhance information security.
Use for storing data
Since ROM (at least in hard-wired mask form) cannot be modified, it is only suitable for storing data which is not expected to need modification for the life of the device. To that end, ROM has been used in many computers to store look-up tables for the evaluation of mathematical and logical functions (for example, a floating-point unit might tabulate the sine function in order to facilitate faster computation). This was especially effective when CPUs were slow and ROM was cheap compared to RAM.
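As a rough sketch of such a table (assuming a 256-entry quarter-wave layout and 16-bit fixed-point scaling, both arbitrary choices made here for illustration), the values can be precomputed once and stored, so that a lookup replaces a run-time evaluation of the function:

```python
import math

# Precomputed sine table, analogous to one stored in ROM: 256 entries
# covering a quarter period, scaled to signed 16-bit fixed point.
TABLE_SIZE = 256
SCALE = 32767
SINE_TABLE = tuple(
    round(SCALE * math.sin((math.pi / 2) * i / (TABLE_SIZE - 1)))
    for i in range(TABLE_SIZE)
)

def sine_lookup(index: int) -> int:
    """Return sin(x) for x = (index / (TABLE_SIZE - 1)) * pi/2, in fixed point."""
    return SINE_TABLE[index % TABLE_SIZE]

if __name__ == "__main__":
    print(sine_lookup(0), sine_lookup(TABLE_SIZE - 1))  # ~0 and ~32767
```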
Notably, the display adapters of early personal computers stored tables of bitmapped font characters in ROM. This usually meant that the text display font could not be changed interactively. This was the case for both the CGA and MDA adapters available with the IBM PC XT.
The use of ROM to store such small amounts of data has disappeared almost completely in modern general-purpose computers. However, NAND Flash has taken over a new role as a medium for mass storage or secondary storage of files.
Types
Factory-programmed
Mask ROM is a read-only memory whose contents are programmed by the integrated circuit manufacturer (rather than by the user). The desired memory contents are furnished by the customer to the device manufacturer. The desired data is converted into a custom photomask/mask layer for the final metallization of interconnections on the memory chip (hence the name).
Mask ROM can be made in several ways, all of which aim to change the electrical response of a transistor when it is addressed on a grid, such as:
In a ROM with transistors in a NOR configuration, a photomask defines which areas of the transistor grid are filled with metal, connecting only some of the transistors in the ROM chip to the grid. A connected transistor produces a different electrical response when addressed than a grid location whose transistor is left unconnected, so a connected transistor may represent a 1 and an unconnected one a 0, or vice versa. This is the least expensive and fastest way of making mask ROM, since it needs only one mask carrying data, but it has the lowest density of all mask ROM types because it is done at the metallization layer, whose features are relatively large compared with other parts of the ROM. This is known as contact-programmed ROM. In a ROM with a NAND configuration, this is known as metal-layer programming: the mask defines where the areas surrounding transistors are filled with metal, which short-circuits those transistors; a transistor that is not short-circuited may represent a 0, and one that is may represent a 1, or vice versa.
Using two masks to define two types of ion-implantation regions for transistors, changing their electrical properties when addressed in a grid and defining two types of transistors. The type of transistor determines whether it represents a 1 or a 0 bit: one mask defines where to deposit one type of ion implantation (the "1" transistors), and the other defines where to deposit the other type (the "0" transistors). This is known as voltage-threshold ROM (VTROM) because the different ion-implantation types give the transistors different voltage thresholds, and it is the voltage threshold of a transistor that defines a 0 or a 1. It can be used with NAND and NOR configurations. This technique offers a high level of resistance against optical reading of the contents, since ion-implantation regions are difficult to distinguish optically, which may be attempted by decapping the ROM and examining it under a microscope.
Using two levels of gate-oxide thickness in transistors, with one mask defining where to deposit one thickness of oxide and another mask where to deposit the other. Depending on the thickness, a transistor has different electrical properties and thus represents either a 1 or a 0.
Using several masks to define the presence or absence of the transistors themselves on a grid. Addressing a non-existent transistor may be interpreted as a 0, and addressing a transistor that is present may be interpreted as a 1, or vice versa. This is known as active-layer programming.
Mask ROM transistors can be arranged in either NOR or NAND configurations and can achieve one of the smallest cell sizes possible, since each bit is represented by only one transistor. NAND offers higher storage density than NOR. OR configurations are also possible; compared with NOR they merely connect transistors to Vcc instead of Vss. Mask ROMs were once the least expensive, and are the simplest, semiconductor memory devices, with only one metal layer and one polysilicon layer, making them the type of semiconductor memory with the highest manufacturing yield (the highest number of working devices per manufacturing run). ROM can be made using any of several semiconductor device fabrication technologies, such as CMOS, nMOS, pMOS, and bipolar transistors.
It is common practice to use rewritable non-volatile memory – such as UV-EPROM or EEPROM – for the development phase of a project, and to switch to mask ROM when the code has been finalized. For example, Atmel microcontrollers come in both EEPROM and mask ROM formats.
The main advantage of mask ROM is its cost. Per bit, mask ROM was more compact than any other kind of semiconductor memory. Since the cost of an integrated circuit strongly depends on its size, mask ROM is significantly cheaper than any other kind of semiconductor memory.
However, the one-time masking cost is high and there is a long turn-around time from design to product phase. Design errors are costly: if an error in the data or code is found, the mask ROM is useless and must be replaced in order to change the code or data.
Four companies produce most such mask ROM chips: Samsung Electronics, NEC Corporation, Oki Electric Industry, and Macronix.
Some integrated circuits contain only mask ROM. Other integrated circuits contain mask ROM as well as a variety of other devices. In particular, many microprocessors have mask ROM to store their microcode. Some microcontrollers have mask ROM to store the bootloader or all of their firmware.
Classic mask-programmed ROM chips are integrated circuits that physically encode the data to be stored, and thus it is impossible to change their contents after fabrication.
It is also possible to write the contents of a Laser ROM by using a laser to alter the electrical properties of only some diodes on the ROM, or by using a laser to cut only some polysilicon links, instead of using a mask.
Field-programmable
Programmable read-only memory (PROM), or one-time programmable ROM (OTP), can be written to or programmed via a special device called a PROM programmer. Typically, this device uses high voltages to permanently destroy or create internal links (fuses or antifuses) within the chip. Consequently, a PROM can only be programmed once.
Erasable programmable read-only memory (EPROM) can be erased by exposure to strong ultraviolet light (typically for 10 minutes or longer), then rewritten with a process that again needs higher than usual voltage applied. Repeated exposure to UV light will eventually wear out an EPROM, but the endurance of most EPROM chips exceeds 1000 cycles of erasing and reprogramming. EPROM chip packages can often be identified by the prominent quartz "window" which allows UV light to enter. After programming, the window is typically covered with a label to prevent accidental erasure. Some EPROM chips are factory-erased before they are packaged, and include no window; these are effectively PROM.
Electrically erasable programmable read-only memory (EEPROM) is based on a similar semiconductor structure to EPROM, but allows its entire contents (or selected banks) to be electrically erased, then rewritten electrically, so that they need not be removed from the computer (whether general-purpose or an embedded computer in a camera, MP3 player, etc.). Writing or flashing an EEPROM is much slower (milliseconds per bit) than reading from a ROM or writing to a RAM (nanoseconds in both cases).
Electrically alterable read-only memory (EAROM) is a type of EEPROM that can be modified one or a few bits at a time. Writing is a very slow process and again needs higher voltage (usually around 12 V) than is used for read access. EAROMs are intended for applications that require infrequent and only partial rewriting. EAROM may be used as non-volatile storage for critical system setup information; in many applications, EAROM has been supplanted by CMOS RAM supplied by mains power and backed up with a lithium battery.
Flash memory (or simply flash) is a modern type of EEPROM invented in 1984. Flash memory can be erased and rewritten faster than ordinary EEPROM, and newer designs feature very high endurance (exceeding 1,000,000 cycles). Modern NAND flash makes efficient use of silicon chip area, resulting in individual ICs with capacities as high as 32 GB; this feature, along with its endurance and physical durability, has allowed NAND flash to replace magnetic media in some applications (such as USB flash drives). NOR flash memory is sometimes called flash ROM or flash EEPROM when used as a replacement for older ROM types, but not in applications that take advantage of its ability to be modified quickly and frequently.
By applying write protection, some types of reprogrammable ROMs may temporarily become read-only memory.
Other technologies
There are other types of non-volatile memory which are not based on solid-state IC technology, including:
Optical storage media, such as CD-ROM, which is read-only (analogous to masked ROM). CD-R is Write Once Read Many (analogous to PROM), while CD-RW supports erase-rewrite cycles (analogous to EEPROM); both are designed for backwards compatibility with CD-ROM.
Diode matrix ROM, used in small amounts in many computers in the 1960s as well as electronic desk calculators and keyboard encoders for terminals. This ROM was programmed by installing discrete semiconductor diodes at selected locations between a matrix of word line traces and bit line traces on a printed circuit board.
Resistor or capacitor matrix ROM, used in many computers until the 1970s. Like diode matrix ROM, it was programmed by placing components at selected locations between a matrix of word lines and bit lines. ENIAC's Function Tables were resistor matrix ROM, programmed by manually setting rotary switches. Various models of the IBM System/360 and complex peripheral devices stored their microcode in a capacitor matrix, in variants called BCROS for balanced capacitor read-only storage on the 360/50 and 360/65, or CCROS for card capacitor read-only storage on the 360/30.
Transformer matrix ROM achieves higher-density storage than diode, resistor, or capacitor matrix ROMs by using each matrix element to store multiple bits.
Dimond Ring Translator, named after Bell Labs inventor Thomas L. Dimond, in which wires are threaded through a sequence of large ferrite rings that function as transformers, coupling drive pulses to sense windings. Invented in the early 1940s, the Dimond Ring Translator was used in the #5 Crossbar Switch, and TXE telephone exchanges. Dimond Ring was the basis for most later forms of transformer-coupled or "core rope" memory.
Transformer Read Only Storage (TROS), used on the 360/20, the 360/40, and peripheral control units, is a transformer matrix ROM technology operating in the same way as the Dimond Ring Translator. It is faster and more compact than IBM's CCROS used in the IBM System/360 Model 30, but slower than IBM's BCROS used in the IBM System/360 Model 50 and Model 65.
Core rope memory, also known as wire braid memory, which couples drive lines to sense lines through ferrite cores, used where size, weight, and/or cost were critical. Core rope stores multiple bits of ROM per core (unlike normal read/write core memory) and was programmed by weaving "word line wires" inside or outside of ferrite transformer cores. Two different kinds of core rope memory, distinguished by whether the magnetization of the cores is flipped during operation, are known as the pulse-transformer technique and the switching-core technique.
In the pulse-transformer technique, the drive lines are coupled to the sense lines through ferrite cores, but the core magnetization is never flipped, nor does this method depend on the magnetization hysteresis loop; the cores are used only as transformers. This operates in the same way as the Dimond Ring Translator and was used in DEC's PDP-9 and PDP-16 computers, the Hewlett-Packard 9100A and 9100B calculators, Wang calculators, and many other machines.
The switching-core technique does flip the magnetization of the ferrite cores, which is significantly different from the operation of a Dimond Ring Translator. It was used in NASA/MIT's Apollo spacecraft computers.
Inductively coupled printed circuit board memory, which uses inductive coupling but no ferrite cores, instead coupling drive lines and sense lines on separate planes of a printed circuit board. This operates on the same principle as the Dimond Ring Translator and was used in the Hewlett-Packard 9100A and 9100B calculators for the main control store (in addition to a pulse-transformer core rope memory used for the microinstruction decoder).
Speed
Although the relative speed of RAM vs. ROM has varied over time, large RAM chips can be read faster than most ROMs. For this reason (and to allow uniform access), ROM content is sometimes copied to RAM or shadowed before its first use, and subsequently read from RAM.
Writing
For those types of ROM that can be electrically modified, writing speed has traditionally been much slower than reading speed, and writing may require unusually high voltage, the movement of jumper plugs to apply write-enable signals, or special lock/unlock command codes. Modern NAND flash achieves the highest write speeds of any rewritable ROM technology, with speeds as high as 10 GB/s in an SSD. This has been enabled by increased investment in both consumer and enterprise solid-state drives and in flash memory products for higher-end mobile devices. On a technical level, the gains have been achieved by increasing parallelism both in controller design and in storage, the use of large DRAM read/write caches, and the implementation of memory cells which can store more than one bit (such as MLC, TLC, and QLC cells). The latter approach is more failure-prone, but this has been largely mitigated by overprovisioning (the inclusion of spare capacity in a product which is visible only to the drive controller) and by increasingly sophisticated read/write algorithms in drive firmware.
Endurance and data retention
Because they are written by forcing electrons through a layer of electrical insulation onto a floating transistor gate, rewriteable ROMs can withstand only a limited number of write and erase cycles before the insulation is permanently damaged. In the earliest EPROMs, this might occur after as few as 1,000 write cycles, while in modern Flash EEPROM the endurance may exceed 1,000,000. The limited endurance, as well as the higher cost per bit, means that Flash-based storage is unlikely to completely supplant magnetic disk drives in the near future.
The timespan over which a ROM remains accurately readable is not limited by write cycling. The data retention of EPROM, EAROM, EEPROM, and flash may be time-limited by charge leaking from the floating gates of the memory cell transistors. Early-generation EEPROMs, in the mid-1980s, were generally cited as having 5- or 6-year data retention, whereas a review of EEPROMs offered in 2020 shows manufacturers citing 100-year data retention. Adverse environments reduce the retention time (leakage is accelerated by high temperatures or radiation). Masked ROM and fuse/antifuse PROM do not suffer from this effect, as their data retention depends on the physical rather than electrical permanence of the integrated circuit, although fuse re-growth was once a problem in some systems.
Content images
The contents of ROM chips can be extracted with special hardware devices and relevant controlling software. This practice is common for, as a main example, reading the contents of older video game console cartridges. Another example is making backups of firmware/OS ROMs from older computers or other devices - for archival purposes, as in many cases, the original chips are PROMs and thus at risk of exceeding their usable data lifetime.
The resultant memory dump files are known as ROM images or abbreviated ROMs, and can be used to produce duplicate ROMs - for example to produce new cartridges or as digital files for playing in console emulators. The term ROM image originated when most console games were distributed on cartridges containing ROM chips, but achieved such widespread usage that it is still applied to images of newer games distributed on CD-ROMs or other optical media.
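A minimal sketch of handling such a dump (the file name and expected checksum below are hypothetical) reads the image from disk and checks a CRC-32 against a known-good value before further use:

```python
import zlib
from pathlib import Path
from typing import Optional

def load_rom_image(path: str, expected_crc32: Optional[int] = None) -> bytes:
    """Read a ROM dump from disk and optionally verify its CRC-32."""
    data = Path(path).read_bytes()
    crc = zlib.crc32(data) & 0xFFFFFFFF
    print(f"{path}: {len(data)} bytes, CRC32 = {crc:08X}")
    if expected_crc32 is not None and crc != expected_crc32:
        raise ValueError("checksum mismatch: image may be corrupted")
    return data

if __name__ == "__main__":
    # Hypothetical file and checksum, shown only to illustrate usage.
    image = load_rom_image("game_cartridge.rom", expected_crc32=0xDEADBEEF)
```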
ROM images of commercial games, firmware, etc. usually contain copyrighted software. The unauthorized copying and distribution of copyrighted software is a violation of copyright laws in many jurisdictions, although duplication for backup purposes may be considered fair use depending on location. In any case, there is a thriving community engaged in the distribution and trading of such software for preservation/sharing purposes.
Timeline
| Technology | Data storage | null |
18935488 | https://en.wikipedia.org/wiki/Reverse%20engineering | Reverse engineering | Reverse engineering (also known as backwards engineering or back engineering) is a process or method through which one attempts to understand through deductive reasoning how a previously made device, process, system, or piece of software accomplishes a task with very little (if any) insight into exactly how it does so. Depending on the system under consideration and the technologies employed, the knowledge gained during reverse engineering can help with repurposing obsolete objects, doing security analysis, or learning how something works.
Although the process is specific to the object on which it is being performed, all reverse engineering processes consist of three basic steps: information extraction, modeling, and review. Information extraction is the practice of gathering all relevant information for performing the operation. Modeling is the practice of combining the gathered information into an abstract model, which can be used as a guide for designing the new object or system. Review is the testing of the model to ensure the validity of the chosen abstract. Reverse engineering is applicable in the fields of computer engineering, mechanical engineering, design, electronic engineering, software engineering, chemical engineering, and systems biology.
Overview
There are many reasons for performing reverse engineering in various fields. Reverse engineering has its origins in the analysis of hardware for commercial or military advantage. However, the reverse engineering process may not always be concerned with creating a copy or changing the artifact in some way. It may be used as part of an analysis to deduce design features from products with little or no additional knowledge about the procedures involved in their original production.
In some cases, the goal of the reverse engineering process can simply be a redocumentation of legacy systems. Even when the reverse-engineered product is that of a competitor, the goal may not be to copy it but to perform competitor analysis. Reverse engineering may also be used to create interoperable products and despite some narrowly-tailored United States and European Union legislation, the legality of using specific reverse engineering techniques for that purpose has been hotly contested in courts worldwide for more than two decades.
Software reverse engineering can help to improve the understanding of the underlying source code for the maintenance and improvement of the software; relevant information can be extracted to inform decisions in software development, and graphical representations of the code can provide alternate views of the source code, which can help to detect and fix a software bug or vulnerability. Frequently, as software develops, its design information and improvements are lost over time, but that lost information can usually be recovered with reverse engineering. The process can also help to cut down the time required to understand the source code, thus reducing the overall cost of the software development. Reverse engineering can also help to detect and eliminate malicious code written into the software with better code detectors. Reversing source code can be used to find alternate uses of it, such as detecting the unauthorized replication of the source code where it was not intended to be used, or revealing how a competitor's product was built. That process is commonly used for "cracking" software and media to remove their copy protection, or to create a possibly improved copy or even a knockoff, which is usually the goal of a competitor or a hacker.
Malware developers often use reverse engineering techniques to find vulnerabilities in an operating system to build a computer virus that can exploit the system vulnerabilities. Reverse engineering is also being used in cryptanalysis to find vulnerabilities in substitution cipher, symmetric-key algorithm or public-key cryptography.
There are other uses to reverse engineering:
Games. Reverse engineering in the context of games and game engines is often used to understand underlying mechanics, data structures, and proprietary protocols, allowing developers to create mods and custom tools or to enhance compatibility. This practice is particularly useful when interfacing with existing systems to improve interoperability between different game components, engines, or platforms. Platforms like Reshax provide tools and resources that assist in analyzing game binaries and dissecting game engine behavior, thus contributing to a deeper understanding of game technology and enabling community-driven enhancements.
Interfacing. Reverse engineering can be used when a system is required to interface with another system and the way the two systems negotiate must be established. Such requirements typically exist for interoperability.
Military or commercial espionage. Learning about an enemy's or competitor's latest research by stealing or capturing a prototype and dismantling it may result in the development of a similar product or a better countermeasure against it.
Obsolescence. Integrated circuits are often designed on proprietary systems and built on production lines, which become obsolete in only a few years. When systems using those parts can no longer be maintained since the parts are no longer made, the only way to incorporate the functionality into new technology is to reverse-engineer the existing chip and then to redesign it using newer tools by using the understanding gained as a guide. Another obsolescence originated problem that can be solved by reverse engineering is the need to support (maintenance and supply for continuous operation) existing legacy devices that are no longer supported by their original equipment manufacturer. The problem is particularly critical in military operations.
Product security analysis. This examines how a product works by determining the specifications of its components, estimates costs, and identifies potential patent infringement. Acquiring sensitive data by disassembling and analyzing the design of a system component is also part of product security analysis. Another intent may be to remove copy protection or to circumvent access restrictions.
Competitive technical intelligence. That is to understand what one's competitor is actually doing, rather than what it says that it is doing.
Saving money. Finding out what a piece of electronics can do may spare a user from purchasing a separate product.
Repurposing. Obsolete objects are then reused in a different-but-useful manner.
Design. Production and design companies have applied reverse engineering to practical, craft-based manufacturing processes. Companies can work on "historical" manufacturing collections through 3D scanning, 3D re-modeling, and re-design. In 2013, the Italian manufacturers Baldi and Savio Firmino, together with the University of Florence, optimized their innovation, design, and production processes in this way.
Common uses
Machines
As computer-aided design (CAD) has become more popular, reverse engineering has become a viable method to create a 3D virtual model of an existing physical part for use in 3D CAD, CAM, CAE, or other software. The reverse-engineering process involves measuring an object and then reconstructing it as a 3D model. The physical object can be measured using 3D scanning technologies like CMMs, laser scanners, structured light digitizers, or industrial CT scanning (computed tomography). The measured data alone, usually represented as a point cloud, lacks topological information and design intent. The former may be recovered by converting the point cloud to a triangular-faced mesh. Reverse engineering aims to go beyond producing such a mesh and to recover the design intent in terms of simple analytical surfaces where appropriate (planes, cylinders, etc.) as well as possibly NURBS surfaces to produce a boundary-representation CAD model. Recovery of such a model allows a design to be modified to meet new requirements, a manufacturing plan to be generated, etc.
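As a small example of recovering an analytical surface from measured data (a sketch assuming NumPy is available; the point cloud is synthetic), a plane can be fitted to scanned points by a least-squares method based on the singular value decomposition:

```python
import numpy as np

def fit_plane(points: np.ndarray):
    """Least-squares plane through a point cloud.

    points: (N, 3) array of XYZ samples.
    Returns (centroid, unit normal); the plane is the set of x with
    dot(normal, x - centroid) == 0.
    """
    centroid = points.mean(axis=0)
    centred = points - centroid
    # The plane normal is the singular vector with the smallest singular value.
    _, _, vt = np.linalg.svd(centred, full_matrices=False)
    normal = vt[-1]
    return centroid, normal / np.linalg.norm(normal)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    xy = rng.uniform(-1, 1, size=(500, 2))
    z = 0.2 * xy[:, 0] - 0.5 * xy[:, 1] + 3.0 + rng.normal(0, 0.01, 500)
    cloud = np.column_stack([xy, z])
    c, n = fit_plane(cloud)
    print("centroid:", c, "normal:", n)
```

A real reverse-engineering pipeline would segment the point cloud first and fit cylinders, spheres, or NURBS patches to the remaining regions; the plane fit above only illustrates the general least-squares approach.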
Hybrid modeling is a commonly used term when NURBS and parametric modeling are implemented together. Using a combination of geometric and freeform surfaces can provide a powerful method of 3D modeling. Areas of freeform data can be combined with exact geometric surfaces to create a hybrid model. A typical example of this would be the reverse engineering of a cylinder head, which includes freeform cast features, such as water jackets and high-tolerance machined areas.
Reverse engineering is also used by businesses to bring existing physical geometry into digital product development environments, to make a digital 3D record of their own products, or to assess competitors' products. It is used to analyze how a product works, what it does, what components it has; estimate costs; identify potential patent infringement; etc.
Value engineering, a related activity that is also used by businesses, involves deconstructing and analyzing products. However, the objective is to find opportunities for cost-cutting.
Printed circuit boards
Reverse engineering of printed circuit boards involves recreating fabrication data for a particular circuit board. This is done primarily to identify a design, and learn the functional and structural characteristics of a design. It also allows for the discovery of the design principles behind a product, especially if this design information is not easily available.
Outdated PCBs are often subject to reverse engineering, especially when they perform highly critical functions such as powering machinery or other electronic components. Reverse engineering these old parts allows the reconstruction of a PCB that performs some crucial task, the identification of alternatives that provide the same function, and the upgrading of the old PCB.
Reverse engineering PCBs largely follows the same series of steps. First, images are created by drawing, scanning, or photographing the PCB. Then, these images are imported into suitable reverse engineering software in order to create a rudimentary design for the new PCB. The quality of the images needed for suitable reverse engineering is proportional to the complexity of the PCB itself: more complicated PCBs require well-lit photos on dark backgrounds, while fairly simple PCBs can be recreated with just basic dimensioning. Each layer of the PCB is carefully recreated in the software with the intent of producing a final design as close to the initial one as possible. Finally, the schematics for the circuit are generated using an appropriate tool.
Software
In 1990, the Institute of Electrical and Electronics Engineers (IEEE) defined (software) reverse engineering (SRE) as "the process of analyzing a subject system to identify the system's components and their interrelationships and to create representations of the system in another form or at a higher level of abstraction", in which the "subject system" is the end product of software development. Reverse engineering is a process of examination only, and the software system under consideration is not modified, which would otherwise be re-engineering or restructuring. Reverse engineering can be performed from any stage of the product cycle, not necessarily from the functional end product.
There are two components in reverse engineering: redocumentation and design recovery. Redocumentation is the creation of a new representation of the computer code so that it is easier to understand. Meanwhile, design recovery is the use of deduction or reasoning from general knowledge or personal experience of the product to understand the product's functionality fully. It can also be seen as "going backwards through the development cycle". In this model, the output of the implementation phase (in source code form) is reverse-engineered back to the analysis phase, in an inversion of the traditional waterfall model. Another term for this technique is program comprehension. The Working Conference on Reverse Engineering (WCRE) has been held yearly to explore and expand the techniques of reverse engineering. Computer-aided software engineering (CASE) and automated code generation have contributed greatly to the field of reverse engineering.
Software anti-tamper technology like obfuscation is used to deter both reverse engineering and re-engineering of proprietary software and software-powered systems. In practice, two main types of reverse engineering emerge. In the first case, source code is already available for the software, but higher-level aspects of the program, which are perhaps poorly documented or documented but no longer valid, are discovered. In the second case, there is no source code available for the software, and any efforts towards discovering one possible source code for the software are regarded as reverse engineering. The second usage of the term is more familiar to most people. Reverse engineering of software can make use of the clean room design technique to avoid copyright infringement.
On a related note, black box testing in software engineering has a lot in common with reverse engineering. The tester usually has the API but has the goal of finding bugs and undocumented features by bashing the product from the outside.
Other purposes of reverse engineering include security auditing, removal of copy protection ("cracking"), circumvention of access restrictions often present in consumer electronics, customization of embedded systems (such as engine management systems), in-house repairs or retrofits, enabling of additional features on low-cost "crippled" hardware (such as some graphics card chip-sets), or even mere satisfaction of curiosity.
Binary software
Binary reverse engineering is performed if source code for a software is unavailable. This process is sometimes termed reverse code engineering, or RCE. For example, decompilation of binaries for the Java platform can be accomplished by using Jad. One famous case of reverse engineering was the first non-IBM implementation of the PC BIOS, which launched the historic IBM PC compatible industry that has been the overwhelmingly-dominant computer hardware platform for many years. Reverse engineering of software is protected in the US by the fair use exception in copyright law. The Samba software, which allows systems that do not run Microsoft Windows systems to share files with systems that run it, is a classic example of software reverse engineering since the Samba project had to reverse-engineer unpublished information about how Windows file sharing worked so that non-Windows computers could emulate it. The Wine project does the same thing for the Windows API, and OpenOffice.org is one party doing that for the Microsoft Office file formats. The ReactOS project is even more ambitious in its goals by striving to provide binary (ABI and API) compatibility with the current Windows operating systems of the NT branch, which allows software and drivers written for Windows to run on a clean-room reverse-engineered free software (GPL) counterpart. WindowsSCOPE allows for reverse-engineering the full contents of a Windows system's live memory including a binary-level, graphical reverse engineering of all running processes.
Another classic, if not well-known, example is that in 1987 Bell Laboratories reverse-engineered the Mac OS System 4.1, originally running on the Apple Macintosh SE, so that it could run it on RISC machines of their own.
Binary software techniques
Reverse engineering of software can be accomplished by various methods.
The three main groups of software reverse engineering are
Analysis through observation of information exchange, most prevalent in protocol reverse engineering, which involves using bus analyzers and packet sniffers, such as for accessing a computer bus or computer network connection and revealing the traffic data thereon. Bus or network behavior can then be analyzed to produce a standalone implementation that mimics that behavior. That is especially useful for reverse engineering device drivers. Sometimes, reverse engineering on embedded systems is greatly assisted by tools deliberately introduced by the manufacturer, such as JTAG ports or other debugging means. In Microsoft Windows, low-level debuggers such as SoftICE are popular.
Disassembly using a disassembler, meaning the raw machine language of the program is read and understood in its own terms, only with the aid of machine-language mnemonics (see the sketch after this list). It works on any computer program but can take quite some time, especially for those who are not used to machine code. The Interactive Disassembler is a particularly popular tool.
Decompilation using a decompiler, a process that tries, with varying results, to recreate the source code in some high-level language for a program only available in machine code or bytecode.
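As a concrete illustration of the disassembly approach above (a sketch that assumes the third-party Capstone library is installed; the byte string is just a typical x86-64 function prologue and epilogue), raw machine code can be turned into readable mnemonics programmatically:

```python
from capstone import Cs, CS_ARCH_X86, CS_MODE_64

# Example bytes: push rbp; mov rbp, rsp; mov eax, 0; pop rbp; ret
CODE = b"\x55\x48\x89\xe5\xb8\x00\x00\x00\x00\x5d\xc3"

md = Cs(CS_ARCH_X86, CS_MODE_64)
for insn in md.disasm(CODE, 0x401000):   # 0x401000 is an arbitrary load address
    print(f"0x{insn.address:x}:\t{insn.mnemonic}\t{insn.op_str}")
```

Interactive tools build on the same kind of output by adding cross-references, control-flow graphs, and annotation facilities.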
Software classification
Software classification is the process of identifying similarities between different software binaries (such as two different versions of the same binary) used to detect code relations between software samples. The task was traditionally done manually for several reasons (such as patch analysis for vulnerability detection and copyright infringement), but it can now be done somewhat automatically for large numbers of samples.
This method is used mostly for long and thorough reverse engineering tasks (complete analysis of a complex algorithm or a big piece of software). In general, statistical classification is considered to be a hard problem, which is also true for software classification, so there are few solutions/tools that handle this task well.
Source code
A number of UML tools refer to the process of importing and analysing source code to generate UML diagrams as "reverse engineering". See List of UML tools.
Although UML is one approach in providing "reverse engineering" more recent advances in international standards activities have resulted in the development of the Knowledge Discovery Metamodel (KDM). The standard delivers an ontology for the intermediate (or abstracted) representation of programming language constructs and their interrelationships. An Object Management Group standard (on its way to becoming an ISO standard as well), KDM has started to take hold in industry with the development of tools and analysis environments that can deliver the extraction and analysis of source, binary, and byte code. For source code analysis, KDM's granular standards' architecture enables the extraction of software system flows (data, control, and call maps), architectures, and business layer knowledge (rules, terms, and process). The standard enables the use of a common data format (XMI) enabling the correlation of the various layers of system knowledge for either detailed analysis (such as root cause, impact) or derived analysis (such as business process extraction). Although efforts to represent language constructs can be never-ending because of the number of languages, the continuous evolution of software languages, and the development of new languages, the standard does allow for the use of extensions to support the broad language set as well as evolution. KDM is compatible with UML, BPMN, RDF, and other standards enabling migration into other environments and thus leverage system knowledge for efforts such as software system transformation and enterprise business layer analysis.
Protocols
Protocols are sets of rules that describe message formats and how messages are exchanged: the protocol state machine. Accordingly, the problem of protocol reverse-engineering can be partitioned into two subproblems: message format and state-machine reverse-engineering.
The message formats have traditionally been reverse-engineered through a tedious manual process, which involved analysis of how protocol implementations process messages, but recent research has proposed a number of automatic solutions. Typically, the automatic approaches either group observed messages into clusters by using various clustering analyses, or they emulate the protocol implementation while tracing the message processing.
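A minimal sketch of the clustering idea (the captured messages and the similarity threshold below are invented for illustration) groups raw messages by byte-level similarity, so that messages sharing a format tend to fall into the same cluster:

```python
from difflib import SequenceMatcher

def similarity(a: bytes, b: bytes) -> float:
    """Ratio in [0, 1] indicating how similar two raw messages are."""
    return SequenceMatcher(None, a, b).ratio()

def cluster_messages(messages, threshold=0.5):
    """Greedy single-pass clustering: join a message to the first
    cluster whose representative it resembles closely enough."""
    clusters = []                      # list of lists of messages
    for msg in messages:
        for cluster in clusters:
            if similarity(msg, cluster[0]) >= threshold:
                cluster.append(msg)
                break
        else:
            clusters.append([msg])
    return clusters

if __name__ == "__main__":
    captured = [b"GET /index\x00", b"GET /logo.png\x00",
                b"LOGIN alice\x00", b"LOGIN bob\x00"]
    for i, c in enumerate(cluster_messages(captured)):
        print(f"cluster {i}: {c}")
```

Published approaches use far more robust distance measures and clustering algorithms, but the principle of grouping by message similarity is the same.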
There has been less work on reverse-engineering of state-machines of protocols. In general, the protocol state-machines can be learned either through a process of offline learning, which passively observes communication and attempts to build the most general state-machine accepting all observed sequences of messages, or through online learning, which allows interactive generation of probing sequences of messages and listening to responses to those probing sequences. In general, offline learning of small state-machines is known to be NP-complete, while online learning can be done in polynomial time. An automatic offline approach has been demonstrated by Comparetti et al. and an online approach by Cho et al.
Other components of typical protocols, like encryption and hash functions, can be reverse-engineered automatically as well. Typically, the automatic approaches trace the execution of protocol implementations and try to detect buffers in memory holding unencrypted packets.
Integrated circuits/smart cards
Reverse engineering is an invasive and destructive form of analyzing a smart card. The attacker uses chemicals to etch away layer after layer of the smart card and takes pictures with a scanning electron microscope (SEM). That technique can reveal the complete hardware and software part of the smart card. The major problem for the attacker is to bring everything into the right order to find out how everything works. The makers of the card try to hide keys and operations by mixing up memory positions, such as by bus scrambling.
In some cases, it is even possible to attach a probe to measure voltages while the smart card is still operational. The makers of the card employ sensors to detect and prevent that attack. That attack is not very common because it requires both a large investment in effort and special equipment that is generally available only to large chip manufacturers. Furthermore, the payoff from this attack is low since other security techniques are often used such as shadow accounts. It is still uncertain whether attacks against chip-and-PIN cards to replicate encryption data and then to crack PINs would provide a cost-effective attack on multifactor authentication.
Full reverse engineering proceeds in several major steps.
The first step after images have been taken with a SEM is stitching the images together, which is necessary because each layer cannot be captured by a single shot. A SEM needs to sweep across the area of the circuit and take several hundred images to cover the entire layer. Image stitching takes as input several hundred pictures and outputs a single properly-overlapped picture of the complete layer.
Next, the stitched layers need to be aligned because the sample, after etching, cannot be put into exactly the same position relative to the SEM each time. Therefore, the stitched versions will not overlap in the correct fashion, as on the real circuit. Usually, three corresponding points are selected, and a transformation is applied on the basis of them (see the sketch after these steps).
To extract the circuit structure, the aligned, stitched images need to be segmented, which highlights the important circuitry and separates it from the uninteresting background and insulating materials.
Finally, the wires can be traced from one layer to the next, and the netlist of the circuit, which contains all of the circuit's information, can be reconstructed.
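As a sketch of the alignment step referred to above (assuming NumPy; the point coordinates are placeholders), the affine transformation that maps three chosen reference points in one layer image onto their counterparts in the next can be computed by solving a small linear system:

```python
import numpy as np

def affine_from_points(src, dst):
    """Solve for the 2x3 affine matrix A such that A @ [x, y, 1] maps each
    source point to its corresponding destination point.
    src, dst: (3, 2) arrays of corresponding (x, y) coordinates."""
    src = np.asarray(src, dtype=float)
    dst = np.asarray(dst, dtype=float)
    M = np.hstack([src, np.ones((3, 1))])   # (3, 3): rows [x, y, 1]
    # Solve M @ A.T = dst for the transposed affine matrix.
    a_t = np.linalg.solve(M, dst)
    return a_t.T                              # (2, 3) affine matrix

if __name__ == "__main__":
    # Placeholder correspondences picked on two stitched layer images.
    src = [(10, 12), (400, 15), (205, 390)]
    dst = [(13, 9), (402, 14), (210, 386)]
    A = affine_from_points(src, dst)
    p = np.array([10, 12, 1.0])
    print("mapped:", A @ p)                  # should be close to (13, 9)
```

The resulting matrix can then be applied to every pixel of one stitched layer to register it against the next.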
Military applications
Reverse engineering is often used to copy other nations' technologies, devices, or information that have been obtained by regular troops in the field or by intelligence operations. It was often used during the Second World War and the Cold War. Here are well-known examples from the Second World War and later:
Jerry can: British and American forces in WW2 noticed that the Germans had gasoline cans with an excellent design. They reverse-engineered copies of those cans, which became popularly known as "Jerry cans".
Nakajima G5N: In 1939, the U.S. Douglas Aircraft Company sold its DC-4E airliner prototype to Imperial Japanese Airways, which was secretly acting as a front for the Imperial Japanese Navy, which wanted a long-range strategic bomber but had been hindered by the Japanese aircraft industry's inexperience with heavy long-range aircraft. The DC-4E was transferred to the Nakajima Aircraft Company and dismantled for study; as a cover story, the Japanese press reported that it had crashed in Tokyo Bay. The wings, engines, and landing gear of the G5N were copied directly from the DC-4E.
Panzerschreck: The Germans captured an American bazooka during the Second World War and reverse engineered it to create the larger Panzerschreck.
Tupolev Tu-4: In 1944, three American B-29 bombers on missions over Japan were forced to land in the Soviet Union. The Soviets, who did not have a similar strategic bomber, decided to copy the B-29. Within three years, they had developed the Tu-4, a nearly-perfect copy.
SCR-584 radar: copied by the Soviet Union after the Second World War, it is known for a few modifications - СЦР-584, Бинокль-Д.
V-2 rocket: Technical documents for the V-2 and related technologies were captured by the Western Allies at the end of the war. The Americans focused their reverse engineering efforts via Operation Paperclip, which led to the development of the PGM-11 Redstone rocket. The Soviets used captured German engineers to reproduce technical documents and plans and worked from captured hardware to make their clone of the rocket, the R-1. Thus began the postwar Soviet rocket program, which led to the R-7 and the beginning of the space race.
K-13/R-3S missile (NATO reporting name AA-2 Atoll), a Soviet reverse-engineered copy of the AIM-9 Sidewinder, was made possible after a Taiwanese (ROCAF) AIM-9B hit a Chinese PLA MiG-17 without exploding in September 1958. The missile became lodged within the airframe, and the pilot returned to base with what Soviet scientists would describe as a university course in missile development.
Toophan missile: In May 1975, negotiations between Iran and Hughes Missile Systems on co-production of the BGM-71 TOW and Maverick missiles stalled over disagreements in the pricing structure, the subsequent 1979 revolution ending all plans for such co-production. Iran was later successful in reverse-engineering the missile and now produces its own copy, the Toophan.
China has reverse engineered many examples of Western and Russian hardware, from fighter aircraft to missiles and HMMWV cars, such as the MiG-15, MiG-17, MiG-19, and MiG-21 (which became the J-2, J-5, J-6, and J-7) and the Su-33 (which became the J-15).
During the Second World War, Polish and British cryptographers studied captured German "Enigma" message encryption machines for weaknesses. Their operation was then simulated on electromechanical devices, "bombes", which tried all the possible scrambler settings of the "Enigma" machines that helped the breaking of coded messages that had been sent by the Germans.
Also during the Second World War, British scientists analyzed and defeated a series of increasingly-sophisticated radio navigation systems used by the Luftwaffe to perform guided bombing missions at night. The British countermeasures to the system were so effective that in some cases, German aircraft were led by signals to land at RAF bases since they believed that they had returned to German territory.
Gene networks
Reverse engineering concepts have been applied to biology as well, specifically to the task of understanding the structure and function of gene regulatory networks. They regulate almost every aspect of biological behavior and allow cells to carry out physiological processes and responses to perturbations. Understanding the structure and the dynamic behavior of gene networks is therefore one of the paramount challenges of systems biology, with immediate practical repercussions in several applications that are beyond basic research.
There are several methods for reverse engineering gene regulatory networks by using molecular biology and data science methods. They have been generally divided into six classes:
Coexpression methods are based on the notion that if two genes exhibit a similar expression profile, they may be related, although no causation can be simply inferred from coexpression (see the sketch after this list).
Sequence motif methods analyze gene promoters to find specific transcription factor binding domains. If a transcription factor is predicted to bind a promoter of a specific gene, a regulatory connection can be hypothesized.
Chromatin ImmunoPrecipitation (ChIP) methods investigate the genome-wide profile of DNA binding of chosen transcription factors to infer their downstream gene networks.
Orthology methods transfer gene network knowledge from one species to another.
Literature methods implement text mining and manual research to identify putative or experimentally-proven gene network connections.
Transcriptional complexes methods leverage information on protein-protein interactions between transcription factors, thus extending the concept of gene networks to include transcriptional regulatory complexes.
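As a minimal sketch of the coexpression approach listed above (the expression matrix is synthetic and the correlation threshold is arbitrary; real pipelines use many more samples and correct for multiple testing), pairwise Pearson correlations between expression profiles can be thresholded to propose candidate edges:

```python
import numpy as np

def coexpression_edges(expr: np.ndarray, genes, threshold: float = 0.9):
    """Propose gene-gene edges from an expression matrix.

    expr: (n_genes, n_samples) array of expression levels.
    Returns pairs of gene names whose profiles have |Pearson r| >= threshold.
    """
    r = np.corrcoef(expr)              # (n_genes, n_genes) correlation matrix
    edges = []
    for i in range(len(genes)):
        for j in range(i + 1, len(genes)):
            if abs(r[i, j]) >= threshold:
                edges.append((genes[i], genes[j], float(r[i, j])))
    return edges

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    base = rng.normal(size=20)
    expr = np.vstack([
        base,                                   # gene A
        base * 1.1 + rng.normal(0, 0.05, 20),   # gene B, tracks gene A
        rng.normal(size=20),                    # gene C, unrelated
    ])
    print(coexpression_edges(expr, ["A", "B", "C"]))
```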
Often, gene network reliability is tested by genetic perturbation experiments followed by dynamic modelling, based on the principle that removing one network node has predictable effects on the functioning of the remaining nodes of the network.
Applications of the reverse engineering of gene networks range from understanding mechanisms of plant physiology to the highlighting of new targets for anticancer therapy.
Overlap with patent law
Reverse engineering applies primarily to gaining understanding of a process or artifact in which the manner of its construction, use, or internal processes has not been made clear by its creator.
Patented items do not of themselves have to be reverse-engineered to be studied, for the essence of a patent is that inventors provide a detailed public disclosure themselves, and in return receive legal protection of the invention that is involved. However, an item produced under one or more patents could also include other technology that is not patented and not disclosed. Indeed, one common motivation of reverse engineering is to determine whether a competitor's product contains patent infringement or copyright infringement.
Legality
United States
In the United States, even if an artifact or process is protected by trade secrets, reverse-engineering the artifact or process is often lawful if it has been legitimately obtained.
Reverse engineering of computer software often falls under contract law as a breach of contract, as well as under any other relevant laws. That is because most end-user license agreements specifically prohibit it, and US courts have ruled that if such terms are present, they override the copyright law that expressly permits it (see Bowers v. Baystate Technologies). According to Section 103(f) of the Digital Millennium Copyright Act (17 U.S.C. § 1201 (f)), a person in legal possession of a program may reverse-engineer and circumvent its protection if that is necessary to achieve "interoperability", a term that broadly covers other devices and programs that can interact with it, make use of it, and use and transfer data to and from it in useful ways. A limited exemption exists that allows the knowledge thus gained to be shared and used for interoperability purposes.
European Union
EU Directive 2009/24 on the legal protection of computer programs, which superseded an earlier (1991) directive, governs reverse engineering in the European Union.
| Technology | Basics | null |
18935713 | https://en.wikipedia.org/wiki/Photocopier | Photocopier | A photocopier (also called copier or copy machine, and formerly Xerox machine, the generic trademark) is a machine that makes copies of documents and other visual images onto paper or plastic film quickly and cheaply. Most modern photocopiers use a technology called xerography, a dry process that uses electrostatic charges on a light-sensitive photoreceptor to first attract and then transfer toner particles (a powder) onto paper in the form of an image. The toner is then fused onto the paper using heat, pressure, or a combination of both. Copiers can also use other technologies, such as inkjet, but xerography is standard for office copying.
Commercial xerographic office photocopying gradually replaced copies made by verifax, photostat, carbon paper, mimeograph machines, and other duplicating machines.
Photocopying is widely used in the business, education, and government sectors. While there have been predictions that photocopiers will eventually become obsolete as information workers increase their use of digital document creation, storage, and distribution and rely less on distributing actual pieces of paper, as of 2015, photocopiers continue to be widely used. During the 1980s, a convergence began in some high-end machines towards what came to be called a multi-function printer: a device that combined the roles of a photocopier, a fax machine, a scanner, and a computer network-connected printer. Low-end machines that can copy and print in color have increasingly dominated the home-office market as their prices fell steadily during the 1990s. High-end color photocopiers capable of heavy-duty handling cycles and large-format printing remain a costly option found primarily in print and design shops.
History
Chester Carlson (1906-1968), the inventor of photocopying, was originally a patent attorney, as well as a part-time researcher and inventor. His job at the patent office in New York required him to make a large number of copies of important papers. Carlson, who was arthritic, found this a painful and tedious process. This motivated him to conduct experiments with photoconductivity. Carlson used his kitchen for his "electrophotography" experiments, and, in 1938, he applied for a patent for the process. He made the first photocopy using a zinc plate covered with sulfur. The words "10-22-38 Astoria" were written on a microscope slide, which was placed on top of more sulfur and under a bright light. After the slide was removed, a mirror image of the words remained. Carlson tried to sell his invention to some companies but failed because the process was still underdeveloped. At the time, multiple copies were most commonly made at the point of document origination, using carbon paper or manual duplicating machines. People did not see the need for an electronic copier. Between 1939 and 1944, Carlson was turned down by over 20 companies, including IBM and General Electric—neither of which believed there was a significant market for copiers.
In 1944, the Battelle Memorial Institute, a non-profit organization in Columbus, Ohio, contracted with Carlson to refine his new process. Over the next five years, the institute conducted experiments to improve the process of electrophotography. In 1947, Haloid Corporation, a manufacturer of photographic paper, approached Battelle to obtain a license to develop and market a copying machine based on this technology.
Haloid felt that the word "electrophotography" was too complicated and did not have good recall value. After consulting a professor of classical language at Ohio State University, Haloid and Carlson changed the name of the process to xerography, a term, coined from Greek roots, that meant "dry writing." Haloid called the new copier machines "Xerox Machines" and, in 1948, the term Xerox was trademarked. Haloid eventually became Xerox Corporation in 1961.
In 1949, Xerox Corporation introduced the first xerographic copier, called the Model A. Seeing off computing-leader IBM in the office-copying market, Xerox became so successful that, in North America, photocopying came to be popularly known as "xeroxing". Xerox has actively fought to prevent Xerox from becoming a genericized trademark. While the word Xerox has appeared in some dictionaries as a synonym for photocopying, Xerox Corporation typically requests such entries be modified, and discourages use of the term Xerox in this way.
In the early 1950s, Radio Corporation of America (RCA) introduced a variation on the process called Electrofax, whereby images are formed directly on specially coated paper and rendered with a toner dispersed in a liquid.
During the 1960s and through the 1980s, Savin Corporation developed and sold a line of liquid-toner copiers that implemented a technology based on patents held by the company.
Before the widespread adoption of xerographic copiers, photo-direct copies produced by machines such as Kodak's Verifax (based on a 1947 patent) were used. A primary obstacle associated with the pre-xerographic copying technologies was the high cost of supplies: a Verifax print required supplies costing US$0.15 in 1969, while a Xerox print could be made for $0.03, including paper and labor. The coin-operated Photostat machines still found in some public libraries in the late 1960s made letter-size copies for $0.25 each, when the minimum wage for a US worker was $1.65 per hour; the Xerox machines that replaced them typically charged $0.10.
Xerographic-copier manufacturers took advantage of the high perceived value copying had in the 1960s and early 1970s and marketed "specially designed" paper for xerographic output. By the end of the 1970s, paper producers made xerographic "runability" one of the requirements for most of their office-paper brands.
Some devices sold as photocopiers have replaced the drum-based process with inkjet or transfer-film technology.
Among the key advantages of photocopiers over earlier copying technologies is the ability:
to use plain (untreated) office paper
to implement duplex (two-sided) printing
to scan several pages automatically with an automatic document feeder (ADF)
eventually, to sort and/or staple output
In 1970, Paul Orfalea founded Kinko's retail chain, in Isla Vista, California. Starting with a single copier that year, this copy service chain would expand to over 1,000 locations around the world. By the 1980s, Kinko's operated 24 hours a day, 7 days a week, with customers using the copy center for academic and business work as well as personal publishing and advertising. By the 1990s, Kinko's had 700 locations around the United States, with 5 in Manhattan. In such urban areas, Kinko's became a place where a multitude of users could make their ideas "typed, designed and xeroxed, then transmitted by fax, computer disk and Federal Express." Kate Eichhorn, in Adjusted Margin: Xerography, Art, and Activism in the Late Twentieth Century, notes that during this period (1970s through 1990s) the copy machine played "an especially notable role in the era's punk, street art, and DIY movements." FedEx purchased the Kinko's chain in 2004, and its services were incorporated into the name FedEx Office in 2008.
Color photocopiers
Colored toner became available in the 1940s, although full-color copiers were not commercially available until 1968, when 3M released the Color-in-Color copier, which used a dye sublimation process rather than conventional electrostatic technology. Xerox introduced the first electrostatic color copier (the 6500) in 1973. Color photocopying is a concern to governments because it facilitates the counterfeiting of currency and other documents (see the Counterfeiting section below).
Digital technology
There is an increasing trend for new photocopiers to implement digital technology, thereby replacing the older analog technology. With digital copying, the copier effectively consists of an integrated scanner and laser printer. This design has several advantages, such as automatic image-quality enhancement and the ability to "build jobs" (that is, to scan page images independently of printing them). Some digital copiers can function as high-speed scanners; such models typically offer the ability to send documents via email or make them available on file servers.
A significant advantage of digital copier technology is "automatic digital collation". For example, when copying a set of 20 pages 20 times, a digital copier scans each page only once, then uses the stored information to produce 20 sets. In an analog copier, either each page is scanned 20 times (a total of 400 scans), making one set at a time, or 20 separate output trays are used for the 20 sets.
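The arithmetic behind this example can be made explicit with a short sketch (Python is used here purely for illustration; the figures are those from the example above).

```python
# Scans needed to produce `sets` collated copies of a `pages`-page original.
pages, sets = 20, 20

analog_scans = pages * sets   # analog copier: the original is rescanned for every set
digital_scans = pages         # digital copier: scan once, then print all sets from memory

print(analog_scans, digital_scans)   # 400 versus 20
```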
Low-end copiers also use digital technology, but tend to consist of a standard PC scanner coupled to an inkjet or low-end laser printer, which are far slower than their counterparts in high-end copiers. However, low-end scanner-inkjets can provide color copying at a lower upfront purchase-price but a much higher cost per copy. Combined digital scanner/printers sometimes have built-in fax machines and can be classified as one type of multifunction printer.
How it works (using xerography)
Charging: A cylindrical drum is electrostatically charged (negatively, in this description) by a high-voltage wire called a corona wire or by a charge roller. The drum has a coating of a photoconductive material. A photoconductor is a semiconductor that becomes conductive when exposed to light.
Exposure: A bright lamp illuminates the original document, and the white areas of the original document reflect the light onto the surface of the photoconductive drum. The drum areas that are exposed to light become conductive and therefore discharge to the ground. The drum area not exposed to light (those areas that correspond to black portions of the original document) remains negatively charged.
Developing: The toner is positively charged. When it is applied to the drum to develop the image, it is attracted and sticks to the negatively charged areas (black areas), just as paper sticks to a balloon with a static charge.
Transfer: The resulting toner image on the surface of the drum is transferred from the drum onto a piece of paper that has an even greater negative charge than the drum has.
Fusing: The toner is melted and bonded to the paper by heat and pressure rollers.
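A minimal, purely schematic sketch of these steps is given below in Python. It treats the original as a grid of dark and light cells and tracks only whether each drum region keeps its charge; it illustrates the logic of the process, not the physics.

```python
# Schematic illustration of the xerographic steps (not a physical simulation).
# The original is a grid of cells: 1 = dark (printed) area, 0 = white area.
document = [
    [1, 0, 0, 1],
    [0, 1, 1, 0],
]

# 1. Charging: the photoconductive drum surface starts uniformly charged.
drum_charge = [[-1 for _ in row] for row in document]

# 2. Exposure: white areas of the original reflect light onto the drum and
#    discharge those regions; dark areas leave the charge in place.
for i, row in enumerate(document):
    for j, dark in enumerate(row):
        if not dark:
            drum_charge[i][j] = 0

# 3. Developing: oppositely charged toner sticks only to still-charged areas.
toner = [[charge != 0 for charge in row] for row in drum_charge]

# 4. Transfer and 5. Fusing: the toner pattern is pulled onto the paper and fixed.
copy = [[1 if has_toner else 0 for has_toner in row] for row in toner]

assert copy == document   # the copy reproduces the original image
```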
A negative photocopy inverts the document's colors when creating a photocopy, resulting in letters that appear white on a black background instead of black on a white background. Negative photocopies of old or faded documents sometimes produce documents that have better focus and are easier to read and study.
Copyright issues
Photocopying material that is subject to copyright (such as books or scientific papers) is subject to restrictions in most countries. This is common practice, as the cost of purchasing a book for the sake of one article or a few pages can be excessive. The principle of fair use (in the United States) or fair dealing (in other Berne Convention countries) allows copying for certain specified purposes.
In certain countries, such as Canada, some universities pay royalties to copyright collectives out of the revenues from photocopying at university copy machines and copy centers, and these collectives distribute the resulting funds to various scholarly publishers. In the United States, photocopied compilations of articles, handouts, graphics, and other information, called readers, are often required texts for college classes. Either the instructor or the copy center is responsible for clearing copyright for every article in the reader, and attribution information must be clearly included in the reader.
Counterfeiting
To counter the risk of people using color copiers to create counterfeit copies of paper currency, some countries have incorporated anti-counterfeiting technologies into their currency. These include watermarks, microprinting, holograms, tiny security strips made of plastic (or other material), and ink that appears to change color as the currency is viewed at an angle. Some photocopying machines contain special software that can prevent copying currency that has a special pattern.
Color copying also raises concerns regarding the copying and/or forging of other documents, such as driver's licenses and university degrees and transcripts. Some driver's licenses are made with embedded holograms so that a police officer can detect a fake copy. Some university and college transcripts have special anti-copying watermarks in the background. If a copy is made, the watermarks will become highly visible, which allows the recipient to determine that they have a copy rather than a genuine original transcript.
Health issues
Exposure to ultraviolet light is a concern. In the early days of photocopiers, the sensitizing light source was filtered green to match the optimal sensitivity of the photoconductive surface, which conveniently removed all ultraviolet. Currently, a variety of light sources are used. Because glass transmits ultraviolet rays between 325 and 400 nanometers, copiers with ultraviolet-producing light sources (such as fluorescent, tungsten-halogen, or xenon flash lamps) expose documents to some ultraviolet light.
Concerns about emissions from photocopy machines have been expressed by some in connection with the use of selenium and emissions of ozone and fumes from heated toner.
Forensic identification
Similar to forensic identification of typewriters, computer printers and copiers can be traced by imperfections in their output. The mechanical tolerances of the toner and paper feed mechanisms cause banding, which can reveal information about the individual device's mechanical properties. It is often possible to identify the manufacturer and brand, and, in some cases, the individual printer can be identified from a set of known printers by comparing their outputs.
Some high-quality color printers and copiers steganographically embed their identification code into the printed pages, as fine and almost invisible patterns of yellow dots. Some sources identify Xerox and Canon as companies doing this. The Electronic Frontier Foundation (EFF) has investigated this issue and documented how the Xerox DocuColor printer's serial number, as well as the date and time of the printout, are encoded in a repeating 8×15 dot pattern in the yellow channel. EFF is working to reverse engineer additional printers. The EFF also reports that the US government has asked these companies to implement such a tracking scheme, so that counterfeiting can be traced. The EFF has filed a Freedom of Information Act request in order to look into privacy implications of this tracking.
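As a purely hypothetical illustration of how a serial number and timestamp could be packed into a small grid of dots, the following Python sketch lays invented data out on an 8×15 bit matrix. The bit layout is made up for demonstration and is not the actual DocuColor encoding documented by the EFF, which differs in layout and includes parity information.

```python
# Hypothetical tracking-dot scheme: an invented serial number and timestamp
# are packed into bits and laid out on an 8x15 grid of "yellow dots".
from datetime import datetime

ROWS, COLS = 8, 15

def encode(serial: int, when: datetime):
    payload = ((serial & 0xFFFFFF) << 26 | (when.month << 22) | (when.day << 17)
               | (when.hour << 12) | (when.minute << 6) | (when.year % 64))
    bits = [(payload >> i) & 1 for i in range(ROWS * COLS)]
    return [bits[r * COLS:(r + 1) * COLS] for r in range(ROWS)]

grid = encode(serial=1234567, when=datetime(2005, 5, 21, 12, 50))
for row in grid:
    print("".join("*" if bit else "." for bit in row))
```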
Wet photocopying
Photocopying using a liquid developer was developed by Ken Metcalfe and Bob Wright of the Defence Standards Laboratory in Adelaide in 1952, and the process was in use in 1967.
Images from 'wet photocopying' do not last as long as dry-toner images, although this is not due to acidity.
| Technology | Media and communication | null |
5041589 | https://en.wikipedia.org/wiki/Autocollimator | Autocollimator | An autocollimator is an optical instrument for non-contact measurement of angles. They are typically used to align components and measure deflections in optical or mechanical systems. An autocollimator works by projecting an image onto a target mirror and measuring the deflection of the returned image against a scale, either visually or by means of an electronic detector. A visual autocollimator can measure angles as small as 1 arcsecond (4.85 microradians), while an electronic autocollimator can have up to 100 times more resolution.
Visual autocollimators are often used for aligning laser rod ends and checking the face parallelism of optical windows and wedges. Electronic and digital autocollimators are used as angle measurement standards, for monitoring angular movement over long periods of time and for checking angular position repeatability in mechanical systems. Servo autocollimators are specialized compact forms of electronic autocollimators that are used in high-speed servo-feedback loops for stable-platform applications. An electronic autocollimator is typically calibrated to read the actual mirror angle.
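The measurement principle can be summarized by the standard small-angle autocollimator relation: a mirror tilt of θ deviates the returned beam by 2θ, so the image on the detector shifts by approximately d = 2fθ for an objective of focal length f. The short Python sketch below applies this relation; the focal length and image-shift values are illustrative assumptions, not specifications of any particular instrument.

```python
import math

# Small-angle autocollimator relation: a mirror tilt theta deflects the
# returned beam by 2*theta, so the image shifts by about d = 2 * f * theta.
def mirror_tilt_arcsec(image_shift_m: float, focal_length_m: float) -> float:
    theta_rad = image_shift_m / (2.0 * focal_length_m)
    return math.degrees(theta_rad) * 3600.0   # convert radians to arcseconds

# Illustrative values: a 300 mm objective and a 2.9 micrometre image shift
# correspond to roughly one arcsecond of mirror tilt.
print(round(mirror_tilt_arcsec(2.9e-6, 0.300), 2))
```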
Electronic autocollimator
The electronic autocollimator is a high precision angle measurement instrument capable of measuring angular deviations with accuracy down to fractions of an arcsecond, by electronic means only, with no optical eye-piece.
Measuring with an electronic autocollimator is fast, accurate, and frequently the most cost-effective procedure. Used extensively in workshops, tool rooms, inspection departments and quality-control laboratories worldwide, these highly sensitive instruments measure extremely small angular displacements, squareness, twist and parallelism.
Laser analyzing autocollimator
More recent developments extend the autocollimator to allow direct measurement of incoming laser beams. This capability enables inter-alignment between optics, mirrors and lasers.
Combining the century-old autocollimation principle with recent laser technology yields a versatile instrument capable of measuring the inter-alignment of multiple lines of sight, the alignment of a laser with respect to a mechanical datum, laser cavity alignment, the parallelism of multiple rollers in roll-to-roll machinery, and laser divergence angle and its spatial stability, among many other inter-alignment applications.
Total station autocollimator
The concept of autocollimation as an optical instrument was conceived about a century ago for non-contact measurement of angles. Hybrid instruments address the need that novel photonics applications have created for the alignment and measurement of optics and lasers. Motorized focusing adds a further measurement dimension: the instrument can focus on the area to be examined and measure alignment, and deviations from alignment, on the scale of microns. This is relevant both during adjustment and in the final testing and examination of integrated systems. Recent progress, aimed at serving the photonics AR/VR industry, includes developments in inter-alignment, the fusion of several wavelengths (including NIR) into one system, and measurement of multi-laser arrays such as VCSELs with respect to other optical sensors, improving angular optical measurements to a resolution of 0.01 arcseconds.
Typical applications
An electronic autocollimator can be used to measure the straightness of machine components (such as guide ways) or the straightness of the lines of motion of machine components. Flatness of granite surface plates, for example, can be measured by measuring the straightness of multiple lines along the flat surface and then summing the deviations in line angle over the surface (a worked sketch of this procedure follows the application lists below). Recent advances also allow the angular orientation of wafers to be measured, without obstructing lines of sight to the wafer's surface itself; this is applicable in wafer measuring machines and wafer processing machines. Other applications include:
Aircraft assembly jigs
Satellite testing
Steam and gas turbines
Marine propulsion machinery
Printing presses
Air compressors
Cranes
Diesel engines
Nuclear reactors
Coal conveyors
Shipbuilding and repair
Rolling mills
Rod and wire mills
Extruder barrels
Optical measurement applications:
Retroreflector measurement
Roof prism measurement
Optical assembly procedures
Alignment of beam delivery systems
Alignment of laser cavity
Testing perpendicularity of laser rods in respect to its axis
Real time measurement of angular stability of mirror elements.
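As noted in the straightness and flatness discussion above, successive autocollimator readings taken as a reflector is stepped along a surface can be integrated into a height profile. The Python sketch below is a minimal illustration of that procedure; the step size and angle readings are invented for demonstration.

```python
import math

ARCSEC_TO_RAD = math.pi / (180.0 * 3600.0)

# Convert successive autocollimator angle readings into a straightness
# (height) profile: each reading is the local slope over one step, so the
# height change per step is roughly step * angle (small-angle approximation).
def straightness_profile(readings_arcsec, step_m):
    heights = [0.0]
    for angle in readings_arcsec:
        heights.append(heights[-1] + step_m * angle * ARCSEC_TO_RAD)
    return heights   # cumulative height deviations in metres

# Invented readings over a 1 m guideway measured in 100 mm steps:
profile = straightness_profile([0.5, 1.2, -0.3, 0.8, -1.0, 0.2, 0.9, -0.4, 0.1, 0.6], 0.1)
print([round(h * 1e6, 2) for h in profile])   # deviations in micrometres
```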
| Technology | Optical instruments | null |
5042951 | https://en.wikipedia.org/wiki/Climate%20change | Climate change | Present-day climate change includes both global warming—the ongoing increase in global average temperature—and its wider effects on Earth’s climate system. Climate change in a broader sense also includes previous long-term changes to Earth's climate. The current rise in global temperatures is driven by human activities, especially fossil fuel burning since the Industrial Revolution. Fossil fuel use, deforestation, and some agricultural and industrial practices release greenhouse gases. These gases absorb some of the heat that the Earth radiates after it warms from sunlight, warming the lower atmosphere. Carbon dioxide, the primary gas driving global warming, has increased in concentration by about 50% since the pre-industrial era to levels not seen for millions of years.
Climate change has an increasingly large impact on the environment. Deserts are expanding, while heat waves and wildfires are becoming more common. Amplified warming in the Arctic has contributed to thawing permafrost, retreat of glaciers and sea ice decline. Higher temperatures are also causing more intense storms, droughts, and other weather extremes. Rapid environmental change in mountains, coral reefs, and the Arctic is forcing many species to relocate or become extinct. Even if efforts to minimize future warming are successful, some effects will continue for centuries. These include ocean heating, ocean acidification and sea level rise.
Climate change threatens people with increased flooding, extreme heat, increased food and water scarcity, more disease, and economic loss. Human migration and conflict can also be a result. The World Health Organization calls climate change one of the biggest threats to global health in the 21st century. Societies and ecosystems will experience more severe risks without action to limit warming. Adapting to climate change through efforts like flood control measures or drought-resistant crops partially reduces climate change risks, although some limits to adaptation have already been reached. Poorer communities are responsible for a small share of global emissions, yet have the least ability to adapt and are most vulnerable to climate change.
Many climate change impacts have been observed in the first decades of the 21st century, with 2024 the warmest year on record since regular tracking began in 1850. Additional warming will increase these impacts and can trigger tipping points, such as melting all of the Greenland ice sheet. Under the 2015 Paris Agreement, nations collectively agreed to keep warming "well under 2 °C". However, with the pledges made under the Agreement, global warming would still substantially exceed that goal by the end of the century. Limiting warming to 1.5 °C would require halving emissions by 2030 and achieving net-zero emissions by 2050.
Fossil fuel use can be phased out by conserving energy and switching to energy sources that do not produce significant carbon pollution. These energy sources include wind, solar, hydro, and nuclear power. Cleanly generated electricity can replace fossil fuels for powering transportation, heating buildings, and running industrial processes. Carbon can also be removed from the atmosphere, for instance by increasing forest cover and farming with methods that capture carbon in soil.
Terminology
Before the 1980s it was unclear whether the warming effect of increased greenhouse gases was stronger than the cooling effect of airborne particulates in air pollution. Scientists used the term inadvertent climate modification to refer to human impacts on the climate at this time. In the 1980s, the terms global warming and climate change became more common, often being used interchangeably. Scientifically, global warming refers only to increased surface warming, while climate change describes both global warming and its effects on Earth's climate system, such as precipitation changes.
Climate change can also be used more broadly to include changes to the climate that have happened throughout Earth's history. Global warming (used as early as 1975) became the more popular term after NASA climate scientist James Hansen used it in his 1988 testimony in the U.S. Senate. Since the 2000s, climate change has seen increasing usage. Various scientists, politicians and media may use the terms climate crisis or climate emergency to talk about climate change, and may use the term global heating instead of global warming.
Global temperature rise
Temperatures prior to present-day global warming
Over the last few million years the climate cycled through ice ages. One of the warmer periods was the Last Interglacial, around 125,000 years ago, when temperatures were between 0.5 °C and 1.5 °C warmer than before the start of present-day global warming. This period saw sea levels 5 to 10 metres higher than today. The most recent glacial maximum, 20,000 years ago, was some 5–7 °C colder, with sea levels more than 100 metres lower than today.
Temperatures stabilized in the current interglacial period beginning 11,700 years ago. This period also saw the start of agriculture. Historical patterns of warming and cooling, like the Medieval Warm Period and the Little Ice Age, did not occur at the same time across different regions. Temperatures may have reached as high as those of the late 20th century in a limited set of regions. Climate information for that period comes from climate proxies, such as trees and ice cores.
Warming since the Industrial Revolution
Around 1850 thermometer records began to provide global coverage.
Between the 18th century and 1970 there was little net warming, as the warming impact of greenhouse gas emissions was offset by cooling from sulfur dioxide emissions. Sulfur dioxide causes acid rain, but it also produces sulfate aerosols in the atmosphere, which reflect sunlight and cause global dimming. After 1970, the increasing accumulation of greenhouse gases and controls on sulfur pollution led to a marked increase in temperature.
Ongoing changes in climate have had no precedent for several thousand years. Multiple independent datasets all show worldwide increases in surface temperature, at a rate of around 0.2 °C per decade. The 2014–2023 decade warmed to an average 1.19 °C [1.06–1.30 °C] compared to the pre-industrial baseline (1850–1900). Not every single year was warmer than the last: internal climate variability processes can make any year 0.2 °C warmer or colder than the average. From 1998 to 2013, negative phases of two such processes, Pacific Decadal Oscillation (PDO) and Atlantic Multidecadal Oscillation (AMO) caused a short slower period of warming called the "global warming hiatus". After the "hiatus", the opposite occurred, with 2024 well above the recent average at more than +1.5 °C. This is why the temperature change is defined in terms of a 20-year average, which reduces the noise of hot and cold years and decadal climate patterns, and detects the long-term signal.
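The effect of such averaging can be illustrated with a short Python sketch: an invented series of annual anomalies, consisting of a slow trend plus random year-to-year variability, is smoothed with a 20-year rolling mean, which suppresses individual hot and cold years while retaining the long-term signal.

```python
# Invented annual temperature anomalies: a slow warming trend plus random
# year-to-year variability, smoothed with a 20-year rolling mean.
import random

random.seed(0)
years = list(range(1960, 2025))
annual = [0.02 * max(y - 1970, 0) + random.uniform(-0.2, 0.2) for y in years]

window = 20
rolling = [sum(annual[i - window + 1:i + 1]) / window
           for i in range(window - 1, len(annual))]

print(f"last annual value: {annual[-1]:+.2f} °C")
print(f"last {window}-year mean: {rolling[-1]:+.2f} °C")
```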
A wide range of other observations reinforce the evidence of warming. The upper atmosphere is cooling, because greenhouse gases are trapping heat near the Earth's surface, and so less heat is radiating into space. Warming reduces average snow cover and forces the retreat of glaciers. At the same time, warming also causes greater evaporation from the oceans, leading to more atmospheric humidity, more and heavier precipitation. Plants are flowering earlier in spring, and thousands of animal species have been permanently moving to cooler areas.
Differences by region
Different regions of the world warm at different rates. The pattern is independent of where greenhouse gases are emitted, because the gases persist long enough to diffuse across the planet. Since the pre-industrial period, the average surface temperature over land regions has increased almost twice as fast as the global average surface temperature. This is because oceans lose more heat by evaporation and oceans can store a lot of heat. The thermal energy in the global climate system has grown with only brief pauses since at least 1970, and over 90% of this extra energy has been stored in the ocean. The rest has heated the atmosphere, melted ice, and warmed the continents.
The Northern Hemisphere and the North Pole have warmed much faster than the South Pole and Southern Hemisphere. The Northern Hemisphere not only has much more land, but also more seasonal snow cover and sea ice. As these surfaces flip from reflecting a lot of light to being dark after the ice has melted, they start absorbing more heat. Local black carbon deposits on snow and ice also contribute to Arctic warming. Arctic surface temperatures are increasing between three and four times faster than in the rest of the world. Melting of ice sheets near the poles weakens both the Atlantic and the Antarctic limb of thermohaline circulation, which further changes the distribution of heat and precipitation around the globe.
Future global temperatures
The World Meteorological Organization estimates there is almost a 50% chance of the five-year average global temperature exceeding +1.5 °C between 2024 and 2028. The IPCC expects the 20-year average to exceed +1.5 °C in the early 2030s.
The IPCC Sixth Assessment Report (2021) included projections that by 2100 global warming is very likely to reach 1.0–1.8 °C under a scenario with very low emissions of greenhouse gases, 2.1–3.5 °C under an intermediate emissions scenario,
or 3.3–5.7 °C under a very high emissions scenario. Warming will continue past 2100 in the intermediate and high emission scenarios, with projections of global surface temperatures by the year 2300 similar to those of millions of years ago.
The remaining carbon budget for staying beneath certain temperature increases is determined by modelling the carbon cycle and climate sensitivity to greenhouse gases. According to UNEP, global warming can be kept below 1.5 °C with a 50% chance if emissions after 2023 do not exceed 200 gigatonnes of CO2. This corresponds to around 4 years of current emissions. To stay under 2.0 °C, the carbon budget is 900 gigatonnes of CO2, or 16 years of current emissions.
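The "years of current emissions" figures are simple division of the remaining budget by the annual emission rate. The sketch below assumes annual emissions of roughly 50 gigatonnes of CO2, the value implied by the figures quoted above; actual annual emissions vary by source and year.

```python
# "Years of current emissions" = remaining budget / annual emission rate.
# Assumes roughly 50 GtCO2 emitted per year (implied by "200 Gt ≈ 4 years").
annual_emissions_gt = 50.0

for target, budget_gt in [("1.5 °C (50% chance)", 200.0), ("2.0 °C", 900.0)]:
    years = budget_gt / annual_emissions_gt
    print(f"{target}: {budget_gt:.0f} GtCO2 ≈ {years:.0f} years at current rates")
```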
Causes of recent global temperature rise
The climate system experiences various cycles on its own which can last for years, decades or even centuries. For example, El Niño events cause short-term spikes in surface temperature while La Niña events cause short term cooling. Their relative frequency can affect global temperature trends on a decadal timescale. Other changes are caused by an imbalance of energy from external forcings. Examples of these include changes in the concentrations of greenhouse gases, solar luminosity, volcanic eruptions, and variations in the Earth's orbit around the Sun.
To determine the human contribution to climate change, unique "fingerprints" for all potential causes are developed and compared with both observed patterns and known internal climate variability. For example, solar forcing—whose fingerprint involves warming the entire atmosphere—is ruled out because only the lower atmosphere has warmed. Atmospheric aerosols produce a smaller, cooling effect. Other drivers, such as changes in albedo, are less impactful.
Greenhouse gases
Greenhouse gases are transparent to sunlight, and thus allow it to pass through the atmosphere to heat the Earth's surface. The Earth radiates it as heat, and greenhouse gases absorb a portion of it. This absorption slows the rate at which heat escapes into space, trapping heat near the Earth's surface and warming it over time.
While water vapour (≈50%) and clouds (≈25%) are the biggest contributors to the greenhouse effect, they primarily change as a function of temperature and are therefore mostly considered to be feedbacks that change climate sensitivity. On the other hand, concentrations of gases such as CO2 (≈20%), tropospheric ozone, CFCs and nitrous oxide are added or removed independently from temperature, and are therefore considered to be external forcings that change global temperatures.
Before the Industrial Revolution, naturally occurring amounts of greenhouse gases caused the air near the surface to be about 33 °C warmer than it would have been in their absence. Human activity since the Industrial Revolution, mainly extracting and burning fossil fuels (coal, oil, and natural gas), has increased the amount of greenhouse gases in the atmosphere. In 2022, the concentrations of CO2 and methane had increased by about 50% and 164%, respectively, since 1750. These CO2 levels are higher than they have been at any time during the last 14 million years. Concentrations of methane are far higher than they were over the last 800,000 years.
Global human-caused greenhouse gas emissions in 2019 were equivalent to 59 billion tonnes of CO2. Of these emissions, 75% was CO2, 18% was methane, 4% was nitrous oxide, and 2% was fluorinated gases. CO2 emissions primarily come from burning fossil fuels to provide energy for transport, manufacturing, heating, and electricity. Additional CO2 emissions come from deforestation and industrial processes, which include the CO2 released by the chemical reactions for making cement, steel, aluminum, and fertilizer. Methane emissions come from livestock, manure, rice cultivation, landfills, wastewater, and coal mining, as well as oil and gas extraction. Nitrous oxide emissions largely come from the microbial decomposition of fertilizer.
While methane only lasts in the atmosphere for an average of 12 years, CO2 lasts much longer. The Earth's surface absorbs CO2 as part of the carbon cycle. While plants on land and in the ocean absorb most excess emissions of CO2 every year, that CO2 is returned to the atmosphere when biological matter is digested, burns, or decays. Land-surface carbon sink processes, such as carbon fixation in the soil and photosynthesis, remove about 29% of annual global CO2 emissions. The ocean has absorbed 20 to 30% of emitted CO2 over the last two decades. CO2 is only removed from the atmosphere for the long term when it is stored in the Earth's crust, which is a process that can take millions of years to complete.
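The sink fractions quoted above can be combined into a rough estimate of how fast the atmospheric CO2 concentration rises. The sketch below derives an annual CO2 emission figure from the 2019 totals given earlier and uses the common approximation that 1 ppm of atmospheric CO2 corresponds to roughly 7.8 gigatonnes of CO2; all values are approximate and for illustration only.

```python
# Rough estimate of the annual rise in atmospheric CO2 concentration from the
# figures quoted in the text. The 7.8 GtCO2-per-ppm conversion is a common
# approximation; all numbers here are illustrative.
annual_co2_emissions_gt = 0.75 * 59.0   # 75% of the 59 GtCO2e quoted for 2019
land_sink_fraction = 0.29               # land carbon sinks (from the text)
ocean_sink_fraction = 0.25              # mid-range of the 20-30% ocean uptake
gt_co2_per_ppm = 7.8                    # approximate mass of CO2 per ppm of atmosphere

airborne_gt = annual_co2_emissions_gt * (1 - land_sink_fraction - ocean_sink_fraction)
print(f"roughly {airborne_gt / gt_co2_per_ppm:.1f} ppm added per year")   # a few ppm/yr
```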
Land surface changes
Around 30% of Earth's land area is largely unusable for humans (glaciers, deserts, etc.), 26% is forests, 10% is shrubland and 34% is agricultural land. Deforestation is the main land use change contributor to global warming, as the destroyed trees release CO2 and are not replaced by new trees, removing that carbon sink. Between 2001 and 2018, 27% of deforestation was from permanent clearing to enable agricultural expansion for crops and livestock. Another 24% has been lost to temporary clearing under shifting cultivation agricultural systems. 26% was due to logging for wood and derived products, and wildfires have accounted for the remaining 23%. Some forests have not been fully cleared, but were already degraded by these impacts. Restoring these forests also recovers their potential as a carbon sink.
Local vegetation cover impacts how much of the sunlight gets reflected back into space (albedo), and how much heat is lost by evaporation. For instance, the change from a dark forest to grassland makes the surface lighter, causing it to reflect more sunlight. Deforestation can also modify the release of chemical compounds that influence clouds, and by changing wind patterns. In tropic and temperate areas the net effect is to produce significant warming, and forest restoration can make local temperatures cooler. At latitudes closer to the poles, there is a cooling effect as forest is replaced by snow-covered (and more reflective) plains. Globally, these increases in surface albedo have been the dominant direct influence on temperature from land use change. Thus, land use change to date is estimated to have a slight cooling effect.
Other factors
Aerosols and clouds
Air pollution, in the form of aerosols, affects the climate on a large scale. Aerosols scatter and absorb solar radiation. From 1961 to 1990, a gradual reduction in the amount of sunlight reaching the Earth's surface was observed. This phenomenon is popularly known as global dimming, and is primarily attributed to sulfate aerosols produced by the combustion of fossil fuels with heavy sulfur concentrations like coal and bunker fuel. Smaller contributions come from black carbon (from combustion of fossil fuels and biomass), and from dust. Globally, aerosols have been declining since 1990 due to pollution controls, meaning that they no longer mask greenhouse gas warming as much.
Aerosols also have indirect effects on the Earth's energy budget. Sulfate aerosols act as cloud condensation nuclei and lead to clouds that have more and smaller cloud droplets. These clouds reflect solar radiation more efficiently than clouds with fewer and larger droplets. They also reduce the growth of raindrops, which makes clouds more reflective to incoming sunlight. Indirect effects of aerosols are the largest uncertainty in radiative forcing.
While aerosols typically limit global warming by reflecting sunlight, black carbon in soot that falls on snow or ice can contribute to global warming. Not only does this increase the absorption of sunlight, it also increases melting and sea-level rise. Limiting new black carbon deposits in the Arctic could reduce global warming by 0.2 °C by 2050. The effect of decreasing sulfur content of fuel oil for ships since 2020 is estimated to cause an additional 0.05 °C increase in global mean temperature by 2050.
Solar and volcanic activity
As the Sun is the Earth's primary energy source, changes in incoming sunlight directly affect the climate system. Solar irradiance has been measured directly by satellites, and indirect measurements are available from the early 1600s onwards. Since 1880, there has been no upward trend in the amount of the Sun's energy reaching the Earth, in contrast to the warming of the lower atmosphere (the troposphere). The upper atmosphere (the stratosphere) would also be warming if the Sun was sending more energy to Earth, but instead, it has been cooling.
This is consistent with greenhouse gases preventing heat from leaving the Earth's atmosphere.
Explosive volcanic eruptions can release gases, dust and ash that partially block sunlight and reduce temperatures, or they can send water vapour into the atmosphere, which adds to greenhouse gases and increases temperatures. These impacts on temperature only last for several years, because both water vapour and volcanic material have low persistence in the atmosphere. Volcanic CO2 emissions are more persistent, but they are equivalent to less than 1% of current human-caused CO2 emissions. Volcanic activity still represents the single largest natural impact (forcing) on temperature in the industrial era. Yet, like the other natural forcings, it has had negligible impacts on global temperature trends since the Industrial Revolution.
Climate change feedbacks
The climate system's response to an initial forcing is shaped by feedbacks, which either amplify or dampen the change. Self-reinforcing or positive feedbacks increase the response, while balancing or negative feedbacks reduce it. The main reinforcing feedbacks are the water-vapour feedback, the ice–albedo feedback, and the net effect of clouds. The primary balancing mechanism is radiative cooling, as Earth's surface gives off more heat to space in response to rising temperature. In addition to temperature feedbacks, there are feedbacks in the carbon cycle, such as the fertilizing effect of CO2 on plant growth. Feedbacks are expected to trend in a positive direction as greenhouse gas emissions continue, raising climate sensitivity.
These feedback processes alter the pace of global warming. For instance, warmer air can hold more moisture in the form of water vapour, which is itself a potent greenhouse gas. Warmer air can also make clouds higher and thinner, and therefore more insulating, increasing climate warming. The reduction of snow cover and sea ice in the Arctic is another major feedback; it reduces the reflectivity of the Earth's surface in the region and accelerates Arctic warming. This additional warming also contributes to permafrost thawing, which releases methane and CO2 into the atmosphere.
Around half of human-caused CO2 emissions have been absorbed by land plants and by the oceans. This fraction is not static: if future emissions decrease, the Earth will be able to absorb up to around 70%. If they increase substantially, the Earth will still absorb more carbon than now, but the overall fraction will decrease to below 40%. This is because climate change increases droughts and heat waves that eventually inhibit plant growth on land, and soils will release more carbon from dead plants when they are warmer. The rate at which oceans absorb atmospheric carbon will be lowered as they become more acidic and experience changes in thermohaline circulation and phytoplankton distribution. Uncertainty over feedbacks, particularly cloud cover, is the major reason why different climate models project different magnitudes of warming for a given amount of emissions.
Modelling
A climate model is a representation of the physical, chemical and biological processes that affect the climate system. Models include natural processes like changes in the Earth's orbit, historical changes in the Sun's activity, and volcanic forcing. Models are used to estimate the degree of warming future emissions will cause when accounting for the strength of climate feedbacks. Models also predict the circulation of the oceans, the annual cycle of the seasons, and the flows of carbon between the land surface and the atmosphere.
The physical realism of models is tested by examining their ability to simulate current or past climates. Past models have underestimated the rate of Arctic sea ice shrinkage and underestimated the rate of precipitation increase. Sea level rise since 1990 was underestimated in older models, but more recent models agree well with observations. The 2017 United States National Climate Assessment notes that "climate models may still be underestimating or missing relevant feedback processes". Additionally, climate models may be unable to adequately predict short-term regional climatic shifts.
A subset of climate models add societal factors to a physical climate model. These models simulate how population, economic growth, and energy use affect and interact with the physical climate. With this information, these models can produce scenarios of future greenhouse gas emissions. This is then used as input for physical climate models and carbon cycle models to predict how atmospheric concentrations of greenhouse gases might change. Depending on the socioeconomic scenario and the mitigation scenario, models produce atmospheric CO2 concentrations that range widely between 380 and 1400 ppm.
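At the opposite extreme of complexity from the models described above, a zero-dimensional energy-balance model reduces the climate system to a single equation, C dT/dt = F - λT, where T is the global temperature anomaly, F the radiative forcing and λ the feedback parameter. The Python sketch below is such a toy model; its parameter values are illustrative assumptions rather than results from any particular model.

```python
# Toy zero-dimensional energy-balance model: C dT/dt = F - lam * T.
# Parameter values are illustrative only.
C = 8.0            # effective heat capacity (W·yr·m^-2·K^-1), roughly the ocean mixed layer
lam = 1.3          # climate feedback parameter (W·m^-2·K^-1)
forcing_2x = 3.7   # approximate radiative forcing from doubled CO2 (W·m^-2)

T = 0.0            # global mean temperature anomaly (K)
dt = 1.0           # time step of one year
for year in range(500):
    T += dt * (forcing_2x - lam * T) / C   # abrupt, sustained CO2 doubling

print(f"after 500 years: {T:.2f} K; equilibrium: {forcing_2x / lam:.2f} K")
```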
Impacts
Environmental effects
The environmental effects of climate change are broad and far-reaching, affecting oceans, ice, and weather. Changes may occur gradually or rapidly. Evidence for these effects comes from studying climate change in the past, from modelling, and from modern observations. Since the 1950s, droughts and heat waves have appeared simultaneously with increasing frequency. Extremely wet or dry events within the monsoon period have increased in India and East Asia. Monsoonal precipitation over the Northern Hemisphere has increased since 1980. The rainfall rate and intensity of hurricanes and typhoons are likely increasing, and their geographic range is likely expanding poleward in response to climate warming. The frequency of tropical cyclones has not increased as a result of climate change.
Global sea level is rising as a consequence of thermal expansion and the melting of glaciers and ice sheets. Sea level rise has increased over time, reaching 4.8 cm per decade between 2014 and 2023. Over the 21st century, the IPCC projects 32–62 cm of sea level rise under a low emission scenario, 44–76 cm under an intermediate one and 65–101 cm under a very high emission scenario. Marine ice sheet instability processes in Antarctica may add substantially to these values, including the possibility of a 2-meter sea level rise by 2100 under high emissions.
Climate change has led to decades of shrinking and thinning of the Arctic sea ice. While ice-free summers are expected to be rare at 1.5 °C of warming, they are set to occur once every three to ten years at a warming level of 2 °C. Higher atmospheric CO2 concentrations cause more CO2 to dissolve in the oceans, which is making them more acidic. Because oxygen is less soluble in warmer water, its concentrations in the ocean are decreasing, and dead zones are expanding.
Tipping points and long-term impacts
Greater degrees of global warming increase the risk of passing through 'tipping points'—thresholds beyond which certain major impacts can no longer be avoided even if temperatures return to their previous state. For instance, the Greenland ice sheet is already melting, but if global warming reaches levels between 1.7 °C and 2.3 °C, its melting will continue until it fully disappears. If the warming is later reduced to 1.5 °C or less, it will still lose a lot more ice than if the warming was never allowed to reach the threshold in the first place. While the ice sheets would melt over millennia, other tipping points would occur faster and give societies less time to respond. The collapse of major ocean currents like the Atlantic meridional overturning circulation (AMOC), and irreversible damage to key ecosystems like the Amazon rainforest and coral reefs can unfold in a matter of decades.
The long-term effects of climate change on oceans include further ice melt, ocean warming, sea level rise, ocean acidification and ocean deoxygenation. The timescale of these long-term impacts is centuries to millennia, due to CO2's long atmospheric lifetime, and the result is a large estimated total sea level rise after 2000 years. Oceanic CO2 uptake is slow enough that ocean acidification will also continue for hundreds to thousands of years. The deep oceans are also already committed to losing over 10% of their dissolved oxygen from the warming that has occurred to date. Further, the West Antarctic ice sheet appears committed to practically irreversible melting, which would raise sea levels substantially over approximately 2000 years.
Nature and wildlife
Recent warming has driven many terrestrial and freshwater species poleward and towards higher altitudes. For instance, the range of hundreds of North American birds has shifted northward at an average rate of 1.5 km/year over the past 55 years. Higher atmospheric CO2 levels and an extended growing season have resulted in global greening. However, heatwaves and drought have reduced ecosystem productivity in some regions. The future balance of these opposing effects is unclear. A related phenomenon driven by climate change is woody plant encroachment, affecting up to 500 million hectares globally. Climate change has contributed to the expansion of drier climate zones, such as the expansion of deserts in the subtropics. The size and speed of global warming is making abrupt changes in ecosystems more likely. Overall, it is expected that climate change will result in the extinction of many species.
The oceans have heated more slowly than the land, but plants and animals in the ocean have migrated towards the colder poles faster than species on land. Just as on land, heat waves in the ocean occur more frequently due to climate change, harming a wide range of organisms such as corals, kelp, and seabirds. Ocean acidification makes it harder for marine calcifying organisms such as mussels, barnacles and corals to produce shells and skeletons; and heatwaves have bleached coral reefs. Harmful algal blooms enhanced by climate change and eutrophication lower oxygen levels, disrupt food webs and cause great loss of marine life. Coastal ecosystems are under particular stress. Almost half of global wetlands have disappeared due to climate change and other human impacts. Plants have come under increased stress from damage by insects.
Humans
The effects of climate change are impacting humans everywhere in the world. Impacts can be observed on all continents and ocean regions, with low-latitude, less developed areas facing the greatest risk. Continued warming has potentially "severe, pervasive and irreversible impacts" for people and ecosystems. The risks are unevenly distributed, but are generally greater for disadvantaged people in developing and developed countries.
Health and food
The World Health Organization calls climate change one of the biggest threats to global health in the 21st century. Scientists have warned about the irreversible harms it poses. Extreme weather events affect public health, and food and water security. Temperature extremes lead to increased illness and death. Climate change increases the intensity and frequency of extreme weather events. It can affect transmission of infectious diseases, such as dengue fever and malaria. According to the World Economic Forum, 14.5 million more deaths are expected due to climate change by 2050. 30% of the global population currently live in areas where extreme heat and humidity are already associated with excess deaths. By 2100, 50% to 75% of the global population would live in such areas.
While total crop yields have been increasing in the past 50 years due to agricultural improvements, climate change has already decreased the rate of yield growth. Fisheries have been negatively affected in multiple regions. While agricultural productivity has been positively affected in some high-latitude areas, mid- and low-latitude areas have been negatively affected. According to the World Economic Forum, an increase in drought in certain regions could cause 3.2 million deaths from malnutrition by 2050 and stunting in children. With 2 °C warming, global livestock headcounts could decline by 7–10% by 2050, as less animal feed will be available. If emissions continue to increase for the rest of the century, then over 9 million climate-related deaths would occur annually by 2100.
Livelihoods and inequality
Economic damages due to climate change may be severe and there is a chance of disastrous consequences. Severe impacts are expected in South-East Asia and sub-Saharan Africa, where most of the local inhabitants are dependent upon natural and agricultural resources. Heat stress can prevent outdoor labourers from working. If warming reaches 4 °C then labour capacity in those regions could be reduced by 30 to 50%. The World Bank estimates that between 2016 and 2030, climate change could drive over 120 million people into extreme poverty without adaptation.
Inequalities based on wealth and social status have worsened due to climate change. Major difficulties in mitigating, adapting to, and recovering from climate shocks are faced by marginalized people who have less control over resources. Indigenous people, who are subsistent on their land and ecosystems, will face endangerment to their wellness and lifestyles due to climate change. An expert elicitation concluded that the role of climate change in armed conflict has been small compared to factors such as socio-economic inequality and state capabilities.
While women are not inherently more at risk from climate change and shocks, limits on women's resources and discriminatory gender norms constrain their adaptive capacity and resilience. For example, women's work burdens, including hours worked in agriculture, tend to decline less than men's during climate shocks such as heat stress.
Climate migration
Low-lying islands and coastal communities are threatened by sea level rise, which makes urban flooding more common. Sometimes, land is permanently lost to the sea. This could lead to statelessness for people in island nations, such as the Maldives and Tuvalu. In some regions, the rise in temperature and humidity may be too severe for humans to adapt to. With worst-case climate change, models project that almost one-third of humanity might live in Sahara-like uninhabitable and extremely hot climates.
These factors can drive climate or environmental migration, within and between countries. More people are expected to be displaced because of sea level rise, extreme weather and conflict from increased competition over natural resources. Climate change may also increase vulnerability, leading to "trapped populations" who are not able to move due to a lack of resources.
Reducing and recapturing emissions
Climate change can be mitigated by reducing the rate at which greenhouse gases are emitted into the atmosphere, and by increasing the rate at which carbon dioxide is removed from the atmosphere. To limit global warming to less than 1.5 °C, global greenhouse gas emissions need to be net zero by 2050, or by 2070 with a 2 °C target. This requires far-reaching, systemic changes on an unprecedented scale in energy, land, cities, transport, buildings, and industry.
The United Nations Environment Programme estimates that countries need to triple their pledges under the Paris Agreement within the next decade to limit global warming to 2 °C. An even greater level of reduction is required to meet the 1.5 °C goal. With pledges made under the Paris Agreement as of 2024, there would be a 66% chance that global warming is kept under 2.8 °C by the end of the century (range: 1.9–3.7 °C, depending on exact implementation and technological progress). When only current policies are considered, this rises to 3.1 °C. Globally, limiting warming to 2 °C may result in higher economic benefits than economic costs.
Although there is no single pathway to limit global warming to 1.5 or 2 °C, most scenarios and strategies see a major increase in the use of renewable energy in combination with increased energy efficiency measures to generate the needed greenhouse gas reductions. To reduce pressures on ecosystems and enhance their carbon sequestration capabilities, changes would also be necessary in agriculture and forestry, such as preventing deforestation and restoring natural ecosystems by reforestation.
Other approaches to mitigating climate change have a higher level of risk. Scenarios that limit global warming to 1.5 °C typically project the large-scale use of carbon dioxide removal methods over the 21st century. There are concerns, though, about over-reliance on these technologies, and environmental impacts. Solar radiation modification (SRM) is under discussion as a possible supplement to reductions in emissions. However, SRM raises significant ethical and global governance concerns, and its risks are not well understood.
Clean energy
Renewable energy is key to limiting climate change. For decades, fossil fuels have accounted for roughly 80% of the world's energy use. The remaining share has been split between nuclear power and renewables (including hydropower, bioenergy, wind and solar power and geothermal energy). Fossil fuel use is expected to peak in absolute terms prior to 2030 and then to decline, with coal use experiencing the sharpest reductions. Renewables represented 86% of all new electricity generation installed in 2023. Other forms of clean energy, such as nuclear and hydropower, currently have a larger share of the energy supply. However, their future growth forecasts appear limited in comparison.
While solar panels and onshore wind are now among the cheapest forms of adding new power generation capacity in many locations, green energy policies are needed to achieve a rapid transition from fossil fuels to renewables. To achieve carbon neutrality by 2050, renewable energy would become the dominant form of electricity generation, rising to 85% or more by 2050 in some scenarios. Investment in coal would be eliminated and coal use nearly phased out by 2050.
Electricity generated from renewable sources would also need to become the main energy source for heating and transport. Transport can switch away from internal combustion engine vehicles and towards electric vehicles, public transit, and active transport (cycling and walking). For shipping and flying, low-carbon fuels would reduce emissions. Heating could be increasingly decarbonized with technologies like heat pumps.
There are obstacles to the continued rapid growth of clean energy, including renewables. Wind and solar produce energy intermittently and with seasonal variability. Traditionally, hydro dams with reservoirs and fossil fuel power plants have been used when variable energy production is low. Going forward, battery storage can be expanded, energy demand and supply can be matched, and long-distance transmission can smooth variability of renewable outputs. Bioenergy is often not carbon-neutral and may have negative consequences for food security. The growth of nuclear power is constrained by controversy around radioactive waste, nuclear weapon proliferation, and accidents. Hydropower growth is limited by the fact that the best sites have been developed, and new projects are confronting increased social and environmental concerns.
Low-carbon energy improves human health by minimizing climate change as well as reducing air pollution deaths, which were estimated at 7 million annually in 2016. Meeting the Paris Agreement goals that limit warming to a 2 °C increase could save about a million of those lives per year by 2050, whereas limiting global warming to 1.5 °C could save millions and simultaneously increase energy security and reduce poverty. Improving air quality also has economic benefits which may be larger than mitigation costs.
Energy conservation
Reducing energy demand is another major aspect of reducing emissions. If less energy is needed, there is more flexibility for clean energy development. It also makes it easier to manage the electricity grid, and minimizes carbon-intensive infrastructure development. Major increases in energy efficiency investment will be required to achieve climate goals, comparable to the level of investment in renewable energy. Several COVID-19 related changes in energy use patterns, energy efficiency investments, and funding have made forecasts for this decade more difficult and uncertain.
Strategies to reduce energy demand vary by sector. In the transport sector, passengers and freight can switch to more efficient travel modes, such as buses and trains, or use electric vehicles. Industrial strategies to reduce energy demand include improving heating systems and motors, designing less energy-intensive products, and increasing product lifetimes. In the building sector the focus is on better design of new buildings, and higher levels of energy efficiency in retrofitting. The use of technologies like heat pumps can also increase building energy efficiency.
Agriculture and industry
Agriculture and forestry face a triple challenge of limiting greenhouse gas emissions, preventing the further conversion of forests to agricultural land, and meeting increases in world food demand. A set of actions could reduce agriculture and forestry-based emissions by two-thirds from 2010 levels. These include reducing growth in demand for food and other agricultural products, increasing land productivity, protecting and restoring forests, and reducing greenhouse gas emissions from agricultural production.
On the demand side, a key component of reducing emissions is shifting people towards plant-based diets. Eliminating the production of livestock for meat and dairy would eliminate about three-quarters of all emissions from agriculture and other land use. Livestock also occupy 37% of ice-free land area on Earth and consume feed from the 12% of land area used for crops, driving deforestation and land degradation.
Steel and cement production are responsible for about 13% of industrial emissions. In these industries, carbon-intensive materials such as coke and lime play an integral role in production, so reducing emissions requires research into alternative chemistries. Where energy production or CO2-intensive heavy industries continue to produce waste CO2, technology can sometimes be used to capture and store most of the gas instead of releasing it to the atmosphere. This technology, carbon capture and storage (CCS), could have a critical but limited role in reducing emissions. It is relatively expensive and has been deployed only to an extent that removes around 0.1% of annual greenhouse gas emissions.
Carbon dioxide removal
Natural carbon sinks can be enhanced to sequester significantly larger amounts of CO2 beyond naturally occurring levels. Reforestation and afforestation (planting forests where there were none before) are among the most mature sequestration techniques, although the latter raises food security concerns. Farmers can promote sequestration of carbon in soils through practices such as use of winter cover crops, reducing the intensity and frequency of tillage, and using compost and manure as soil amendments. Forest and landscape restoration yields many benefits for the climate, including greenhouse gas emissions sequestration and reduction. Restoration/recreation of coastal wetlands, prairie plots and seagrass meadows increases the uptake of carbon into organic matter. When carbon is sequestered in soils and in organic matter such as trees, there is a risk of the carbon being re-released into the atmosphere later through changes in land use, fire, or other changes in ecosystems.
The use of bioenergy in conjunction with carbon capture and storage (BECCS) can result in net negative emissions as CO2 is drawn from the atmosphere. It remains highly uncertain whether carbon dioxide removal techniques will be able to play a large role in limiting warming to 1.5 °C. Policy decisions that rely on carbon dioxide removal increase the risk of global warming rising beyond international goals.
Adaptation
Adaptation is "the process of adjustment to current or expected changes in climate and its effects". Without additional mitigation, adaptation cannot avert the risk of "severe, widespread and irreversible" impacts. More severe climate change requires more transformative adaptation, which can be prohibitively expensive. The capacity and potential for humans to adapt is unevenly distributed across different regions and populations, and developing countries generally have less. The first two decades of the 21st century saw an increase in adaptive capacity in most low- and middle-income countries with improved access to basic sanitation and electricity, but progress is slow. Many countries have implemented adaptation policies. However, there is a considerable gap between necessary and available finance.
Adaptation to sea level rise consists of avoiding at-risk areas, learning to live with increased flooding, and building flood controls. If that fails, managed retreat may be needed. There are economic barriers for tackling dangerous heat impact. Avoiding strenuous work or having air conditioning is not possible for everybody. In agriculture, adaptation options include a switch to more sustainable diets, diversification, erosion control, and genetic improvements for increased tolerance to a changing climate. Insurance allows for risk-sharing, but is often difficult to get for people on lower incomes. Education, migration and early warning systems can reduce climate vulnerability. Planting mangroves or encouraging other coastal vegetation can buffer storms.
Ecosystems adapt to climate change, a process that can be supported by human intervention. By increasing connectivity between ecosystems, species can migrate to more favourable climate conditions. Species can also be introduced to areas acquiring a favourable climate. Protection and restoration of natural and semi-natural areas helps build resilience, making it easier for ecosystems to adapt. Many of the actions that promote adaptation in ecosystems, also help humans adapt via ecosystem-based adaptation. For instance, restoration of natural fire regimes makes catastrophic fires less likely, and reduces human exposure. Giving rivers more space allows for more water storage in the natural system, reducing flood risk. Restored forest acts as a carbon sink, but planting trees in unsuitable regions can exacerbate climate impacts.
There are synergies but also trade-offs between adaptation and mitigation. An example for synergy is increased food productivity, which has large benefits for both adaptation and mitigation. An example of a trade-off is that increased use of air conditioning allows people to better cope with heat, but increases energy demand. Another trade-off example is that more compact urban development may reduce emissions from transport and construction, but may also increase the urban heat island effect, exposing people to heat-related health risks.
Policies and politics
Countries that are most vulnerable to climate change have typically been responsible for a small share of global emissions. This raises questions about justice and fairness. Limiting global warming makes it much easier to achieve the UN's Sustainable Development Goals, such as eradicating poverty and reducing inequalities. The connection is recognized in Sustainable Development Goal 13 which is to "take urgent action to combat climate change and its impacts". The goals on food, clean water and ecosystem protection have synergies with climate mitigation.
The geopolitics of climate change is complex. It has often been framed as a free-rider problem, in which all countries benefit from mitigation done by other countries, but individual countries would lose from switching to a low-carbon economy themselves. Sometimes mitigation also has localized benefits though. For instance, the benefits of a coal phase-out to public health and local environments exceed the costs in almost all regions. Furthermore, net importers of fossil fuels win economically from switching to clean energy, causing net exporters to face stranded assets: fossil fuels they cannot sell.
Policy options
A wide range of policies, regulations, and laws are being used to reduce emissions. As of 2019, carbon pricing covers about 20% of global greenhouse gas emissions. Carbon can be priced with carbon taxes and emissions trading systems. Direct global fossil fuel subsidies reached $319 billion in 2017, and $5.2 trillion when indirect costs such as air pollution are priced in. Ending these can cause a 28% reduction in global carbon emissions and a 46% reduction in air pollution deaths. Money saved on fossil subsidies could be used to support the transition to clean energy instead. More direct methods to reduce greenhouse gases include vehicle efficiency standards, renewable fuel standards, and air pollution regulations on heavy industry. Several countries require utilities to increase the share of renewables in power production.
Climate justice
Policy designed through the lens of climate justice tries to address human rights issues and social inequality. According to proponents of climate justice, the costs of climate adaptation should be paid by those most responsible for climate change, while the beneficiaries of payments should be those suffering impacts. One way this can be addressed in practice is to have wealthy nations pay poorer countries to adapt.
Oxfam found that in 2023 the wealthiest 10% of people were responsible for 50% of global emissions, while the bottom 50% were responsible for just 8%. Production of emissions is another way to look at responsibility: under that approach, the top 21 fossil fuel companies would owe cumulative climate reparations of $5.4 trillion over the period 2025–2050. To achieve a just transition, people working in the fossil fuel sector would also need other jobs, and their communities would need investments.
International climate agreements
Nearly all countries in the world are parties to the 1994 United Nations Framework Convention on Climate Change (UNFCCC). The goal of the UNFCCC is to prevent dangerous human interference with the climate system. As stated in the convention, this requires that greenhouse gas concentrations are stabilized in the atmosphere at a level where ecosystems can adapt naturally to climate change, food production is not threatened, and economic development can be sustained. The UNFCCC does not itself restrict emissions but rather provides a framework for protocols that do. Global emissions have risen since the UNFCCC was signed. Its yearly conferences are the stage of global negotiations.
The 1997 Kyoto Protocol extended the UNFCCC and included legally binding commitments for most developed countries to limit their emissions. During the negotiations, the G77 (representing developing countries) pushed for a mandate requiring developed countries to "[take] the lead" in reducing their emissions, since developed countries contributed most to the accumulation of greenhouse gases in the atmosphere. Per-capita emissions were also still relatively low in developing countries and developing countries would need to emit more to meet their development needs.
The 2009 Copenhagen Accord has been widely portrayed as disappointing because of its low goals, and was rejected by poorer nations including the G77. Associated parties aimed to limit the global temperature rise to below 2 °C. The Accord set the goal of sending $100 billion per year to developing countries for mitigation and adaptation by 2020, and proposed the founding of the Green Climate Fund. Only $83.3 billion of this had been delivered, and the target was not expected to be achieved until 2023.
In 2015 all UN countries negotiated the Paris Agreement, which aims to keep global warming well below 2.0 °C and contains an aspirational goal of keeping warming under 1.5 °C. The agreement replaced the Kyoto Protocol. Unlike Kyoto, no binding emission targets were set in the Paris Agreement. Instead, a set of procedures was made binding. Countries have to regularly set ever more ambitious goals and reevaluate these goals every five years. The Paris Agreement restated that developing countries must be financially supported. 194 states and the European Union have signed the treaty and 191 states and the EU have ratified or acceded to the agreement.
The 1987 Montreal Protocol, an international agreement to phase out production of ozone-depleting gases, has had benefits for climate change mitigation. Several ozone-depleting gases like chlorofluorocarbons are powerful greenhouse gases, so banning their production and usage may have avoided a temperature rise of 0.5 °C–1.0 °C, as well as additional warming by preventing damage to vegetation from ultraviolet radiation. It is estimated that the agreement has been more effective at curbing greenhouse gas emissions than the Kyoto Protocol specifically designed to do so. The most recent amendment to the Montreal Protocol, the 2016 Kigali Amendment, committed to reducing the emissions of hydrofluorocarbons, which served as a replacement for banned ozone-depleting gases and are also potent greenhouse gases. Should countries comply with the amendment, a warming of 0.3 °C–0.5 °C is estimated to be avoided.
National responses
In 2019, the United Kingdom parliament became the first national government to declare a climate emergency. Other countries and jurisdictions followed suit. That same year, the European Parliament declared a "climate and environmental emergency". The European Commission presented its European Green Deal with the goal of making the EU carbon-neutral by 2050. In 2021, the European Commission released its "Fit for 55" legislation package, which contains guidelines for the car industry; all new cars on the European market must be zero-emission vehicles from 2035.
Major countries in Asia have made similar pledges: South Korea and Japan have committed to become carbon-neutral by 2050, and China by 2060. While India has strong incentives for renewables, it also plans a significant expansion of coal in the country. Vietnam is among very few coal-dependent, fast-developing countries that pledged to phase out unabated coal power by the 2040s or as soon as possible thereafter.
As of 2021, based on information from 48 national climate plans, which represent 40% of the parties to the Paris Agreement, estimated total greenhouse gas emissions will be 0.5% lower compared to 2010 levels, below the 45% or 25% reduction goals to limit global warming to 1.5 °C or 2 °C, respectively.
Society
Denial and misinformation
Public debate about climate change has been strongly affected by climate change denial and misinformation, which originated in the United States and has since spread to other countries, particularly Canada and Australia. Climate change denial has originated from fossil fuel companies, industry groups, conservative think tanks, and contrarian scientists. Like the tobacco industry, the main strategy of these groups has been to manufacture doubt about climate-change related scientific data and results. People who hold unwarranted doubt about climate change are called climate change "skeptics", although "contrarians" or "deniers" are more appropriate terms.
There are different variants of climate denial: some deny that warming takes place at all, some acknowledge warming but attribute it to natural influences, and some minimize the negative impacts of climate change. Manufacturing uncertainty about the science later developed into a manufactured controversy: creating the belief that there is significant uncertainty about climate change within the scientific community to delay policy changes. Strategies to promote these ideas include criticism of scientific institutions, and questioning the motives of individual scientists. An echo chamber of climate-denying blogs and media has further fomented misunderstanding of climate change.
Public awareness and opinion
Climate change came to international public attention in the late 1980s. Due to media coverage in the early 1990s, people often confused climate change with other environmental issues like ozone depletion. In popular culture, the climate fiction movie The Day After Tomorrow (2004) and the Al Gore documentary An Inconvenient Truth (2006) focused on climate change.
Significant regional, gender, age and political differences exist in both public concern for, and understanding of, climate change. More highly educated people, and in some countries, women and younger people, were more likely to see climate change as a serious threat. College biology textbooks from the 2010s featured less content on climate change compared to those from the preceding decade, with decreasing emphasis on solutions. Partisan gaps also exist in many countries, and countries with high CO2 emissions tend to be less concerned. Views on causes of climate change vary widely between countries. Concern has increased over time, and a majority of citizens in many countries now express a high level of worry about climate change, or view it as a global emergency. Higher levels of worry are associated with stronger public support for policies that address climate change.
Climate movement
Climate protests demand that political leaders take action to prevent climate change. They can take the form of public demonstrations, fossil fuel divestment, lawsuits and other activities. Prominent demonstrations include the School Strike for Climate. In this initiative, young people across the globe have been protesting since 2018 by skipping school on Fridays, inspired by Swedish activist and then-teenager Greta Thunberg. Mass civil disobedience actions by groups like Extinction Rebellion have protested by disrupting roads and public transport.
Litigation is increasingly used as a tool to strengthen climate action from public institutions and companies. Activists also initiate lawsuits which target governments and demand that they take ambitious action or enforce existing laws on climate change. Lawsuits against fossil-fuel companies generally seek compensation for loss and damage.
History
Early discoveries
Scientists in the 19th century such as Alexander von Humboldt began to foresee the effects of climate change. In the 1820s, Joseph Fourier proposed the greenhouse effect to explain why Earth's temperature was higher than the Sun's energy alone could explain. Earth's atmosphere is transparent to sunlight, so sunlight reaches the surface where it is converted to heat. However, the atmosphere is not transparent to heat radiating from the surface, and captures some of that heat, which in turn warms the planet.
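As a rough worked illustration of Fourier's reasoning (not taken from the source; it assumes standard modern values for the solar constant S ≈ 1361 W/m² and a planetary albedo A ≈ 0.3), a planet in radiative balance without an infrared-absorbing atmosphere would radiate at an effective temperature well below Earth's observed surface mean of about 288 K:

```latex
% Illustrative calculation with assumed standard values (not from the source):
% effective radiating temperature of an Earth without a greenhouse effect.
T_e = \left(\frac{S(1-A)}{4\sigma}\right)^{1/4}
    = \left(\frac{1361\,\mathrm{W\,m^{-2}}\times(1-0.3)}
                 {4\times 5.67\times 10^{-8}\,\mathrm{W\,m^{-2}\,K^{-4}}}\right)^{1/4}
    \approx 255\ \mathrm{K}.
```

The roughly 33 K gap between this value and the observed surface temperature corresponds to the heat captured by the atmosphere that Fourier described.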
In 1856 Eunice Newton Foote demonstrated that the warming effect of the Sun is greater for air with water vapour than for dry air, and that the effect is even greater with carbon dioxide (CO2). She concluded that "An atmosphere of that gas would give to our earth a high temperature..."
Starting in 1859, John Tyndall established that nitrogen and oxygen—together totalling 99% of dry air—are transparent to radiated heat. However, water vapour and gases such as methane and carbon dioxide absorb radiated heat and re-radiate that heat into the atmosphere. Tyndall proposed that changes in the concentrations of these gases may have caused climatic changes in the past, including ice ages.
Svante Arrhenius noted that water vapour in air continuously varied, but the CO2 concentration in air was influenced by long-term geological processes. Warming from increased CO2 levels would increase the amount of water vapour, amplifying warming in a positive feedback loop. In 1896, he published the first climate model of its kind, projecting that halving CO2 levels could have produced a drop in temperature initiating an ice age. Arrhenius calculated the temperature increase expected from doubling CO2 to be around 5–6 °C. Other scientists were initially sceptical and believed that the greenhouse effect was saturated so that adding more CO2 would make no difference, and that the climate would be self-regulating. Beginning in 1938, Guy Stewart Callendar published evidence that climate was warming and CO2 levels were rising, but his calculations met the same objections.
Development of a scientific consensus
In the 1950s, Gilbert Plass created a detailed computer model that included different atmospheric layers and the infrared spectrum. This model predicted that increasing CO2 levels would cause warming. Around the same time, Hans Suess found evidence that CO2 levels had been rising, and Roger Revelle showed that the oceans would not absorb the increase. The two scientists subsequently helped Charles Keeling to begin a record of continued CO2 increase, which has been termed the "Keeling Curve". Scientists alerted the public, and the dangers were highlighted at James Hansen's 1988 Congressional testimony. The Intergovernmental Panel on Climate Change (IPCC), set up in 1988 to provide formal advice to the world's governments, spurred interdisciplinary research. As part of the IPCC reports, scientists assess the scientific discussion that takes place in peer-reviewed journal articles.
There is a near-complete scientific consensus that the climate is warming and that this is caused by human activities. As of 2019, agreement in recent literature reached over 99%. No scientific body of national or international standing disagrees with this view. Consensus has further developed that some form of action should be taken to protect people against the impacts of climate change. National science academies have called on world leaders to cut global emissions. The 2021 IPCC Assessment Report stated that it is "unequivocal" that climate change is caused by humans.
| Physical sciences | Earth science | null |
6631661 | https://en.wikipedia.org/wiki/Transportation%20theory%20%28mathematics%29 | Transportation theory (mathematics) | In mathematics and economics, transportation theory or transport theory is a name given to the study of optimal transportation and allocation of resources. The problem was formalized by the French mathematician Gaspard Monge in 1781.
In the 1920s A.N. Tolstoi was one of the first to study the transportation problem mathematically. In 1930, in the collection Transportation Planning Volume I for the National Commissariat of Transportation of the Soviet Union, he published a paper "Methods of Finding the Minimal Kilometrage in Cargo-transportation in space".
Major advances were made in the field during World War II by the Soviet mathematician and economist Leonid Kantorovich. Consequently, the problem as it is stated is sometimes known as the Monge–Kantorovich transportation problem. The linear programming formulation of the transportation problem is also known as the Hitchcock–Koopmans transportation problem.
Motivation
Mines and factories
Suppose that we have a collection of mines mining iron ore, and a collection of factories which use the iron ore that the mines produce. Suppose for the sake of argument that these mines and factories form two disjoint subsets M and F of the Euclidean plane R². Suppose also that we have a cost function c : M × F → [0, ∞), so that c(m, f) is the cost of transporting one shipment of iron from m to f. For simplicity, we ignore the time taken to do the transporting. We also assume that each mine can supply only one factory (no splitting of shipments) and that each factory requires precisely one shipment to be in operation (factories cannot work at half- or double-capacity). Having made the above assumptions, a transport plan is a bijection T : M → F.
In other words, each mine supplies precisely one target factory and each factory is supplied by precisely one mine.
We wish to find the optimal transport plan, the plan whose total cost
∑_{m ∈ M} c(m, T(m))
is the least of all possible transport plans from M to F. This motivating special case of the transportation problem is an instance of the assignment problem.
More specifically, it is equivalent to finding a minimum weight matching in a bipartite graph.
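As a minimal computational sketch (not from the source; the mine and factory coordinates below are made up for illustration), the special case above can be solved as a minimum-weight bipartite matching, for example with SciPy's Hungarian-algorithm solver:

```python
# Illustrative sketch (assumed data): the mines-and-factories assignment problem
# solved as a minimum-weight bipartite matching.
import numpy as np
from scipy.optimize import linear_sum_assignment

# Hypothetical planar coordinates for three mines and three factories.
mines = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 2.0]])
factories = np.array([[2.0, 1.0], [0.5, 0.5], [3.0, 3.0]])

# Cost c(m, f) taken here as the Euclidean distance between mine m and factory f.
cost = np.linalg.norm(mines[:, None, :] - factories[None, :, :], axis=-1)

# Hungarian-algorithm solver: returns the bijection minimizing total cost.
row, col = linear_sum_assignment(cost)
for m, f in zip(row, col):
    print(f"mine {m} -> factory {f} (cost {cost[m, f]:.3f})")
print("total cost:", cost[row, col].sum())
```

The returned index arrays pair each mine with exactly one factory, i.e. they describe the optimal bijection T for this toy instance.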
Moving books: the importance of the cost function
The following simple example illustrates the importance of the cost function in determining the optimal transport plan. Suppose that we have n books of equal width on a shelf (the real line), arranged in a single contiguous block. We wish to rearrange them into another contiguous block, but shifted one book-width to the right. Two obvious candidates for the optimal transport plan present themselves:
move all books one book-width to the right ("many small moves");
move the left-most book n book-widths to the right and leave all other books fixed ("one big move").
If the cost function is proportional to Euclidean distance (c(x, y) = α|x − y| for some α > 0) then these two candidates are both optimal. If, on the other hand, we choose the strictly convex cost function proportional to the square of Euclidean distance (c(x, y) = α|x − y|² for some α > 0), then the "many small moves" option becomes the unique minimizer.
Note that the above cost functions consider only the horizontal distance traveled by the books, not the horizontal distance traveled by a device used to pick each book up and move the book into position. If the latter is considered instead, then, of the two transport plans, the second is always optimal for the Euclidean distance, while, provided there are at least 3 books, the first transport plan is optimal for the squared Euclidean distance.
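A minimal numerical sketch of this comparison (not from the source; n = 5 is an arbitrary choice) makes the difference between the two cost functions concrete:

```python
# Illustrative sketch (assumed n = 5): total transport cost of the two candidate
# plans for shifting a block of books one book-width to the right, under a
# linear and a quadratic cost in the distance each book moves.
n = 5
many_small = [1] * n            # every book moves one book-width
one_big = [n] + [0] * (n - 1)   # only the left-most book moves, by n book-widths

for name, moves in [("many small moves", many_small), ("one big move", one_big)]:
    linear = sum(abs(d) for d in moves)
    quadratic = sum(d ** 2 for d in moves)
    print(f"{name}: linear cost = {linear}, quadratic cost = {quadratic}")

# Both plans cost n under the linear cost (5 and 5 here), but under the
# quadratic cost "many small moves" costs n while "one big move" costs n**2
# (5 versus 25 here), so only the first remains optimal.
```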
Hitchcock problem
The following transportation problem formulation is credited to F. L. Hitchcock:
Suppose there are m sources x_1, …, x_m for a commodity, with a(x_i) units of supply at x_i, and n sinks y_1, …, y_n for the commodity, with demand b(y_j) at y_j. If c(x_i, y_j) is the unit cost of shipment from x_i to y_j, find a flow that satisfies demand from supplies and minimizes the flow cost. This challenge in logistics was taken up by D. R. Fulkerson and in the book Flows in Networks (1962) written with L. R. Ford Jr.
Tjalling Koopmans is also credited with formulations of transport economics and allocation of resources.
Abstract formulation of the problem
Monge and Kantorovich formulations
The transportation problem as it is stated in modern or more technical literature looks somewhat different because of the development of Riemannian geometry and measure theory. The mines-factories example, simple as it is, is a useful reference point when thinking of the abstract case. In this setting, we allow the possibility that we may not wish to keep all mines and factories open for business, and allow mines to supply more than one factory, and factories to accept iron from more than one mine.
Let X and Y be two separable metric spaces such that any probability measure on X (or Y) is a Radon measure (i.e. they are Radon spaces). Let c : X × Y → [0, ∞] be a Borel-measurable function. Given probability measures μ on X and ν on Y, Monge's formulation of the optimal transportation problem is to find a transport map T : X → Y that realizes the infimum
inf { ∫_X c(x, T(x)) dμ(x) : T_*(μ) = ν },
where T_*(μ) denotes the push forward of μ by T. A map T that attains this infimum (i.e. makes it a minimum instead of an infimum) is called an "optimal transport map".
Monge's formulation of the optimal transportation problem can be ill-posed, because sometimes there is no T satisfying T_*(μ) = ν: this happens, for example, when μ is a Dirac measure but ν is not.
We can improve on this by adopting Kantorovich's formulation of the optimal transportation problem, which is to find a probability measure γ on X × Y that attains the infimum
inf { ∫_{X × Y} c(x, y) dγ(x, y) : γ ∈ Γ(μ, ν) },
where Γ(μ, ν) denotes the collection of all probability measures on X × Y with marginals μ on X and ν on Y. It can be shown that a minimizer for this problem always exists when the cost function c is lower semi-continuous and Γ(μ, ν) is a tight collection of measures (which is guaranteed for Radon spaces X and Y). (Compare this formulation with the definition of the Wasserstein metric on the space of probability measures.) A gradient descent formulation for the solution of the Monge–Kantorovich problem was given by Sigurd Angenent, Steven Haker, and Allen Tannenbaum.
Duality formula
The minimum of the Kantorovich problem is equal to
sup ( ∫_X φ(x) dμ(x) + ∫_Y ψ(y) dν(y) ),
where the supremum runs over all pairs of bounded and continuous functions φ : X → R and ψ : Y → R such that φ(x) + ψ(y) ≤ c(x, y).
Economic interpretation
The economic interpretation is clearer if signs are flipped. Let x ∈ X stand for the vector of characteristics of a worker, y ∈ Y for the vector of characteristics of a firm, and Φ(x, y) for the economic output generated by worker x matched with firm y. Setting u(x) = −φ(x) and v(y) = −ψ(y), the Monge–Kantorovich problem
rewrites:
sup { ∫ Φ(x, y) dγ(x, y) : γ ∈ Γ(μ, ν) },
which has dual:
inf { ∫_X u(x) dμ(x) + ∫_Y v(y) dν(y) : u(x) + v(y) ≥ Φ(x, y) },
where the infimum runs over bounded and continuous functions u : X → R and v : Y → R. If the dual problem has a solution, one can see that:
v(y) = sup_x { Φ(x, y) − u(x) } and u(x) = sup_y { Φ(x, y) − v(y) },
so that u(x) interprets as the equilibrium wage of a worker of type x, and v(y) interprets as the equilibrium profit of a firm of type y.
Solution of the problem
Optimal transportation on the real line
For 1 ≤ p < ∞, let P_p(R) denote the collection of probability measures on R that have finite p-th moment. Let μ, ν ∈ P_p(R) and let c(x, y) = h(x − y), where h : R → [0, ∞) is a convex function.
If μ has no atom, i.e., if the cumulative distribution function F_μ of μ is a continuous function, then F_ν^{−1} ∘ F_μ is an optimal transport map. It is the unique optimal transport map if h is strictly convex.
We have
min_{γ ∈ Γ(μ, ν)} ∫_{R × R} c(x, y) dγ(x, y) = ∫_0^1 c(F_μ^{−1}(s), F_ν^{−1}(s)) ds.
The proof of this solution appears in Rachev & Rüschendorf (1998).
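A small sketch of this result (not from the source; the Gaussian samples and sample size are arbitrary choices) uses the fact that for equally weighted empirical measures the map F_ν^{−1} ∘ F_μ simply pairs sorted samples:

```python
# Illustrative sketch (assumed data): one-dimensional optimal transport between
# equally weighted empirical measures pairs the i-th smallest source point with
# the i-th smallest target point, the discrete analogue of F_nu^{-1} o F_mu.
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(loc=0.0, scale=1.0, size=1000)   # samples of mu
y = rng.normal(loc=2.0, scale=0.5, size=1000)   # samples of nu

x_sorted, y_sorted = np.sort(x), np.sort(y)

# Average transport cost for the quadratic case h(t) = t**2.
cost = np.mean((x_sorted - y_sorted) ** 2)
print("empirical quadratic transport cost:", cost)

# Sanity check: for 1-D Gaussians with quadratic cost the exact value is
# (m1 - m2)**2 + (s1 - s2)**2 = 4 + 0.25 = 4.25, so the estimate should be close.
```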
Discrete version and linear programming formulation
In the case where the margins μ and ν are discrete, let μ_x and ν_y be the probability masses respectively assigned to x ∈ X and y ∈ Y, and let γ_{xy} be the probability of an assignment. The objective function in the primal Kantorovich problem is then
∑_{x ∈ X, y ∈ Y} γ_{xy} c_{xy},
and the constraint γ ∈ Γ(μ, ν) expresses as
∑_{y ∈ Y} γ_{xy} = μ_x for all x ∈ X
and
∑_{x ∈ X} γ_{xy} = ν_y for all y ∈ Y.
In order to input this in a linear programming problem, we need to vectorize the matrix γ by either stacking its columns or its rows; we call this operation vec. In the column-major order, the constraints above rewrite as
(1_{1 × |Y|} ⊗ I_{|X|}) vec(γ) = μ
and
(I_{|Y|} ⊗ 1_{1 × |X|}) vec(γ) = ν,
where ⊗ is the Kronecker product, 1_{n × m} is a matrix of size n × m with all entries of ones, and I_n is the identity matrix of size n. As a result, setting z = vec(γ), the linear programming formulation of the problem is
minimize vec(c)ᵀ z subject to (1_{1 × |Y|} ⊗ I_{|X|}) z = μ, (I_{|Y|} ⊗ 1_{1 × |X|}) z = ν, and z ≥ 0,
which can be readily inputted in a large-scale linear programming solver (see chapter 3.4 of Galichon (2016)).
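The following sketch (not from the source; the marginals and cost matrix are made up) builds the column-major constraint matrices with Kronecker products as above and hands the resulting linear program to SciPy:

```python
# Illustrative sketch (assumed data): the discrete Kantorovich problem as a
# linear program, with marginal constraints built via Kronecker products in
# column-major (Fortran) vectorization order.
import numpy as np
from scipy.optimize import linprog

mu = np.array([0.5, 0.3, 0.2])           # masses on X (|X| = 3)
nu = np.array([0.4, 0.6])                # masses on Y (|Y| = 2)
c = np.array([[1.0, 3.0],
              [2.0, 1.0],
              [4.0, 2.0]])               # cost c[x, y], shape |X| x |Y|

nx, ny = c.shape
A_rows = np.kron(np.ones((1, ny)), np.eye(nx))   # row sums of gamma equal mu
A_cols = np.kron(np.eye(ny), np.ones((1, nx)))   # column sums of gamma equal nu
A_eq = np.vstack([A_rows, A_cols])
b_eq = np.concatenate([mu, nu])

res = linprog(c.flatten(order="F"), A_eq=A_eq, b_eq=b_eq, bounds=(0, None))
gamma = res.x.reshape((nx, ny), order="F")
print("optimal coupling:\n", gamma)
print("optimal cost:", res.fun)
```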
Semi-discrete case
In the semi-discrete case, and is a continuous distribution over , while is a discrete distribution which assigns probability mass to site . In this case, we can see that the primal and dual Kantorovich problems respectively boil down to:
for the primal, where means that and , and:
for the dual, which can be rewritten as:
which is a finite-dimensional convex optimization problem that can be solved by standard techniques, such as gradient descent.
In the case when , one can show that the set of assigned to a particular site is a convex polyhedron. The resulting configuration is called a power diagram.
Quadratic normal case
Assume the particular case , , and where is invertible. One then has
The proof of this solution appears in Galichon (2016).
Separable Hilbert spaces
Let be a separable Hilbert space. Let denote the collection of probability measures on that have finite -th moment; let denote those elements that are Gaussian regular: if is any strictly positive Gaussian measure on and , then also.
Let , , for . Then the Kantorovich problem has a unique solution , and this solution is induced by an optimal transport map: i.e., there exists a Borel map such that
Moreover, if has bounded support, then
for -almost all for some locally Lipschitz, -concave and maximal Kantorovich potential . (Here denotes the Gateaux derivative of .)
Entropic regularization
Consider a variant of the discrete problem above, where we have added an entropic regularization term to the objective function of the primal problem
One can show that the dual regularized problem is
where, compared with the unregularized version, the "hard" constraint in the former dual () has been replaced by a "soft" penalization of that constraint (the sum of the terms). The optimality conditions in the dual problem can be expressed as
Denoting as the matrix of term , solving the dual is therefore equivalent to looking for two diagonal positive matrices and of respective sizes and , such that and . The existence of such matrices generalizes Sinkhorn's theorem and the matrices can be computed using the Sinkhorn–Knopp algorithm, which simply consists of iteratively looking for to solve , and to solve . Sinkhorn–Knopp's algorithm is therefore a coordinate descent algorithm on the dual regularized problem.
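A minimal sketch of the Sinkhorn–Knopp iteration (not from the source; the data, the regularization strength eps, and the iteration count are arbitrary choices) alternates the two marginal-matching updates described above:

```python
# Illustrative sketch (assumed data and parameters): Sinkhorn-Knopp iterations
# for entropically regularized discrete optimal transport, alternately
# rescaling the rows and columns of the kernel K = exp(-c / eps).
import numpy as np

def sinkhorn(mu, nu, c, eps=0.05, n_iter=1000):
    K = np.exp(-c / eps)                 # Gibbs kernel built from the cost matrix
    u = np.ones_like(mu)
    v = np.ones_like(nu)
    for _ in range(n_iter):
        u = mu / (K @ v)                 # enforce the row (mu) marginal
        v = nu / (K.T @ u)               # enforce the column (nu) marginal
    return u[:, None] * K * v[None, :]   # regularized optimal coupling

mu = np.array([0.5, 0.3, 0.2])
nu = np.array([0.4, 0.6])
c = np.array([[1.0, 3.0],
              [2.0, 1.0],
              [4.0, 2.0]])
gamma = sinkhorn(mu, nu, c)
print(gamma, gamma.sum(axis=1), gamma.sum(axis=0))
```

Each update solves one block of dual variables exactly while the other is held fixed, which is why the loop can be read as coordinate descent on the dual regularized problem.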
Applications
The Monge–Kantorovich optimal transport has found applications in a wide range of fields. Among them are:
Image registration and warping
Reflector design
Retrieving information from shadowgraphy and proton radiography
Seismic tomography and reflection seismology
The broad class of economic modelling that involves gross substitutes property (among others, models of matching and discrete choice).
| Mathematics | Optimization | null |
882736 | https://en.wikipedia.org/wiki/Project%20Gemini | Project Gemini | Project Gemini was the second United States human spaceflight program to fly. Conducted after the first American crewed space program, Project Mercury, while the Apollo program was still in early development, Gemini was conceived in 1961 and concluded in 1966. The Gemini spacecraft carried a two-astronaut crew. Ten Gemini crews and 16 individual astronauts flew low Earth orbit (LEO) missions during 1965 and 1966.
Gemini's objective was the development of space travel techniques to support the Apollo mission to land astronauts on the Moon. In doing so, it allowed the United States to catch up and overcome the lead in human spaceflight capability the Soviet Union had obtained in the early years of the Space Race, by demonstrating mission endurance up to just under 14 days, longer than the eight days required for a round trip to the Moon; methods of performing extravehicular activity (EVA) without tiring; and the orbital maneuvers necessary to achieve rendezvous and docking with another spacecraft. This left Apollo free to pursue its prime mission without spending time developing these techniques.
All Gemini flights were launched from Launch Complex 19 (LC-19) at Cape Kennedy Air Force Station in Florida. Their launch vehicle was the Titan II GLV, a modified intercontinental ballistic missile. Gemini was the first program to use the newly built Mission Control Center at the Houston Manned Spacecraft Center for flight control. The project also used the Agena target vehicle, a modified Atlas-Agena upper stage, used to develop and practice orbital rendezvous and docking techniques.
The astronaut corps that supported Project Gemini included the "Mercury Seven", "The New Nine", and "The Fourteen". During the program, three astronauts died in air crashes during training, including both members of the prime crew for Gemini 9. The backup crew flew this mission.
Gemini was robust enough that the United States Air Force planned to use it for the Manned Orbital Laboratory (MOL) program, which was later canceled. Gemini's chief designer, Jim Chamberlin, also made detailed plans for cislunar and lunar landing missions in late 1961. He believed Gemini spacecraft could fly in lunar operations before Project Apollo, and cost less. NASA's administration did not approve those plans. In 1969, McDonnell Douglas proposed a "Big Gemini" that could have been used to shuttle up to 12 astronauts to the planned space stations in the Apollo Applications Project (AAP). The only AAP project funded was Skylab (the first American space station) – which used existing spacecraft and hardware – thereby eliminating the need for Big Gemini.
Pronunciation
The constellation for which the project was named is commonly pronounced with the final syllable rhyming with eye. However, staff of the Manned Spacecraft Center, including the astronauts, tended to pronounce the name with the final syllable rhyming with knee. NASA's public affairs office then issued a statement in 1965 declaring "Jeh'-mih-nee" the "official" pronunciation. Gus Grissom, acting as Houston capsule communicator when Ed White performed his spacewalk on Gemini 4, is heard on flight recordings pronouncing the spacecraft's call sign "Jeh-mih-nee 4", and the NASA pronunciation is used in the 2018 film First Man.
Program origins and objectives
The Apollo program was conceived in early 1960 as a three-man spacecraft to follow Project Mercury. Jim Chamberlin, the head of engineering at the Space Task Group (STG), was assigned in February 1961 to start working on a bridge program between Mercury and Apollo. He presented two initial versions of a two-man spacecraft, then designated Mercury Mark II, at a NASA retreat at Wallops Island in March 1961. Scale models were shown in July 1961 at the McDonnell Aircraft Corporation's offices in St. Louis.
After Apollo was chartered to land men on the Moon by President John F. Kennedy on May 25, 1961, it became evident to NASA officials that a follow-on to the Mercury program was required to develop certain spaceflight capabilities in support of Apollo. NASA approved the two-man / two-vehicle program rechristened Project Gemini (Latin for "twins"), in reference to the third constellation of the Zodiac with its twin stars Castor and Pollux, on December 7, 1961. McDonnell Aircraft was contracted to build it on December 22, 1961. The program was publicly announced on January 3, 1962, with these major objectives:
To demonstrate endurance of humans and equipment in spaceflight for extended periods, at least eight days required for a Moon landing, to a maximum of two weeks
To effect rendezvous and docking with another vehicle, and to maneuver the combined spacecraft using the propulsion system of the target vehicle
To demonstrate Extra-Vehicular Activity (EVA), or space-"walks" outside the protection of the spacecraft, and to evaluate the astronauts' ability to perform tasks there
To perfect techniques of atmospheric reentry and touchdown at a pre-selected location on land
Team
Chamberlin designed the Gemini capsule, which carried a crew of two. He was previously the chief aerodynamicist on Avro Canada's CF-105 Arrow fighter interceptor program. Chamberlin joined NASA along with 25 senior Avro engineers after cancellation of the Canadian Arrow program, and became head of the U.S. Space Task Group's engineering division in charge of Gemini. The prime contractor was McDonnell Aircraft Corporation, which was also the prime contractor for the Project Mercury capsule.
Astronaut Gus Grissom was heavily involved in the development and design of the Gemini spacecraft. What other Mercury astronauts dubbed "Gusmobile" was so designed around Grissom's 5'6" body that, when NASA discovered in 1963 that 14 of 16 astronauts would not fit in the spacecraft, the interior had to be redesigned. Grissom wrote in his posthumous 1968 book Gemini! that the realization of Project Mercury's end and the unlikelihood of his having another flight in that program prompted him to focus all his efforts on the upcoming Gemini program.
The Gemini program was managed by the Manned Spacecraft Center, located in Houston, Texas, under direction of the Office of Manned Space Flight, NASA Headquarters, Washington, D.C. Dr. George E. Mueller, Associate Administrator of NASA for Manned Space Flight, served as acting director of the Gemini program. William C. Schneider, Deputy Director of Manned Space Flight for Mission Operations served as mission director on all Gemini flights beginning with Gemini 6A.
Guenter Wendt was a McDonnell engineer who supervised launch preparations for both the Mercury and Gemini programs and would go on to do the same when the Apollo program launched crews. His team was responsible for completion of the complex pad close-out procedures just prior to spacecraft launch, and he was the last person the astronauts would see prior to closing the hatch. The astronauts appreciated his taking absolute authority over, and responsibility for, the condition of the spacecraft and developed a good-humored rapport with him.
Spacecraft
NASA selected McDonnell Aircraft, which had been the prime contractor for the Project Mercury capsule, in 1961 to build the Gemini capsule, the first of which was delivered in 1963. The spacecraft was long and wide, with a launch weight varying from .
The Gemini crew capsule (referred to as the Reentry Module) was essentially an enlarged version of the Mercury capsule. Unlike Mercury, the retrorockets, electrical power, propulsion systems, oxygen, and water were located in a detachable Adapter Module behind the Reentry Module which would burn up on reentry. A major design improvement in Gemini was to locate all internal spacecraft systems in modular components, which could be independently tested and replaced when necessary, without removing or disturbing other already tested components.
Reentry module
Many components in the capsule itself were reachable through their own small access doors. Unlike Mercury, Gemini used completely solid-state electronics, and its modular design made it easy to repair.
Gemini's emergency launch escape system did not use an escape tower powered by a solid-fuel rocket, but instead used aircraft-style ejection seats. The tower was heavy and complicated, and NASA engineers reasoned that they could do away with it as the Titan II's hypergolic propellants would burn immediately on contact. A Titan II booster explosion had a smaller blast effect and flame than on the cryogenically fueled Atlas and Saturn. Ejection seats were sufficient to separate the astronauts from a malfunctioning launch vehicle. At higher altitudes, where the ejection seats could not be used, the astronauts would return to Earth inside the spacecraft, which would separate from the launch vehicle.
The main proponent of using ejection seats was Chamberlin, who had never liked the Mercury escape tower and wished to use a simpler alternative that would also reduce weight. He reviewed several films of Atlas and Titan II ICBM failures, which he used to estimate the approximate size of a fireball produced by an exploding launch vehicle and from this he gauged that the Titan II would produce a much smaller explosion, thus the spacecraft could get away with ejection seats.
Maxime Faget, the designer of the Mercury LES, was on the other hand less-than-enthusiastic about this setup. Aside from the possibility of the ejection seats seriously injuring the astronauts, they would also only be usable for about 40 seconds after liftoff, by which point the booster would be attaining Mach 1 speed and ejection would no longer be possible. He was also concerned about the astronauts being launched through the Titan's exhaust plume if they ejected in-flight and later added, "The best thing about Gemini was that they never had to make an escape."
The Gemini ejection system was never tested with the Gemini cabin pressurized with pure oxygen, as it was prior to launch. In January 1967, the fatal Apollo 1 fire demonstrated that pressurizing a spacecraft with pure oxygen created an extremely dangerous fire hazard. In a 1997 oral history, astronaut Thomas P. Stafford commented on the Gemini 6 launch abort in December 1965, when he and command pilot Wally Schirra nearly ejected from the spacecraft:
Gemini was the first astronaut-carrying spacecraft to include an onboard computer, the Gemini Guidance Computer, to facilitate management and control of mission maneuvers. This computer, sometimes called the Gemini Spacecraft On-Board Computer (OBC), was very similar to the Saturn Launch Vehicle Digital Computer. The Gemini Guidance Computer weighed . Its core memory had 4096 addresses, each containing a 39-bit word composed of three 13-bit "syllables". All numeric data was 26-bit two's-complement integers (sometimes used as fixed-point numbers), either stored in the first two syllables of a word or in the accumulator. Instructions (always with a 4-bit opcode and 9 bits of operand) could go in any syllable.
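As a small illustration of the numeric format described above (not from the source; the helper function and sample values are hypothetical), a 26-bit two's-complement word maps onto a signed integer as follows:

```python
# Illustrative sketch (hypothetical helper): interpreting a raw 26-bit
# two's-complement bit pattern as a signed integer, matching the word format
# attributed to the Gemini on-board computer above.
def from_twos_complement_26(bits: int) -> int:
    """Convert a raw 26-bit pattern (0 <= bits < 2**26) to a signed integer."""
    return bits - (1 << 26) if bits & (1 << 25) else bits

print(from_twos_complement_26(0b00000000000000000000000101))   #  5
print(from_twos_complement_26((1 << 26) - 5))                   # -5
print(from_twos_complement_26(1 << 25))                         # -33554432 (most negative value)
```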
Unlike Mercury, Gemini used in-flight radar and an artificial horizon, similar to those used in the aviation industry. Like Mercury, Gemini used a joystick to give the astronauts manual control of yaw, pitch, and roll. Gemini added control of the spacecraft's translation (forward, backward, up, down, and sideways) with a pair of T-shaped handles (one for each crew member). Translation control enabled rendezvous and docking, and crew control of the flight path. The same controller types were also used in the Apollo spacecraft.
The original intention for Gemini was to land on solid ground instead of at sea, using a Rogallo wing rather than a parachute, with the crew seated upright controlling the forward motion of the craft. To facilitate this, the airfoil did not attach just to the nose of the craft, but to an additional attachment point for balance near the heat shield. This cord was covered by a strip of metal which ran between the twin hatches. This design was ultimately dropped, and parachutes were used to make a sea landing as in Mercury. The capsule was suspended at an angle closer to horizontal, so that a side of the heat shield contacted the water first. This eliminated the need for the landing bag cushion used in the Mercury capsule.
Adapter module
The adapter module in turn was separated into a Retro module and an Equipment module.
Retro module
The Retro module contained four solid-fuel TE-M-385 Star-13E retrorockets, each spherical in shape except for its rocket nozzle, which were structurally attached to two beams that reached across the diameter of the retro module, crossing at right angles in the center. Re-entry began with the retrorockets firing one at a time. Abort procedures at certain periods during lift-off would cause them to fire at the same time, thrusting the Descent module away from the Titan rocket.
Equipment module
Gemini was equipped with an Orbit Attitude and Maneuvering System (OAMS), containing sixteen thrusters for translation control in all three perpendicular axes (forward/backward, left/right, up/down), in addition to attitude control (pitch, yaw, and roll angle orientation) as in Mercury. Translation control allowed changing orbital inclination and altitude, necessary to perform space rendezvous with other craft, and docking with the Agena Target Vehicle (ATV), with its own rocket engine which could be used to perform greater orbit changes.
Early short-duration missions had their electrical power supplied by batteries; later endurance missions used the first fuel cells in crewed spacecraft.
Gemini was in some regards more advanced than Apollo because the latter program began almost a year earlier. It became known as a "pilot's spacecraft" due to its assortment of jet fighter-like features, in no small part due to Gus Grissom's influence over the design, and it was at this point where the US crewed space program clearly began showing its superiority over that of the Soviet Union with long duration flight, rendezvous, and extravehicular capability. The Soviet Union during this period was developing the Soyuz spacecraft intended to take cosmonauts to the Moon, but political and technical problems began to get in the way, leading to the ultimate end of their crewed lunar program.
Launch vehicle
The Titan II debuted in 1962 as the Air Force's second-generation ICBM to replace the Atlas. By using hypergolic fuels, it could be stored longer and be readied for launch more easily, in addition to being a simpler design with fewer components. The only caveat was that the propellant mix (nitrogen tetroxide and hydrazine) was extremely toxic compared to the Atlas' liquid oxygen/RP-1. However, the Titan had considerable difficulty being man-rated due to early problems with pogo oscillation. The launch vehicle used a radio guidance system that was unique to launches from Cape Kennedy.
Astronauts
Deke Slayton, as director of flight crew operations, had primary responsibility for assigning crews for the Gemini program. Each flight had a primary crew and backup crew, and the backup crew would rotate to primary crew status three flights later. Slayton intended for first choice of mission commands to be given to the four remaining active astronauts of the Mercury Seven: Alan Shepard, Grissom, Cooper, and Schirra. (John Glenn had retired from NASA in January 1964 and Scott Carpenter, who was blamed by some in NASA management for the problematic reentry of Aurora 7, was on leave to participate in the Navy's SEALAB project and was grounded from flight in July 1964 due to an arm injury sustained in a motorbike accident. Slayton himself continued to be grounded due to a heart problem.) As for Shepard, an inner-ear disorder (Ménière's disease) diagnosed during Gemini training effectively grounded him as well and removed him from the flight roster; he never flew on Gemini, returning to flight only after corrective surgery, as commander of Apollo 14.
Titles used for the left-hand (command) and right-hand (pilot) seat crew positions were taken from the U.S. Air Force pilot ratings, Command Pilot and Pilot. Sixteen astronauts flew on 10 crewed Gemini missions:
Crew selection
In late 1963, Slayton selected Shepard and Stafford for Gemini 3, McDivitt and White for Gemini 4, and Schirra and Young for Gemini 5 (which was to be the first Agena rendezvous mission). The backup crew for Gemini 3 was Grissom and Borman, who were also slated for Gemini 6, to be the first long-duration mission. Finally Conrad and Lovell were assigned as the backup crew for Gemini 4.
Delays in the production of the Agena Target Vehicle caused the first rearrangement of the crew rotation. The Schirra and Young mission was bumped to Gemini 6 and they became the backup crew for Shepard and Stafford. Grissom and Borman then had their long-duration mission assigned to Gemini 5.
The second rearrangement occurred when Shepard developed Ménière's disease, an inner ear problem. Grissom was then moved to command Gemini 3. Slayton felt that Young was a better personality match with Grissom and switched Stafford and Young. Finally, Slayton tapped Cooper to command the long-duration Gemini 5. Again for reasons of compatibility, he moved Conrad from backup commander of Gemini 4 to pilot of Gemini 5, and Borman to backup command of Gemini 4. Finally he assigned Armstrong and Elliot See to be the backup crew for Gemini 5.
The third rearrangement of crew assignment occurred when Slayton felt that See wasn't up to the physical demands of EVA on Gemini 8. He reassigned See to be the prime commander of Gemini 9 and put Scott as pilot of Gemini 8 and Charles Bassett as the pilot of Gemini 9.
The fourth and final rearrangement of the Gemini crew assignment occurred after the deaths of See and Bassett when their trainer jet crashed, coincidentally into a McDonnell building which held their Gemini 9 capsule in St. Louis. The backup crew of Stafford and Cernan was then moved up to the new prime crew of Gemini 9A. Lovell and Aldrin were moved from being the backup crew of Gemini 10 to be the backup crew of Gemini 9. This cleared the way through the crew rotation for Lovell and Aldrin to become the prime crew of Gemini 12.
Along with the deaths of Grissom, White, and Roger Chaffee in the fire of Apollo 1, this final arrangement helped determine the makeup of the first seven Apollo crews, and who would be in position for a chance to be the first to walk on the Moon.
Missions
In April 1964 and January 1965, two Gemini missions were flown without crews to test systems and the heat shield. These were followed by 10 flights with crews in 1965 and 1966. All were launched by Titan II launch vehicles. Some highlights from the Gemini program:
Gemini 3 (Grissom and Young) was the first crewed Gemini mission, first multi-crewed US mission, and the first crewed spacecraft to use thrusters to change its orbit.
On Gemini 4, Ed White became the first American to make an extravehicular activity (EVA, or "spacewalk") on June 3, 1965.
Gemini 5 (August 21–29, 1965) demonstrated the 8-day endurance necessary for an Apollo lunar mission with the first use of fuel cells to generate its electrical power.
Gemini 6A accomplished the first space rendezvous with its sister craft Gemini 7 in December 1965, with Gemini 7 setting a 14-day endurance record for its flight.
Gemini 8 achieved the first space docking with an uncrewed Agena target vehicle.
Gemini 10 established that radiation at high altitude was not a problem, further demonstrated the ability to rendezvous with a passive object, and was the first Gemini mission to fire the Agena's own rocket. Michael Collins would be the first person to meet another spacecraft in orbit, during his second successful EVA.
Gemini 11 achieved the first direct-ascent (first-orbit) rendezvous with an Agena Target Vehicle, docking with it 1 hour 34 minutes after launch. In September 1966 it set a crewed Earth orbital altitude record, using the Agena target vehicle's propulsion system. This record was broken in September 2024 by the Polaris Dawn mission.
On Gemini 12, Edwin "Buzz" Aldrin became the first space traveler to prove that useful work (EVA) could be done outside a spacecraft without life-threatening exhaustion, due to newly implemented footholds, handholds, and scheduled rest periods.
Rendezvous in orbit is not a straightforward maneuver. Should a spacecraft increase its speed to catch up with another, the result is that it goes into a higher and slower orbit, and the distance between them increases. The right procedure is first to go to a lower orbit, which increases relative speed, and then approach the target spacecraft from below, decreasing orbital speed to meet it. To practice these maneuvers, special rendezvous and docking simulators were built for the astronauts.
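A back-of-the-envelope sketch of this reasoning (not from the source; the 300 km and 250 km altitudes are arbitrary example orbits, and standard values are assumed for Earth's gravitational parameter and radius) shows how dropping to a lower circular orbit shortens the period and lets a chaser gain on its target:

```python
# Illustrative sketch (assumed example orbits): Kepler's third law for circular
# orbits; a lower orbit has a shorter period, so a chaser parked below its
# target gains angular position on every revolution.
import math

MU_EARTH = 3.986004418e14     # Earth's gravitational parameter, m^3/s^2
R_EARTH = 6_371e3             # mean Earth radius, m

def period(altitude_m: float) -> float:
    """Period of a circular orbit at the given altitude above the surface."""
    r = R_EARTH + altitude_m
    return 2 * math.pi * math.sqrt(r ** 3 / MU_EARTH)

t_target = period(300e3)      # hypothetical target orbit at 300 km
t_chaser = period(250e3)      # hypothetical chaser orbit 50 km lower
print(f"target period: {t_target / 60:.1f} min")
print(f"chaser period: {t_chaser / 60:.1f} min")
print(f"chaser gains about {(t_target - t_chaser) / t_target * 360:.1f} deg of phase per chaser orbit")
```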
Gemini-Titan launches and serial numbers
The Gemini-Titan II launch vehicle was adapted by NASA from the U.S. Air Force Titan II ICBM. (Similarly, the Mercury-Atlas launch vehicle had been adapted from the USAF Atlas missile.) The Gemini-Titan II rockets were assigned Air Force serial numbers, which were painted in four places on each Titan II (on opposite sides on each of the first and second stages). USAF crews maintained Launch Complex 19 and prepared and launched all of the Gemini-Titan II launch vehicles. Data and experience operating the Titans was of value to both the U.S. Air Force and NASA.
The USAF serial numbers assigned to the Gemini-Titan launch vehicles are given in the tables above. Fifteen Titan IIs were ordered in 1962 so the serial is "62-12XXX", but only "12XXX" is painted on the Titan II. The order for the last three of the 15 launch vehicles was canceled on July 30, 1964, and they were never built. Serial numbers were, however, assigned to them prospectively: 12568 - GLV-13; 12569 - GLV-14; and 12570 - GLV-15.
Program cost
From 1962 to 1967, Gemini cost $1.3 billion in 1967 dollars. In January 1969, a NASA report to the US Congress estimating the costs for Mercury, Gemini, and Apollo (through the first crewed Moon landing) included $1.2834 billion for Gemini: $797.4 million for spacecraft, $409.8 million for launch vehicles, and $76.2 million for support.
Current location of hardware
Spacecraft
Gemini 1: Intentionally disintegrated upon re-entry to the atmosphere
Gemini 2: Air Force Space and Missile Museum, Cape Canaveral Air Force Station, Florida
Gemini III: Grissom Memorial, Spring Mill State Park, Mitchell, Indiana
Gemini IV: National Air and Space Museum, Washington, D.C.
Gemini V: Johnson Space Center, NASA, Houston, Texas
Gemini VI: Stafford Air & Space Museum, Weatherford, Oklahoma
Gemini VII: Steven F. Udvar-Hazy Center, Chantilly, Virginia
Gemini VIII: Armstrong Air and Space Museum, Wapakoneta, Ohio
Gemini IX: Kennedy Space Center, NASA, Merritt Island, Florida
Gemini X: Kansas Cosmosphere and Space Center, Hutchinson, Kansas
Gemini XI: California Museum of Science and Industry, Los Angeles, California
Gemini XII: Adler Planetarium, Chicago, Illinois
Trainers and boilerplates
Gemini 3A (2411): St. Louis Science Center, St. Louis, Missouri.
Gemini MOL-B (2411): National Museum of the United States Air Force, Wright-Patterson Air Force Base, Dayton, Ohio
Gemini Mission Simulator (5143): U.S. Space & Rocket Center, Huntsville, Alabama
Gemini Trainer: Discovery Center, Fresno, California
Gemini Trainer: Kentucky Science Center, Louisville, Kentucky
Gemini Water Egress Trainer: Texas Air Museum, Slaton, Texas
Gemini Trainer: Kalamazoo Air Museum, Kalamazoo, Michigan
Trainer: Pate Museum of Transportation, Fort Worth, Texas
GATV (6165): National Air and Space Museum, Washington, D.C. (not on display)
El Kabong: Kalamazoo Air Museum, Kalamazoo, Michigan
MSC 312: Private residence, Holden, MA
MSC 313: Private residence, San Jose, California
Paresev 1A (Rogallo Test Vehicle): Steven F. Udvar-Hazy Center, Chantilly, Virginia
TTV-1 (6873) paraglider capsule: Steven F. Udvar-Hazy Center, Chantilly, Virginia
TTV-2 paraglider capsule: Museum of Scotland, Edinburgh
Gemini boilerplate: Air Force Space and Missile Museum, Cape Canaveral Space Force Station, Florida
Gemini boilerplate: Air Force Space and Missile Museum, Cape Canaveral Space Force Station, Florida
Ingress/Egress Trainer: U.S. Space & Rocket Center, Huntsville, Alabama
MSC-307: USS Hornet Museum, former NAS Alameda, Alameda, California
Mockups and models
A number of detailed Gemini models and mockups are on display:
Gemini Model - Intrepid Sea, Air & Space Museum, New York, NY
Gemini Model - The Discovery Center, Fresno, CA
Gemini Model (built for From the Earth to the Moon)- Evergreen Aviation Museum, McMinnville, Oregon
Gemini Sit-in Model - KSC Visitors Center, Kennedy Space Center FL
Gemini Model - Science Museum Oklahoma, Oklahoma City, OK
Gemini Model (made by McDonnell) - Boeing Prologue Room, St. Louis, MO
Gemini Model (made by McDonnell) - Museum of Science & Industry, Chicago, IL
Gemini Sit-in Model - Neil Armstrong Air and Space Museum, Wapakoneta, OH
Gemini Mockup (winner of the 1967 Revell contest) - Oregon Museum of Science and Industry, Portland, OR
Gemini Model (made by McDonnell) - San Diego Air & Space Museum, San Diego, CA
Gemini Model - Stafford Air & Space Museum, Weatherford, OK
Proposed extensions and applications
Advanced Gemini
McDonnell Aircraft, the main contractor for Mercury and Gemini, was also one of the original bidders on the prime contract for Apollo, but lost out to North American Aviation. McDonnell later sought to extend the Gemini program by proposing a derivative which could be used to fly a cislunar mission and even achieve a crewed lunar landing earlier and at less cost than Apollo, but these proposals were rejected by NASA.
A range of applications were considered for Advanced Gemini missions, including military flights, space station crew and logistics delivery, and lunar flights. The Lunar proposals ranged from reusing the docking systems developed for the Agena Target Vehicle on more powerful upper stages such as the Centaur, which could propel the spacecraft to the Moon, to complete modifications of the Gemini to enable it to land on the lunar surface. Its applications would have ranged from crewed lunar flybys before Apollo was ready, to providing emergency shelters or rescue for stranded Apollo crews, or even replacing the Apollo program.
Some of the Advanced Gemini proposals used "off-the-shelf" Gemini spacecraft, unmodified from the original program, while others featured modifications to allow the spacecraft to carry more crew, dock with space stations, visit the Moon, and perform other mission objectives. Other modifications considered included the addition of wings or a parasail to the spacecraft, in order to enable it to make a horizontal landing.
Big Gemini
Big Gemini (or "Big G") was another proposal by McDonnell Douglas, made in August 1969. It was intended to provide large-capacity, all-purpose access to space, including the kinds of missions that were ultimately flown by Apollo and, later, the Space Shuttle.
The study was performed to generate a preliminary definition of a logistics spacecraft derived from Gemini that would be used to resupply an orbiting space station. Land landing at a preselected site, refurbishment, and reuse were design requirements. Two baseline spacecraft were defined: a nine-man minimum-modification version of the Gemini B, called Min-Mod Big G, and a 12-man advanced concept with the same exterior geometry but new, state-of-the-art subsystems, called Advanced Big G. Three launch vehicles were investigated for use with the spacecraft: the Saturn IB, the Titan IIIM, and the Saturn INT-20 (S-IC/S-IVB).
Military applications
The Air Force had an interest in the Gemini system, and decided to use its own modification of the spacecraft as the crew vehicle for the Manned Orbital Laboratory. To this end, the Gemini 2 spacecraft was refurbished and flown again atop a mockup of the MOL, sent into space by a Titan IIIC. This was the first time a spacecraft went into space twice.
The USAF also considered adapting the Gemini spacecraft for military applications, such as crude observation of the ground (no specialized reconnaissance camera could be carried) and practicing rendezvous with suspicious satellites. This project was called Blue Gemini. The USAF did not like the fact that Gemini would have to be recovered by the US Navy, so it intended for Blue Gemini eventually to use the paraglider airfoil and land on three skids, features carried over from the original design of Gemini.
At first some within NASA welcomed sharing of the cost with the USAF, but it was later agreed that NASA was better off operating Gemini by itself. Blue Gemini was canceled in 1963 by Secretary of Defense Robert McNamara, who decided the NASA Gemini flights could conduct necessary military experiments. MOL was canceled by Secretary of Defense Melvin Laird in 1969, when it was determined that uncrewed spy satellites could perform the same functions much more cost-effectively.
In media
Two Gemini capsules (codenamed "Jupiter" instead of "Gemini") are featured in the plot of the 1967 James Bond film You Only Live Twice.
A modified one-person Gemini capsule is used to send an astronaut (played by James Caan) to the Moon in the 1968 film Countdown.
Gemini missions 4, 8 and 12 feature in the first episode of the HBO series From the Earth to the Moon.
Like other US space programs, Gemini was covered in the 1985 PBS series "Spaceflight".
Some aspects of the Gemini program relating to astronaut Neil Armstrong were touched upon in the 2018 film First Man.
Many episodes of the television show I Dream of Jeannie featured launch pad and launch footage of various Gemini missions.
Gemini is a layer-7 (application layer) internet protocol named after the Gemini missions. Its non-standard port number, 1965, is a reference to the year of the program's first crewed flight.
| Technology | Programs and launch sites | null |
883189 | https://en.wikipedia.org/wiki/Massless%20particle | Massless particle | In particle physics, a massless particle is an elementary particle whose invariant mass is zero. At present the only confirmed massless particle is the photon.
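For context, the "invariant mass" referred to here is the mass appearing in the standard relativistic energy-momentum relation (a textbook result, included only as an illustration):

$$E^{2} = (pc)^{2} + (mc^{2})^{2}, \qquad m = 0 \;\Rightarrow\; E = pc.$$

A massless particle therefore carries energy entirely through its momentum and, in vacuum, always travels at the speed of light.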
Other particles and quasiparticles
Standard Model gauge bosons
The photon (carrier of electromagnetism) is one of two known gauge bosons believed to be massless; the other is the gluon (carrier of the strong force). The only other confirmed gauge bosons, the W and Z bosons, are known from experiment to be extremely massive. Of the gauge bosons, only the photon has been experimentally confirmed to be massless.
Although there are compelling theoretical reasons to believe that gluons are massless, they can never be observed as free particles due to being confined within hadrons, and hence their presumed lack of rest mass cannot be confirmed by any feasible experiment.
Hypothetical graviton
The graviton is a hypothetical tensor boson proposed to be the carrier of gravitational force in some quantum theories of gravity, but no such theory has been successfully incorporated into the Standard Model, so the Standard Model neither predicts any such particle nor requires it, and no gravitational quantum particle has been indicated by experiment. Whether a graviton would be massless if it existed is likewise an open question.
Quasiparticles
The Weyl fermions discovered in 2015 are also expected to be massless, but they are not actual particles: they are quasiparticles, collective motions within the crystal lattices of certain materials that show particle-like behavior, much as phonons do. No real particle that is a Weyl fermion has been found to exist, and there is no compelling theoretical reason that requires one to exist; no known fundamental particle is of the Weyl type.
Neutrinos were originally thought to be massless – and possibly Weyl fermions. However, because neutrinos change flavour as they travel, at least two of the types of neutrinos must have mass (and cannot be Weyl fermions).
The discovery of this phenomenon, known as neutrino oscillation, led to Canadian scientist Arthur B. McDonald and Japanese scientist Takaaki Kajita sharing the 2015 Nobel Prize in Physics.
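As a brief sketch of why flavour change implies mass (using the standard two-flavour vacuum-oscillation approximation, not a result specific to the experiments cited above), the probability that a neutrino produced with flavour α is detected with flavour β after travelling a distance L with energy E is

$$P(\nu_\alpha \to \nu_\beta) \approx \sin^{2}(2\theta)\,\sin^{2}\!\left(\frac{\Delta m^{2} L}{4E}\right) \qquad (\hbar = c = 1),$$

where θ is the mixing angle and Δm² is the difference of the squared masses of the two mass eigenstates. If all neutrino masses were exactly zero, Δm² would vanish and the oscillation probability would be identically zero, so the observed flavour change requires at least one non-zero mass splitting.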
| Physical sciences | Subatomic particles: General | Physics |
883363 | https://en.wikipedia.org/wiki/Dicyemida | Dicyemida | Dicyemida, also known as Rhombozoa, is a phylum of tiny parasites that live in the renal appendages of cephalopods.
Taxonomy
Classification is controversial. Traditionally, dicyemids have been grouped with the Orthonectida in the phylum Mesozoa, and molecular evidence from 2017 appears to confirm this grouping.
However, other molecular phylogenies have placed the dicyemids closer to the roundworms. Additional molecular evidence suggests that this phylum is derived from the Lophotrochozoa.
The phylum (or class if retained within Mesozoa) contains three families, Conocyemidae, Dicyemidae and Kantharellidae, which have sometimes been further grouped into orders. Authors who treat Dicyemida as an order and separate the family Conocyemidae into a different order (Heterocyemida) prefer 'Rhombozoa' as a more inclusive name for the phylum or class.
Anatomy
Adult dicyemids are minute, and they can be easily viewed through a light microscope. They display eutely, a condition in which each adult individual of a given species has the same number of cells, making cell number a useful identifying character. Dicyemids lack respiratory, circulatory, excretory, digestive, and nervous systems.
The organism's structure is simple: a single axial cell is surrounded by a jacket of twenty to thirty ciliated cells. The anterior region of the organism is termed a calotte and functions to attach the parasite to folds on the surface of its host's renal appendages. When more than one dicyemid species exists within the same host, each has a distinctly shaped calotte, ranging from conical to disk-shaped or cap-shaped.
To date, there has been no recorded case of two dicyemid species with exactly the same calotte shape occurring in the same host. Species with similar or even identical calottes have occasionally been found, but never within the same host. Because calotte shape and size vary consistently between species (even within a single host), observable competition between the multiple dicyemid species for habitat or other resources is very rare. Calotte shape determines where a dicyemid can comfortably live: in general, dicyemids with conical calottes fit best within the folds of the kidneys, while those with rounded (disk- or cap-shaped) calottes attach more easily to the smooth surfaces of the kidneys. This segregation of habitats allows multiple dicyemid species to coexist comfortably within the same host without competing for space or resources, as each occupies a different ecological niche.
Habitat
While most dicyemid species have been found to prefer particular cephalopod hosts, no species is entirely exclusive in its preferences. It is also almost unheard of for an infected host to carry only a single dicyemid species: a cephalopod infected with one species will usually be found to contain organisms with a variety of calotte shapes, meaning it is infected with several different species. On the occasions when similar (but not identical) calotte shapes are present within one host's body, one species usually ends up dominating the other, indicating that it has adapted more readily to the environment within the host; this occurrence, however, is very rare and has been observed only a handful of times. In a study of octopuses, dicyemids with similarly shaped calottes were found to coexist only rarely in the same individual host, suggesting a strong level of competition for habitat.
In Japan, two dicyemid species, D. misakiense and D. japonicum, have often been discovered living in the same host. In 1938, when the two species were initially discovered, scientists did not classify them as separate species because of their extensive morphological similarities; in fact, the only difference scientists were able to observe was in the shape of their calottes. The idea that D. misakiense and D. japonicum are two different species is still controversial among scientific groups. Some scientists have speculated that when closely related dicyemid species coexist in the same region, as in the case of D. misakiense and D. japonicum, competition for habitat causes them to evolve two distinct calotte shapes.
Life cycle
Dicyemids exist in both asexual and sexual forms. The former predominate in juvenile and immature hosts, and the latter in mature hosts. The asexual stage is termed a nematogen; it produces vermiform larvae within the axial cell. These mature through direct development to form more nematogens. Nematogens proliferate in young cephalopods, filling the kidneys.
As the infection ages, perhaps as the nematogens reach a certain density, vermiform larvae mature to form rhombogens, the sexual life stage, rather than more nematogens. This sort of density-responsive reproductive cycle is reminiscent of the asexual reproduction of sporocysts or rediae in larval trematode infections of snails. As with the trematode asexual stages, a few nematogens can usually be found in older hosts. Their function may be to increase the population of the parasite to keep up with the growth of the host.
Rhombogens contain hermaphroditic gonads developed within the axial cell. These gonads, more correctly termed infusorigens, self-fertilise to produce infusoriform larvae. These larvae possess a very distinctive morphology, swimming about with ciliated rings that resemble headlights. It has long been assumed that this sexually produced infusoriform, which is released when the host eliminates urine from the kidneys, is both the dispersal and the infectious stage. The mechanism of infection, however, remains unknown, as are the effects, if any, of dicyemids on their hosts.
Some part of the dicyemid life cycle may be tied to temperate benthic environments, where they occur in greatest abundance. While dicyemids have occasionally been found in the tropics, the infection rates there are typically quite low, and many potential host species are not infected. Dicyemids have never been reported from truly oceanic cephalopods, which instead host a parasitic ciliate fauna. Most dicyemid species are recovered from only one or two host species. While not strictly host specific, most dicyemids are only found in a few closely related hosts.
| Biology and health sciences | Spiralia | Animals |
883468 | https://en.wikipedia.org/wiki/Grove%20%28nature%29 | Grove (nature) | A grove is a small group of trees with minimal or no undergrowth, such as a sequoia grove, or a small orchard planted for the cultivation of fruits or nuts. Other words for groups of trees include woodland, woodlot, thicket, and stand. A grove may be called an 'arbour' or 'arbor' (see spelling differences), which is not to be confused with the garden structure pergola, which also sometimes goes under that name.
Name
The main meaning of grove is a group of trees that grow close together, generally without many bushes or other plants underneath. It is an old word in the English language, with records of its use dating as far back as the late 9th century as Old English grāf, grāfa ('grove; copse') and subsequently Middle English grove, grave; these derive from Proto-West Germanic *graib, *graibō ('branch, group of branches, thicket'), from Proto-Germanic *graibaz, *graibô ('branch, fork').
It is related to Old English grǣf, grǣfe ('brushwood; thicket; copse'), Old English grǣfa ('thicket'), dialectal Norwegian greive ('ram with splayed horns'), dialectal Norwegian greivlar ('ramifications of an antler'), dialectal Norwegian grivla ('to branch, branch out'), Old Norse grein ('twig, branch, limb'), and cognate with modern English greave.
Cultivation
Naturally occurring groves are typically small, perhaps a few acres at most. In contrast, orchards, which are normally intentional plantings of trees, may be small or very large, like the apple orchards in Washington state and the orange groves in Florida.
Cultural significance
Historically, groves were considered sacred in pagan, pre-Christian Germanic and Celtic cultures. Helen F. Leslie-Jacobsen argues that "we can assume that sacred groves actually existed due to repeated mentions in historiographical and ethnographical accounts. e.g. Tacitus, Germania."
| Physical sciences | Forests | Earth science |
884402 | https://en.wikipedia.org/wiki/Cacomistle | Cacomistle | The cacomistle (; Bassariscus sumichrasti), also spelled cacomixtle, is a primarily nocturnal, arboreal, omnivorous member of the carnivoran family Procyonidae (coatis, kinkajous and raccoons). Depending on the location, its preferred habitats are humid and tropical evergreen jungle and montane cloud forests; seasonally, it may venture into drier, deciduous forests.
Although its total population is listed as being of "least concern" (i.e., stable), the cacomistle is still a highly cryptic, secretive animal, and generally an uncommon sight throughout much of its range (from southern México to western Panamá); this fact is especially true in Costa Rica, where it inhabits only a very small area. Additionally, the species is completely dependent on trees and dense vegetation for habitat, making it particularly susceptible to deforestation.
The name cacomistle comes from the Nahuatl language (tlahcomiztli) and means "half-cat" or "half-puma"; the same name is also given, by some, to the North American Bassariscus astutus, more commonly known as the ringtail (or, semi-inaccurately, ringtail 'cat'). This "sister species" of the cacomistle inhabits a much more northerly and less tropical range, from arid Northern Mexico into the Southwestern United States.
Taxonomy
The cacomistle is one of two extant species in the genus Bassariscus, along with its close relative, the North American ringtail (Bassariscus astutus). Together, they form the Procyoninae, a subfamily of the greater Procyonidae of the Carnivora order, thus placing them with raccoons, coatis, olingos and kinkajous.
Currently, six regional subspecies of Bassariscus sumichrasti are recognized:
Campeche cacomistle (Bassariscus sumichrasti campechensis)
Central American cacomistle (B. s. sumichrasti)
Guerrero cacomistle (B. s. latrans)
Northern Central American cacomistle (B. s. variabilis)
Oaxaca cacomistle (B. s. oaxacensis)
Panamá cacomistle (B. s. notinus)
Description
Bassariscus sumichrasti can grow to around 38–47 cm in body length, with a tail of roughly the same length or longer adding a further 39–53 cm to the animal's total length. The male cacomistle is often slightly longer-bodied than the female; however, both males and females weigh about the same, usually between 1 and 1.5 kg. Their bodies are usually covered in grey or light brownish fur, in stark contrast to the black-and-white striped tail. The tail markings are most defined near the animal's posterior end, gradually fading to solid black at the tip of the tail.
To the untrained eye, Bassariscus sumichrasti may be visually confused with its close relative, Bassariscus astutus, the ringtail; however, in addition to a more northerly distribution, the ringtail, unlike the cacomistle, does not have retractable claws. The cacomistle can also be identified by its faded tail markings and ears that end in a distinct point.
Distribution and habitat
The cacomistle inhabits the tropical and subtropical forests of North America (Mexico) and south into Central America, ranging through Panama. These animals are quite solitary and thus spread themselves out, each cacomistle having a home range of at least 20 hectares (an area equivalent to about 20 sports fields); they are typically seen in the middle and upper levels of the canopy. Throughout its broad range, this species inhabits a wide variety of forest ecosystems. In Mexico, the cacomistle tends to avoid oak forests, secondary forest, and overgrown pastures, but in Costa Rica it has been shown to favor those exact habitats.
Diet
The cacomistle is usually considered a generalist species, as it can survive on a wide variety of different foods. Their diet varies from season-to-season, consisting primarily of fruits, flowers, nectar, invertebrates and also some small vertebrates, such as lizards, frogs, toads, and rodents. The specificity of these food options depends on what is available in the particular habitat in which an individual dwells. The various genera of bromeliads (Bromeliaceae family) found throughout the cacomistle's range are often an excellent source for food, especially in the southern end of the species' range, as these plants naturally collect rain water, which in turn brings insects and many small animals found high in the canopy; in addition, the bromeliad itself is often consumed by some omnivorous species.
Reproduction
Mating season is the only time cacomistles interact with each other, and even then only briefly, as the female is receptive to male approaches for just one day. After mating, the female cacomistle undergoes a gestation period of approximately two months before giving birth to a single offspring. When the cub is three months old it is weaned, and it is then taught hunting and survival skills by its mother before going off to establish its own territory.
| Biology and health sciences | Procyonidae | Animals |
884437 | https://en.wikipedia.org/wiki/Artesian%20well | Artesian well | An artesian well is a well that brings groundwater to the surface without pumping because it is under pressure within a body of rock or sediment known as an aquifer. When trapped water in an aquifer is surrounded by layers of impermeable rock or clay, which apply positive pressure to the water, it is known as an artesian aquifer. If a well were to be sunk into an artesian aquifer, water in the well-pipe would rise to a height corresponding to the point where hydrostatic equilibrium is reached.
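The rise height follows from elementary hydrostatics; the figures below are illustrative round numbers rather than values for any particular well. If the confined water at the point where the well is sunk is at gauge pressure P, the water column in the well-pipe rises until its weight balances that pressure:

$$h = \frac{P}{\rho g},$$

where ρ ≈ 1000 kg/m³ is the density of water and g ≈ 9.8 m/s² is the gravitational acceleration. A confining gauge pressure of about 200 kPa, for example, would support a column of roughly 20 m; if the ground surface lies below that height, the well flows without pumping.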
A well drilled into such an aquifer is called an artesian well. If water reaches the ground surface under the natural pressure of the aquifer, the well is termed a flowing artesian well.
Fossil water aquifers can also be artesian if they are under sufficient pressure from the surrounding rocks, similar to how many newly tapped oil wells are pressurized.
Not all aquifers are artesian (i.e., water table aquifers occur where the groundwater level at the top of the aquifer is at equilibrium with atmospheric pressure). An artesian aquifer is recharged where the water table in its recharge zone lies at a higher elevation than the head of the well.
History
The first mechanically accurate explanation for artesian wells was given by Al-Biruni. Artesian wells were named after Artois in France, where many artesian wells were drilled by Carthusian monks from 1126.
| Physical sciences | Hydrology | Earth science |
884809 | https://en.wikipedia.org/wiki/Pikaia | Pikaia | Pikaia gracilens is an extinct, primitive chordate marine animal known from the Middle Cambrian Burgess Shale of British Columbia. Described in 1911 by Charles Doolittle Walcott as an annelid, and in 1979 by Harry B. Whittington and Simon Conway Morris as a chordate, it became "the most famous early chordate fossil", or "famously known as the earliest described Cambrian chordate". It is estimated to have lived during the latter period of the Cambrian explosion. Since its initial discovery, more than a hundred specimens have been recovered.
The body structure resembles that of the lancelet and it swam perhaps much like an eel. A notochord and myomeres (segmented blocks of skeletal muscles) span the entire length of the body, and are considered the defining signatures of chordate characters. Its primitive nature is indicated by the body covering, a cuticle, which is characteristic of invertebrates and some protochordates. A reinterpretation in 2024 found evidence of the gut canal, dorsal nerve cord and myomeres, and suggested that the taxon was previously interpreted upside down.
The exact phylogenetic position is unclear, though recent studies suggest that it is likely a stem-chordate with crown group traits. Previously proposed affinities include those of cephalochordata, craniata, or a stem-chordate not closely related to any extant lineage. Popularly but falsely attributed as an ancestor of all vertebrates, or the oldest fish, or the oldest ancestor of humans, it is generally viewed as a basal chordate alongside other Cambrian chordates; it is a close relative of vertebrate ancestors but it is not an ancestor itself.
Discovery
The fossils of Pikaia gracilens were discovered by Charles Walcott in the Burgess Shale member of the Stephen Formation in British Columbia, and he described them in 1911. He named it after Pika Peak, a mountain in Alberta, Canada. Based on the obvious and regular segmentation of the body, a feature of annelids, Walcott classified it as a polychaete worm and created a new family, Pikaidae, for it. (Princeton palaeontologist Benjamin Franklin Howell changed the name of the family to Pikaiidae in 1962.) Walcott was aware of the limitations of his classification, as he noted: "I am unable to place it within any of the families of the Polychaeta, owing to the absence of parapodia [paired protrusions on the sides of polychaete worms] on the body segments back of the fifth."
University of Cambridge palaeontologist Harry B. Whittington and his student Simon Conway Morris re-examined the Burgess Shale fauna and noted the anatomical details of Pikaia for the first time. The fossil specimens bear features of a notochord and muscle blocks, which are fundamental structures of chordates, not of annelids. In 1977, Conway Morris presented a paper that indicated the possible chordate position, without further explanation. He and Whittington were convinced that the animal was obviously a chordate, as they wrote in Scientific American in 1979: "Finally, we find among the Burgess Shale fauna one of the earliest-known invertebrate representatives of our own conspicuous corner of the animal kingdom: the chordate phylum... The chordates are represented in the Burgess Shale by the genus Pikaia and the single species P. gracilens." Conway Morris formally placed P. gracilens among the chordates in a paper in the Annual Review of Ecology and Systematics that same year. However, he provided no structural analyses, such as microscopic examination, to confirm the chordate features, and the comparative description earned the animal only a "putative" chordate status. The fossil's chordate nature was received sceptically for several decades. Only in 2012, when a detailed analysis was reported by Conway Morris and Jean-Bernard Caron, did the chordate position become generally accepted.
The fossils are found only in a restricted series of horizons in the strata exposed on Fossil Ridge, close to the Yoho National Park. From the same location, other fish-like animal fossils named Metaspriggina were discovered in 1993. Conway Morris identified the animals as another Cambrian chordate. The fossil specimens are preserved in the Smithsonian Institution and the Royal Ontario Museum.
Description
Pikaia has a lancelet-like body, tapering at both ends, laterally flattened, and lacking a well-defined head. It measures an average of about in length. Walcott recorded the longest individuals as in length. Pikaia has a pair of large, antenna-like tentacles on its head that resemble those of invertebrates such as snails. The attachment of the tentacles gives the head a two-lobed structure. The tentacles may be comparable to those of the present-day hagfish, a jawless chordate. It has a small circular mouth that could be used to eat small food particles in a single bite. There is a series of short appendages on either side of the underside of the head just behind the mouth, whose exact nature and function are unknown. The pharynx is associated with six pairs of slits bearing tiny filaments that may have served as a respiratory apparatus. In these ways, it differs from the modern lancelets, which have distinct pharyngeal gill slits on either side of the pharynx that are used for filter feeding.
A major primitive structure of Pikaia is a cuticle as its body covering. Cuticle is a hard protein layer predominantly found in invertebrates such as arthropods, molluscs, echinoderms and nematodes. Unlike a typical cuticle, the cuticle of Pikaia does not have hard extracellular (exoskeleton) protection, and the entire body is essentially soft-bodied. Although primitive, Pikaia shows the essential prerequisites for vertebrates. When alive, Pikaia was a compressed, leaf-shaped animal with an expanded tail fin; the flattened body is divided into pairs of segmented muscle blocks, seen as faint vertical lines. The muscles lie on either side of a flexible structure resembling a rod that runs from the tip of the head to the tip of the tail.
Pikaia was an active and free swimmer. It likely swam by throwing its body into a series of S-shaped, zigzag curves, similar to the movement of eels; fish inherited the same swimming movement, but they generally have stiffer backbones. These adaptations may have allowed Pikaia to filter particles from the water as it swam along. Pikaia was probably a slow swimmer, since it lacked the fast-twitch fibers that are associated with rapid swimming in modern chordates.
Reinterpretations
Walcott's original summary of the description of Pikaia reads: "Body elongate, slender, and tapering at each end. It is formed of many segments that are defined by strong annular shiny lines. Head small with two large eyes and two tentacles... Back of the head the first five segments carry short parapodia that appear to be divided into two parts. The enteric canal extends from end to end without change in character... This was one of the active, free-swimming annelids that suggest the Nephthydidae of the Polychaeta." Whittington and Conway Morris were the first to realise that Walcott's description and classification were unreliable and mostly inaccurate. They compared the body segments described by Walcott with those of living animals and found that they were similar to the muscle bundles of chordates such as the living Amphioxus (Branchiostoma) and fishes, not to the superficial segments of annelids. They reasoned that such muscles would have been essential for swimming through the water with wriggling motions. The enteric canal observed by Walcott was not an ordinary digestive tract; it runs alongside a stiff rod that resembles a notochord. They reported in 1979: "Although Pikaia differs from Amphioxus in several important respects, the conclusion that it is not a worm but a chordate appears inescapable."
Conway Morris was so convinced that the longitudinal rod was a notochord and that the segments were muscle blocks that he concluded Pikaia "is a primitive chordate rather than a polychaete. The earliest fish scales are Upper Cambrian, and Pikaia may not be far removed from the ancestral fish." In 1982, he added in his Atlas of the Burgess Shale that Pikaia had one or more fins, but did not specify where they were present.
Pikaia was not popularly known as a chordate fossil or as an ancient chordate until 1989. That year, Harvard University palaeontologist Stephen Jay Gould wrote in his book Wonderful Life: The Burgess Shale and the Nature of History: "Pikaia is not an annelid worm. It is a chordate, a member of our own phylum—in fact, the first recorded member of our immediate ancestry." From this remark Pikaia became generally recognised as a chordate and ancestor of vertebrates.
In 1993, Conway Morris came up with another possible chordate feature. He identified structures that looked like gill slits, but gave a cautious remark: "[They] may have been present, but are hard to identify with certainty in the compressed material available." The tiny pores on the side of the pharynx are what would normally be gill slits in living chordates. He also noticed that Pikaia is similar to Amphioxus in most general respects, the major difference being that its notochord does not reach the anterior end.
Not all palaeontologists were convinced of the chordate designation without better analysis. In 2001, Nicholas D. Holland from the Scripps Institution of Oceanography and Junyuan Chen from the Chinese Academy of Sciences criticised the presentation in Wonderful Life, saying that the "reinterpretation [of Pikaia as a chordate] became almost universally accepted after its unqualified and forceful endorsement by Gould", and concluding that "the cephalochordate affinity of Pikaia is at best only weakly indicated by the characters visible in fossils discovered so far." In 2010, an international team of palaeontologists argued that Pikaia has sufficiently invertebrate-like characters, and that it mostly looks like a much younger extinct animal, the Tully monster (Tullimonstrum gregarium), which is still debated as either an invertebrate or a chordate.
Another feature of Pikaia fossils that complicates its acceptance as a chordate is its distinctly invertebrate character: its preservational mode suggests that it had a cuticle. A cuticle as a body covering is uncharacteristic of vertebrates but is a dominant feature of invertebrates. The presence of earlier chordates among the Chengjiang fauna, including Haikouichthys and Myllokunmingia, appears to show that a cuticle is not necessary for preservation, overruling the taphonomic argument, but the presence of tentacles remains intriguing, and the organism cannot be assigned conclusively, even to the vertebrate stem group. Its anatomy closely resembles that of the modern Branchiostoma.
A fossil species, Myoscolex ateles, discovered in 1979 in the Cambrian Emu Bay Shale of Kangaroo Island in South Australia, has been debated as among the oldest annelids, or at least as belonging to another invertebrate group. Polish palaeontologist Jerzy Dzik, in his formal description of 2003, noted that it "closely resembles the slightly geologically younger Pikaia" in having a smooth cuticle as well as muscular segmentation, and projections on its backside (ventral chaetae) that look like Pikaia's tentacles. He concluded: "In fact, there is little evidence for chordate affinities of Pikaia. Its relationship with Myoscolex [as annelid in his proposition] appears a much better solution. Both were initially identified as polychaetes and this line of inference perhaps deserves confrontation with more recent evidence than that available to the authors who proposed these genera."
Comprehensive description
The first comprehensive description of Pikaia was published by Conway Morris and Jean-Bernard Caron in the May 2012 issue of Biological Reviews. The anatomical examination and interpretation, based on 114 fossil specimens, confirm the classification as a chordate. According to the new assessment, Pikaia fossils show important features that define the animal as a primitive chordate. All Pikaia fossils are in the range of in length, with an average of . The body is laterally compressed (taller than wide) and fusiform (tapering at both ends); the exact width and height are variable, but the height is normally about twice the width throughout the body.
The head is bilaterally symmetrical, with a distinct pair of tentacles. Because the head is small, only about 1 mm in diameter, its structural details are indistinguishable. Some specimens show a darker central line on the tentacles that may represent a nerve fibre, suggesting that the tentacles served as sensory feelers. A mouth is marked by a small opening at the anterior end of the gut towards the underside of the head. There are no jaws or teeth. Walcott had mentioned the presence of two large eyes, but no specimens, including those in Walcott's original collection, show any evidence of eyes.
One of the most unusual body parts is a series of appendages just posterior to the tentacles. Walcott had called these appendages parapodia, a kind of body protrusion that aids locomotion in animals such as snails, and mentioned five parapodia in each individual. He was puzzled by their absence from the greater part of the body, and other specimens have up to nine such appendages, which could not be parapodia. These external appendages were reinterpreted as gills in a 2024 study. Fins are present as an expansion of the body on the dorsal and ventral sides. They are absent in many specimens, indicating that they are delicate membranes that were lost during fossilisation. However, the 2024 study suggested that Pikaia had previously been interpreted upside down, meaning that the "dorsal" and "ventral" sides of Pikaia were actually inverted.
The back of Pikaia fossils shows a hollow tubular structure that extends through most of the body length, but not the anterior region. It is easily noticeable as a highly light-reflective portion and is known as the dorsal organ. Once described as the notochord, this structure's nature is not yet fully resolved; it could be a storage organ. The true notochord, along with a nerve cord, is a fine lateral line that runs just beneath the thick dorsal organ. A 2024 study instead found evidence of the gut canal, dorsal nerve cord and myomeres in the specimens, providing further diagnostic evidence that Pikaia is a chordate.
The main chordate character is a series of myomeres that extends from the anterior to the posterior region. On average, there are 100 such myomeres in each individual. The muscle segments are not simply "annular shiny lines" as Walcott described, but are concentric bands in the form of V-shaped chevrons. The myomeres at the anterior end are simpler in appearance and show a circular arrangement. Conway Morris and Caron concluded: "Whilst the possibility that Pikaia is simply convergent on the chordates cannot be dismissed, we prefer to build a scenario that regards Pikaia as the most stem-ward of the chordates with links to the phylogenetically controversial yunnanozoans. This hypothesis has implications for the evolution of the myomeres, notochord and gills."
Evolutionary importance
There is much debate in scientific circles over whether Pikaia is a vertebrate ancestor, its worm-like appearance notwithstanding. It looks like a worm that has been flattened sideways (lateral compression). The fossils compressed within the Burgess Shale show chordate features such as traces of an elongate notochord, dorsal nerve cord, and blocks of muscles (myotomes) down either side of the body – all critical features for the evolution of the vertebrates.
The notochord, a flexible rod-like structure that runs along the back of the animal, lengthens and stiffens the body so that it can be flexed from side to side by the muscle blocks for swimming. In the fish and all subsequent vertebrates, the notochord forms the backbone (or vertebral column). The backbone strengthens the body, supports strut-like limbs, and protects the vital dorsal nerve cord, while at the same time allowing the body to bend.
A Pikaia lookalike, the lancelet Branchiostoma, still exists today. With a notochord and paired muscle blocks, the lancelet and Pikaia belong to the chordate group of animals from which the vertebrates descended. Molecular studies have refuted earlier hypotheses that lancelets might be the closest living relative to the vertebrates, instead favoring tunicates in this position; other extant and fossil groups, such as acorn worms and graptolites, are more primitive.
The presence of cuticle, one of the principal characters of higher invertebrates, in Pikaia can be understood from the evolutionary trends. A Cambrian invertebrate, Myoscolex ateles was described to be structurally similar to Pikaia particularly in having smooth cuticle as well as muscular segmentation, and projections on its backside (ventral chaetae) that look like Pikaia's tentacles. Although chordates normally lack the cuticle, a type of cuticle is present in some cephalochordates, indicating that primitive characters are retained in lower chordates.
Subsequently, Mallatt and Holland reconsidered Conway Morris and Caron's description, and concluded that many of the newly recognized characters are unique, already-divergent specializations that would not be helpful for establishing Pikaia as a basal chordate.
Development of the head
The first sign of head development, cephalization, is seen in chordates such as Pikaia and Branchiostoma. It is thought that development of a head structure resulted from a long body shape, a swimming habit, and a mouth at the end that came into contact with the environment first, as the animal swam forward. The search for food required ways of continually testing what lay ahead so it is thought that anatomical structures for seeing, feeling, and smelling developed around the mouth. The information these structures gathered was processed by a swelling of the nerve cord (efflorescence) – the precursor of the brain. Altogether, these front-end structures formed the rather indistinct heads of these chordates during the Cambrian period.
Evolutionary interpretation
Once thought to be closely related to the ancestor of all vertebrates, Pikaia has received particular attention among the multitude of animal fossils found in the famous Burgess Shale and other Cambrian faunas. In 1979, Whittington and Conway Morris first explained the evolutionary importance of Pikaia. Realising the fossil to be that of a chordate from Cambrian rocks, they argued that chordates could have originated much earlier than expected, commenting: "The superb preservation of this Middle Cambrian organism [Pikaia] makes it a landmark in the history of the phylum [Chordata] to which all vertebrates, including man, belong." It is because of this recognition of Pikaia as an ancient chordate that it is often misleadingly and falsely described as an ancestor of all vertebrates, or the oldest fish, or the oldest ancestor of humans.
Before Pikaia and other Cambrian chordates were fully appreciated, it was generally believed that the first chordates appeared much later, such as in Ordovician (484–443 mya). The establishment of Cambrian chordates, according to Stephen Jay Gould, prompted "revised views of evolution, ecology and development," and remarked: "So much for chordate uniqueness marked by slightly later evolution." However, Gould did not believe that Pikaia itself was unique as an early chordate or that it was "the actual ancestor of vertebrates;" he presumed that there could be undiscovered fossils that are more closely linked to vertebrate ancestry.
Gould's interpretation and evolutionary contingency
Gould, in his presidential address of the Paleontological Society on 27 October 1988, cited Pikaia to explain the trends of evolutionary changes: "Wind back life's tape to the Burgess (first erasing what actually came after), let it play again, and this time a quite different cast may emerge. If the cast lacked Pikaia, the first chordate, we might not be here—and the world would be no worse... Let us thank our lucky stars for the survival of Pikaia."
He elaborated the same idea in "An epilogue on Pikaia" in his book Wonderful Life, "to save the best for the last," in which he stated: "Pikaia is the missing and final link in our story of contingency—the direct connection between Burgess decimation and eventual human evolution... Wind the tape of life back to Burgess times, and let it play again. If Pikaia does not survive in the replay, we are wiped out of future history—all of us, from shark to robin to orangutan...
And so, if you wish to ask the question of the age—why do humans exist?—a major part of the answer, touching those aspects of the issue that science can treat at all, must be: because Pikaia survived the Burgess decimation." This interpretation, that the outcomes of evolution are unpredictable, is known as evolutionary contingency. Gould, from this statement, is regarded as "the most famous proponent" of the concept. His idea has inspired much research involving evolutionary contingency, from palaeontology to molecular biology. He used Pikaia among the Cambrian animals as an epitome of a contingent event in the entire evolution of life; if Pikaia had not existed, the rest of the chordate animals might not have evolved, completely changing the diversity of life as we know it. According to him, contingency is a major factor that drives large-scale evolution (macroevolution) and dictates that evolution has no inevitable destiny or outcome. However, as Gould explained, "The bad news is that we can't possibly perform the experiment."
Ecology
Pikaia is suggested to have been an active swimming organism that swam close to the seafloor (nektobenthic) using side to side undulations of its flattened posterior for propulsion. The anterior appendages are unlikely to have been used in feeding, and may have had a respiratory function. Pikaia is suggested to have fed on small particles of organic matter.
| Biology and health sciences | Prehistoric agnathae and early chordates | Animals |
884829 | https://en.wikipedia.org/wiki/Toxocariasis | Toxocariasis | Toxocariasis is an illness of humans caused by the dog roundworm (Toxocara canis) and, less frequently, the cat roundworm (Toxocara cati). These are the most common intestinal roundworms of dogs, coyotes, wolves and foxes and domestic cats, respectively. Humans are among the many "accidental" or paratenic hosts of these roundworms.
While this zoonotic infection is usually asymptomatic, it may cause severe disease. There are three distinct syndromes of toxocariasis: covert toxocariasis is a relatively mild illness very similar to Löffler's syndrome. It is characterized by fever, eosinophilia, urticaria, enlarged lymph nodes, cough, bronchospasm, wheezing, abdominal pain, headaches, and/or hepatosplenomegaly. Visceral larva migrans (VLM) is a more severe form of the disease; signs and symptoms depend on the specific organ system(s) involved. Lung involvement may manifest as shortness of breath, interstitial lung disease, pleural effusion, and even respiratory failure. Brain involvement may manifest as meningitis, encephalitis, or epileptic seizures. Cardiac involvement may manifest as myocarditis. Ocular larva migrans (OLM) is the third syndrome, manifesting as uveitis, endophthalmitis, visual impairment or even blindness in the affected eye.
Signs and symptoms
Physiological reactions to Toxocara infection depend on the host's immune response and the parasitic load. Most cases of Toxocara infection are asymptomatic, especially in adults. When symptoms do occur, they are the result of migration of second-stage Toxocara larvae through the body.
Covert toxocariasis
Covert toxocariasis is the least serious of the three syndromes and is believed to be due to chronic exposure. Signs and symptoms of covert toxocariasis are coughing, fever, abdominal pain, headaches, and changes in behavior and ability to sleep. Upon medical examination, wheezing, hepatomegaly, and lymphadenitis are often noted.
Visceral larva migrans
High parasitic loads or repeated infection can lead to visceral larva migrans (VLM). VLM is primarily diagnosed in young children because they are more prone to exposure and ingestion of infective eggs. Toxocara infection commonly resolves itself within weeks, but chronic eosinophilia may result. In VLM, larvae migration incites inflammation of internal organs and sometimes the central nervous system. Symptoms depend on the organs affected. Children can present with pallor, fatigue, weight loss, anorexia, fever, headache, urticaria skin rash, cough, asthma, chest tightness, increased irritability, abdominal pain, nausea, and vomiting. Sometimes the subcutaneous migration tracks of the larvae can be seen. Children are commonly diagnosed with pneumonia, bronchospasms, chronic pulmonary inflammation, hypereosinophilia, hepatomegaly, hypergammaglobulinaemia (IgM, IgG, and IgE classes), leukocytosis, and elevated anti-A and anti-B isohaemagglutinins. Severe cases have occurred in people who are hypersensitive to allergens; in rare cases, epilepsy, inflammation of the heart, pleural effusion, respiratory failure, and death have resulted from VLM.
Ocular larva migrans
Ocular larva migrans (OLM) is rare compared with VLM. A light Toxocara burden is thought to induce a low immune response, allowing a larva to enter the host's eye. Although there have been cases of concurrent OLM and VLM, such cases are exceptionally rare. OLM often occurs in just one eye and arises from a single larva migrating into and encysting within the orbit. Loss of vision occurs over days or weeks. Other signs and symptoms are red eye, white pupil, fixed pupil, retinal fibrosis, retinal detachment, inflammation of the eye tissues, retinal granulomas, and strabismus. Ocular granulomas resulting from OLM are frequently misdiagnosed as retinoblastomas. Toxocara damage in the eye is permanent and can result in blindness.
Other
Skin manifestations commonly include chronic urticaria, chronic pruritus, and miscellaneous forms of eczema.
A case study published in 2008 supported the hypothesis that eosinophilic cellulitis may also be caused by infection with Toxocara: the adult patient presented with eosinophilic cellulitis, hepatosplenomegaly, anemia, and a positive ELISA for T. canis.
Cause
Transmission
Toxocara is usually transmitted to humans through ingestion of infective eggs. T. canis can lay around 200,000 eggs per day. These eggs are passed in cat or dog feces, but the defecation habits of dogs cause T. canis transmission to be more common than that of T. cati. Both Toxocara canis and Toxocara cati eggs require a several week incubation period in moist, humid weather outside a host before becoming infective, so fresh eggs cannot cause toxocariasis.
Many objects and surfaces can become contaminated with infectious Toxocara eggs. Flies that feed on feces can spread Toxocara eggs to surfaces or foods. Young children who put contaminated objects in their mouths or eat dirt (pica) are at risk of developing symptoms. Humans can also contaminate foods by not washing their hands before eating.
Humans are not the only accidental hosts of Toxocara. Eating undercooked rabbit, chicken, or sheep can lead to infection; encysted larvae in the meat can become reactivated and migrate through a human host, causing toxocariasis. Special attention should be paid to thoroughly cooking giblets and liver to avoid transmission.
Incubation period
The incubation period for Toxocara canis and T. cati eggs depends on temperature and humidity. T. canis females, specifically, are capable of producing up to 200,000 eggs a day; the eggs require a minimum of 2–6 weeks, and up to a couple of months, to develop fully into the infectious stage. Under ideal summer conditions, eggs can mature to the infective stage after two weeks outside of a host. Provided sufficient oxygen and moisture are available, Toxocara eggs can remain infectious for years, as their resistant outer shell protects them from most environmental threats. However, as identified in a case study presented in the Journal of Helminthology, the second stage of larval development is strictly vulnerable to certain environmental factors: high temperatures and low moisture levels will quickly degrade larvae during this stage.
Reservoir
Dogs and foxes are the reservoir for Toxocara canis, but puppies and cubs pose the greatest risk of spreading the infection to humans. Infection in most adult dogs is characterized by encysted second-stage larvae. However, these larvae can reactivate in pregnant females and cross the placental barrier to infect the pups. Vertical transmission can also occur through breast milk. Infectious mothers, and puppies under five weeks old, pass eggs in their feces. Approximately 50% of puppies and 20% of adult dogs are infected with T. canis.
Cats are the reservoir for Toxocara cati. As with T. canis, encysted second-stage larvae in pregnant or lactating cats reactivate. However, vertical transmission can only occur through breastfeeding.
Flies can act as mechanical vectors for Toxocara, but most infections occur without a vector. Most cases of toxocariasis instead result from ingestion of infective eggs following direct physical contact with contaminated feces or soil.
Morphology
Both species produce eggs that are brown and pitted. T. canis eggs measure 75-90 μm and are spherical, whereas the eggs of T. cati are 65-70 μm in diameter and oblong. Second-stage larvae hatch from these eggs and are approximately 0.5mm long and 0.02mm wide. Adults of both species have complete digestive systems and three lips, each composed of a dentigerous ridge.
Adult T. canis are found only within dogs and foxes and the males are 4–6 cm in length, with a curved posterior end. The males each have spicules and one “tubular testis.” Females can be as long as 15 cm, with the vulva stretching one-third of their body length. The females do not curve at the posterior end.
T. cati adult females are approximately 10 cm long, while males are typically 6 cm or less. The T. cati adults only occur within cats, and male T. cati are curved at the posterior end.
Life cycle
Cats, dogs, and foxes can become infected with Toxocara through the ingestion of eggs or by transmission of the larvae from a mother to her offspring. Transmission to cats and dogs can also occur by ingestion of infected accidental hosts, such as earthworms, cockroaches, rodents, rabbits, chickens, or sheep.
Eggs hatch as second-stage larvae in the intestines of the cat, dog, or fox host (for consistency, this article will assume that second-stage larvae emerge from Toxocara eggs, although there is debate as to whether larvae are truly in their second or third stage of development). Larvae enter the bloodstream and migrate to the lungs, where they are coughed up and swallowed. The larvae mature into adults within the small intestine of a cat, dog, or fox, where mating and egg-laying occurs. Eggs are passed in the feces and only become infective after three weeks outside of a host. During this incubation period, molting from first to second (and possibly third) stage larva takes place within the egg. In most adult dogs, cats and foxes, the full lifecycle does not occur, but instead second stage larvae encyst after a period of migration through the body. Reactivation of the larvae is common only in pregnant or lactating cats, dogs and foxes. The full lifecycle usually only occurs in these females and their offspring.
Second-stage larvae will also hatch in the small intestine of an accidental host, such as a human, after ingestion of infective eggs. The larvae will then migrate through the organs and tissues of the accidental host, most commonly the lungs, liver, eyes, and brain. Since L2 larvae cannot mature in accidental hosts, after this period of migration, Toxocara larvae will encyst as second stage larvae.
Diagnosis
Finding Toxocara larvae within a patient is the only definitive diagnosis for toxocariasis; however, biopsies to look for second-stage larvae in humans are generally not very effective. PCR, ELISA, and serological testing are more commonly used to diagnose Toxocara infection. Serological tests are dependent on the number of larvae within the patient, and are unfortunately not very specific. ELISAs are much more reliable and currently have a 78% sensitivity and a 90% specificity. A 2007 study announced an ELISA specific to Toxocara canis, which will minimize false positives from cross reactions with similar roundworms and will help distinguish if a patient is infected with T. canis or T. cati. OLM is often diagnosed after a clinical examination. Granulomas can be found throughout the body and can be visualized using ultrasound, MRI, and CT technologies.
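As a simple worked illustration of what such sensitivity and specificity mean in practice (using the figures quoted above together with, purely for illustration, the roughly 14% U.S. seroprevalence reported in the Epidemiology section below), the positive predictive value of a single positive ELISA would be

$$\mathrm{PPV} = \frac{0.78 \times 0.14}{0.78 \times 0.14 + (1 - 0.90)(1 - 0.14)} = \frac{0.109}{0.109 + 0.086} \approx 0.56,$$

that is, at that background prevalence a positive result corresponds to only about a 56% chance of true exposure, which is one reason serology is interpreted alongside clinical findings, eosinophil counts, and imaging rather than in isolation.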
Prevention
Actively involving veterinarians and pet owners is important for controlling the transmission of Toxocara from pets to humans. A group very actively involved in promoting a reduction of infections in dogs in the United States is the Companion Animal Parasite Council (CAPC). Since pregnant or lactating dogs and cats and their offspring carry the highest active parasitic loads, these animals should be placed on a deworming program. Pet feces should be picked up and disposed of or buried, as they may contain Toxocara eggs. Practicing this measure in public areas, such as parks and beaches, is especially essential for decreasing transmission. Roundworm eggs have been found in up to 20% of soil samples from U.S. playgrounds. Also, sandboxes should be covered when not in use to prevent cats from using them as litter boxes. Hand washing before eating and after playing with pets, as well as after handling dirt, will reduce the chances of ingesting Toxocara eggs. Washing all fruits and vegetables, keeping pets out of gardens, and thoroughly cooking meats can also prevent transmission. Finally, teaching children not to place nonfood items, especially dirt, in their mouths will drastically reduce the chances of infection.
Toxocariasis has been named one of the neglected diseases of US poverty, because of its prevalence in Appalachia, the southern U.S., inner city settings, and minority populations.
There is currently no vaccine available or under development.
The mitochondrial genomes of both T. cati and T. canis were sequenced in 2008, which could lead to breakthroughs in treatment and prevention.
Treatment
Toxocariasis will often resolve itself because the Toxocara larvae cannot mature within human hosts. Corticosteroids are prescribed in severe cases of VLM or if the patient is diagnosed with OLM. Either albendazole (preferred) or mebendazole (“second line therapy”) may be prescribed. Granulomas can be surgically removed, or laser photocoagulation and cryoretinopexy can be used to destroy ocular granulomas.
Visceral toxocariasis in humans can be treated with antiparasitic drugs such as albendazole or mebendazole, tiabendazole or diethylcarbamazine usually in combination with anti-inflammatory medications. Steroids have been utilized with some positive results. Anti-helminthic therapy is reserved for severe infections (lungs, brain) because therapy may induce, due to massive larval killing, a strong inflammatory response. Ocular toxocariasis is more difficult to treat and usually consists of measures to prevent progressive damage to the eye.
Epidemiology
Humans are accidental hosts of Toxocara, yet toxocariasis is seen throughout the world. Most cases of toxocariasis are seen in people under the age of twenty. Seroprevalence is higher in developing countries, but can be considerable in developed countries as well. In Bali, St. Lucia, Nepal, and other countries, seroprevalence is over fifty percent. Prior to 2007, the U.S. seroprevalence was thought to be around 5% in children. However, Won et al. discovered that U.S. seroprevalence is 14% for the population at large. In many countries, toxocariasis is considered very rare. Approximately 10,000 clinical cases are seen a year in the U.S., with ten percent being OLM. Permanent vision loss occurs in 700 of these cases.
Young children are at the greatest risk of infection because they play outside and tend to place contaminated objects and dirt in their mouths. Dog ownership is another known risk factor for transmission. There is also a significant correlation between high Toxocara antibody titers and epilepsy in children.
Parasitic loads as high as 300 larvae in a single gram of liver have been noted in humans. The "excretory-secretory antigens of larvae ... released from their outer epicuticle coat [and] ... readily sloughed off when bound by specific antibodies" incite the host's immune response. The tipping point between the development of VLM and OLM is believed to be between 100 and 200 larvae. The lighter infection in OLM is believed to stimulate a lower immune response and allow for the migration of a larva into the eye. Larvae are thought to enter the eye through the optic nerve, central retinal artery, short posterior ciliary arteries, soft tissues, or cerebrospinal fluid. Ocular granulomas that form around a larva typically are peripheral in the retina or optic disc.
Visceral larva migrans seems to affect children aged 1–4 more often, while ocular larva migrans more frequently affects children aged 7–8. Between 4.6% and 23% of US children have been infected with the dog roundworm egg. This number is much higher in other parts of the world: in tropical countries seroprevalence can reach 80–90%, for example in Colombia, where up to 81% of children have been infected, and in Honduras, where seroprevalence among school-age children was reported to be 88%. In the western part of the world, seroprevalence is lower, around 35–42%.
History
Werner described a parasitic nematode in dogs in 1782, which he named Ascaris canis. Johnston determined that what Werner had described was actually a member of the genus Toxocara, established by Stiles in 1905. Fülleborn speculated that T. canis larvae might cause granulomatous nodules in humans. In 1947, Perlingiero and Gyorgy described the first case of what was probably toxocariasis. Their patient was a 2-year-old boy from Florida who had classical symptoms and eosinophilic necrotizing granulomas. In 1950, Campbell-Wilder was the first to describe toxocariasis in humans; she published a paper describing ocular granulomas in patients with endophthalmitis, Coats' disease, or pseudoglioma. Two years later, Beaver et al. published the presence of Toxocara larvae in granulomas removed from patients with symptoms similar to those in Wilder's patients. The dangers of toxocariasis were first raised in Britain in the 1970s, leading to a public health scare.
Other animals
Cats
Some treatments for infection with Toxocara cati include drugs designed to cause the adult worms to become partially anaesthetized and detach from the intestinal lining, allowing them to be excreted live in the feces. Such medications include piperazine and pyrantel. These are frequently combined with the drug praziquantel which appears to cause the worm to lose its resistance to being digested by the host animal. Other effective treatments include ivermectin, milbemycin, and selamectin. Dichlorvos has also been proven to be effective as a poison, though moves to ban it over concerns about its toxicity have made it unavailable in some areas.
Treating wild felids for this parasite, however, is difficult, because infected individuals must first be detected, and infected animals are hard to find. Once detected, the infected individuals would have to be removed from the population to lower the risk of continual exposure to the parasites. A primary method that has been used to lower the amount of infection is removal through hunting. Removal can also occur through landowners, as Dare and Watkins (2012) discovered through their research on cougars. Both hunters and landowners can provide samples that can be used to detect the presence of feline roundworm in the area, as well as help remove it from the population. This method is more practical than administering medications to wild populations, since, as mentioned above, wild animals are harder to find in order to administer medicinal care.
Medicinal care is another method used in roundworm studies, such as the experiment on managing raccoon roundworm by Smyser et al. (2013), in which medical baiting was implemented. However, medicine is often expensive, and the success of the baiting depends on whether the infected individuals consume the bait. Additionally, it can be costly (in time and resources) to check on baited areas. Removal by hunting allows agencies to reduce costs and gives them a better chance of removing infected individuals.
| Biology and health sciences | Helminthic diseases and infestations | Health |
884908 | https://en.wikipedia.org/wiki/Calabash | Calabash | Calabash (; Lagenaria siceraria), also known as bottle gourd, white-flowered gourd, long melon, birdhouse gourd, New Guinea bean, New Guinea butter bean, Tasmania bean, and opo squash, is a vine grown for its fruit. It can be either harvested young to be consumed as a vegetable, or harvested mature to be dried and used as a utensil, container, or a musical instrument. When it is fresh, the fruit has a light green smooth skin and white flesh.
Calabash fruits have a variety of shapes: they can be huge and rounded, small and bottle-shaped, or slim and serpentine, and they can grow to be over a metre long. Rounder varieties are typically called calabash gourds. The gourd was one of the world's first cultivated plants grown not primarily for food, but for use as containers. The bottle gourd may have been carried from Asia to Africa, Europe, and the Americas in the course of human migration, or by seeds floating across the oceans inside the gourd. It is known to have been domesticated worldwide, and to have been present in the New World, during the pre-Columbian era.
There is sometimes confusion when discussing "calabash" because the name is shared with the unrelated calabash tree (Crescentia cujete), whose hard, hollow fruits are also used to make utensils, containers, and musical instruments.
Etymology
The English word calabash is loaned from , which in turn derived from meaning gourd or pumpkin. The Spanish word is of pre-Roman origin. It comes from the , from -cal which means house or shell. It is a doublet of carapace and galapago. The English word is cognate with ("pumpkin; orange colour"), ("gourd, pumpkin, squash; calabash (container)"), , , , ("gourd; calabash (container)") and (and ).
History
The bottle gourd has been recovered from archaeological contexts in China and Japan dating to c. 8,000–9,000 BP, whereas in Africa, despite decades of high-quality archaeobotanical research, the earliest record of its occurrence remains the 1884 report of a bottle gourd being recovered from a 12th Dynasty tomb at Thebes dating to ca. 4,000 BP. When considered together, the genetic and archaeological information points toward L. siceraria being independently brought under domestication first in Asia, and more than 4,000 years later, in Africa.
The bottle gourd is a commonly cultivated plant in tropical and subtropical areas of the world, and was eventually domesticated in southern Africa. Stands of L. siceraria, which may be source plants and not merely domesticated stands, were reported in Zimbabwe in 2004. This apparent wild plant produces thinner-walled fruit that, when dried, would not endure the rigors of use on long journeys as a water container. Today's gourd may owe its tough, waterproof wall to selection pressures over its long history of domestication.
Gourds were cultivated in Africa, Asia, Europe, and the Americas for thousands of years before Columbus' arrival to the Americas. Polynesian specimens of calabash were found to have genetic markers suggesting hybridization from Asian and American cultivars. In Europe, Walahfrid Strabo (808–849), abbot and poet from Reichenau and advisor to the Carolingian kings, discussed the gourd in his Hortulus as one of the 23 plants of an ideal garden.
The mystery of the bottle gourd – namely that this African or Eurasian species was being grown in the Americas over 8,000 years ago – comes from the difficulty in understanding how it arrived in the Americas. The bottle gourd was theorized to have drifted across the Atlantic Ocean from Africa to South America, but in 2005 a group of researchers suggested that it may have been domesticated earlier than food crops and livestock and, like dogs, was brought into the New World at the end of the ice age by the native hunter-gatherer Paleo-Indians, which they based on a study of the genetics of archaeological samples. This study purportedly showed that gourds in American archaeological finds were more closely related to Asian variants than to African ones.
In 2014 this theory was repudiated based on a more thorough genetic study. Researchers more completely examined the plastid genomes of a broad sample of bottle gourds, and concluded that North and South American specimens were most closely related to wild African variants and could have drifted over the ocean several or many times, as long as 10,000 years ago.
Cultivation
Bottle gourds are grown by direct sowing of seeds or transplanting 15- to 20-day-old seedlings. The plant prefers well-drained, moist, organic rich soil. It requires plenty of moisture in the growing season and a warm, sunny position, sheltered from the wind. It can be cultivated in small places such as in a pot, and allowed to spread on a trellis or roof. In rural areas, many houses with thatched roofs are covered with the gourd vines. Bottle gourds grow very rapidly and their stems can reach a length of 9 m in the summer, so they need a solid support along the stem if they are to climb a pole or trellis. If planted under a tall tree, the vine may grow up to the top of the tree. To obtain more fruit, farmers sometimes cut off the tip of the vine when it has grown to 6–8 feet in length. This forces the plant to produce side branches that will bear flowers and yield more fruit.
The plant produces night-blooming white flowers. The male flowers have long peduncles, and the females have short ones with an ovary in the shape of the fruit. Sometimes the female flowers drop off without growing into a gourd because pollination fails when there is no night pollinator (probably a kind of moth) in the garden. Hand pollination can be used to solve the problem. Pollen grains are around 60 microns in length.
The first crop is ready for harvest within two months; the first flowers open about 45 days after sowing. Each plant can yield one fruit per day for the next 45 days if enough nutrients are available.
Yield ranges from 35 to 40 tons/ha per season, over a three-month cycle.
Toxicity
Like other members of the family Cucurbitaceae, gourds contain cucurbitacins that are known to be cytotoxic at a high concentration. The tetracyclic triterpenoid cucurbitacins present in fruits and vegetables of the cucumber family are responsible for the bitter taste, and could cause stomach ulcers. In extreme cases, people have died from drinking the juice of gourds.
The toxic cases are usually due to the gourd being used to make juice, which the drinkers described as being unusually bitter. In three of the lethal cases, the victims were diabetics in their 50s and 60s. In 2018, a healthy woman in her 40s was hospitalized for severe reactions after consuming the juice and died three days later from complications.
The plant is not normally toxic when eaten. The excessively bitter (and toxic) gourds are due to improper storage (temperature swings or high temperature) and over-ripening.
Nutrition
Boiled calabash is 95% water, 4% carbohydrates, 1% protein, and contains negligible fat (table). In a reference amount of , cooked calabash supplies a moderate amount of vitamin C (10% of the Daily Value), with no other micronutrients in significant amounts (table).
Culinary uses
Central America
In Central America the seeds of the bottle gourd are toasted and ground with other ingredients (including rice, cinnamon, and allspice) to make one type of the drink horchata.
East Asia
China
The calabash is frequently used in southern Chinese cuisine in either a stir-fry dish or a soup.
Japan
In Japan, it is commonly sold in the form of dried, marinated strips known as kanpyō and is used as an ingredient for making makizushi (rolled sushi).
Korea
Traditionally in Korea, the inner flesh has been eaten as namul vegetable and the outside cut in half to make bowls. Both fresh and dried flesh of bak is used in Korean cuisine. Fresh calabash flesh, scraped out, seeded, salted and squeezed to draw out moisture, is called baksok. Scraped and sun-dried calabash flesh, called bak-goji, is usually soaked before being stir-fried. Soaked bak-goji is often simmered in sauce or stir-fried before being added to japchae and gimbap. Sometimes uncooked raw baksok is seasoned to make saengchae.
Southeast Asia
Burma
In Burma, it is a popular fruit. The young leaves are also boiled and eaten with a spicy, fermented fish sauce. It can also be cut up, coated in batter and deep fried to make fritters, which are eaten with Burmese mohinga.
Philippines
In the Philippines, calabash (known locally as ) is commonly cooked in soup dishes like tinola. They are also common ingredients in noodle (pancit) dishes.
Vietnam
In Vietnam, it is a very popular vegetable, commonly cooked in soup with shrimp, meatballs, clams, various fish like freshwater catfish or snakehead fish or crab. It is also commonly stir-fried with meat or seafood, or incorporated as an ingredient of a hotpot. It is also used as a medicine. Americans have called calabashes from Vietnam "opo squash".
The shoots, tendrils, and leaves of the plant may also be eaten as greens.
South Asia
India
A popular north Indian dish is lauki chana (chana dal and diced gourd in a semi-dry gravy). In the state of Maharashtra in India, a similar preparation called dudhi chana is popular. The skin of the vegetable is used in making a dry spicy chutney preparation. It is consumed in Assam with fish curry, as boiled vegetable curry and also fried with potato and tomatoes. Lauki kheer (grated bottle gourd, sugar and milk preparation) is a dessert from Telangana, usually prepared for festive occasions. In Andhra Pradesh it is called sorakaya and is used to make sorakaya pulusu (with tamarind juice), sorakaya palakura (curry with milk and spices) and sorakaya pappu (with lentils). Lau chingri, a dish prepared with bottle gourd and prawn, is popular in West Bengal. The edible leaves and young stems of the plant are widely used in Bengali cuisine. Although popularly called lauki in Hindi in the northern part of the country, it is also called kaddu in certain parts of the country, such as eastern India. (However, "kaddu" popularly translates to "pumpkin" in northern India.) It can be consumed as a dish with rice or roti for its medicinal benefits. In Gujarat, a traditional Gujarati savoury cake called handvo is made primarily using bottle gourd (in Gujarati, dudhi), sesame seeds, flour, and often lentils. In Karnataka, bottle gourd is called Sorekayi and is used to prepare palya (stir-fry) and Sambaru (a south Indian stew). Also, crispy sorekayi dosé (dosa) is one of the popular breakfasts in Karnataka.
Bangladesh
In Bangladesh the fruit is served with rice as a common dish.
Nepal
In Nepal, in the Madheshi southern plains, preparations other than as a normal vegetable include halva and khichdi.
Pakistan
In Pakistan, the calabash is cultivated on a large scale as its fruit are a popular vegetable.
Sri Lanka
In Sri Lanka, it is used in combination with rice to make a variety of milk rice, a popular local dish. Different types of curries are also made using it, especially white curries with coconut milk.
Europe
Italy
In Southern Italy and Sicily, the variety Lagenaria siceraria var. longissima, called zucca da vino, zucca bottiglia, or cucuzza, is grown and used in soup or along with pasta.
In Sicily, mostly in the Palermo area, a traditional soup called "Minestra di Tenerumi" is made with the tender leaves of var. Longissima along with peeled tomato and garlic. The young leaves are themselves called "tenerumi", and Lagenaria in Sicily is cultivated both professionally and in home orchards mostly to use the leaves as a vegetable, the fruit being treated almost as a secondary product.
It is also grown by the Italian diaspora.
Cultural uses
Africa
Hollowed-out and dried calabashes are a very typical utensil in households across West Africa. They are used to clean rice, carry water, and as food containers. Smaller sizes are used as bowls to drink palm wine. Calabashes are used in making the West African instruments like the Ṣẹ̀kẹ̀rẹ̀, a Yoruba instrument similar to a maraca, kora (a harp-lute), xalam/ngoni (a lute) and the goje (a traditional fiddle). They also serve as resonators underneath the balafon (West African marimba). The calabash is also used in making the shegureh (a Sierra Leonean women's rattle) and balangi (a Sierra Leonean type of balafon) musical instruments. Sometimes large calabashes are simply hollowed, dried and used as percussion instruments by striking them, especially by Fulani, Songhai, Gur-speaking and Hausa peoples. In Nigeria the calabash has been used by some motorcyclists as an imitation helmet in an attempt to circumvent motorcycle helmet laws. In South Africa it is commonly used as a drinking vessel and a vessel for carrying food by communities, such as the Bapedi and AmaZulu. Erbore children of Ethiopia wear hats made from the calabash to protect them from the sun. South Africa's FNB Stadium, which hosted the 2010 FIFA World Cup, is known as The Calabash as its shape takes inspiration from the calabash. The calabash is also used in the manufacture of puppets.
The calabash also has large cultural significance. In many African legends, calabashes (commonly referred to as gourds) are presented as vessels for knowledge and wisdom.
China
The húlu (葫芦/葫蘆), as the calabash is called in Mandarin Chinese, is an ancient symbol for health. Hulu had fabled healing properties due to doctors in former times carrying medicine inside it. The hulu was believed to absorb negative, earth-based qi (energy) that would otherwise affect health, and is a traditional Chinese medicine cure. The bottle gourd is a symbol of the Eight Immortals, and particularly Li Tieguai, who is associated with medicine. Li Tieguai's gourd was said to carry medicine that could cure any illness and never emptied, which he dispensed to the poor and needy. Some folk myths say the "gourd had spirals of smoke ascend from it, denoting his power of setting his spirit free from his body," and that it "served as a bedroom for the night..." The gourd is also an attribute of the deity Shouxing and a symbol of longevity.
Dried calabashes were also used as containers for liquids, often liquors or medicines. Calabash gourds were also grown in earthen molds to form different shapes with imprinted floral or arabesque designs. Molded gourds were also dried to house pet crickets. The texture of the gourd lends itself nicely to the sound of the insect, much like a musical instrument. The musical instrument hulusi is a kind of flute made from the gourd.
Jewish culture
In Sephardi Jewish culture, the gourd is eaten during Rosh Hashana (the Jewish New Year). According to the texts, the gourd is eaten as a symbol of tearing apart the enemies who may come and attack. It is called Qaraa, which in Hebrew means "torn" (קרע).
"שיקרעו אויבנו מעלינו" meaning "may our enemies be torn apart over from us".
Polynesia
The plant is spread throughout Polynesia and is known as hue in many related languages.
In Hawaii the word "calabash" refers to a large serving bowl, usually made from hardwood rather than from the calabash gourd, which is used on a buffet table or in the middle of the dining table. The use of the calabash in Hawaii has led to terms like "calabash family" or "calabash cousins", indicating an extended family grown up around shared meals and close friendships. This gourd is often dried when ripe and used as a percussion instrument called an ipu heke (double gourd drum) or just Ipu in contemporary and ancient hula.
The Māori people of New Zealand grew several cultivars of calabash for particular uses, such as ipu kai cultivars as food containers and tahā wai cultivars as water gourds. They regarded the gourd as a representation of Pū-tē-hue, one of the offspring of Tāne (their god of forests). Several types of taonga pūoro (musical instruments) are made from gourds, including types of flute (ororuarangi, kōauau ponga ihu) and shakers (hue rarā, hue puruwai).
India
The calabash is used as a resonator in many string instruments in India. Instruments that look like guitars are made of wood, but can have a calabash resonator at the end of the strings table, called a toomba. The sitar, the surbahar, and the tanpura (called tanpura in northern India and tambura in the south) may have a toomba. In some cases, the toomba may not be functional, but if the instrument is large, it is retained because of its balance function, as is the case with the Saraswati veena. Other instruments, like the rudra veena and vichitra veena, have two large calabash resonators at both ends of the strings table. The gopichand, an instrument used by the Baul singers of Bengal, is made out of a calabash. The practice is also common among Buddhist and Jain sages.
These toombas are made of dried calabash gourds, using special cultivars that were originally imported from Africa and Madagascar. They are mostly grown in Bengal and near Miraj, Maharashtra. These gourds are valuable items and they are carefully tended; for example, they are sometimes given injections to stop worms and insects from making holes in them while they are drying.
Hindu ascetics (sadhu) traditionally use a dried gourd vessel called the kamandalu. The juice of a bottle gourd is considered to have medicinal properties and be very healthy (see juice toxicity above).
In parts of India a dried, unpunctured gourd is used as a float (called surai-kuduvai in Tamil) to help people learn to swim in rural areas.
Philippines
In the Philippines, dried calabash gourds are one common material for making a traditional salakot hat.
In 2012, Teófilo García of Abra in Luzon, an expert artisan who makes the Ilocano tamburaw variant using calabash, was awarded by the National Commission for Culture and the Arts with the "Gawad sa Manlilikha ng Bayan" (National Living Treasures Award). He was cited for his dedication to practising and teaching the craft as an intangible cultural heritage of the Philippines under the Traditional Craftsmanship category.
New Guinea
Among some New Guinea highland tribes, the calabash is used by men as a penis sheath.
South America
In Argentina, Uruguay, Paraguay, Chile and southern Brazil, calabash gourds are dried and carved into mates (from the Quichua word mathi, adopted into the Spanish language), the traditional container for mate, the caffeinated, tea-like drink brewed from the yerba mate plant. In the region, both the beverage itself and the calabash from which the drinking vessels are made are called mate. In Peru it is used in a popular practice for the making of mate burilado; "burilado" is the technique adopted for decorating the mate calabashes.
In Peru, Bolivia and Ecuador, calabash gourds are used for medicinal purposes. The Inca culture applied symbols from folklore to gourds; this practice is still familiar and valued.
North America
The calabash's watertight qualities meant it was often used as a container to ship seeds during the transatlantic slave trade. Calabashes were also used by enslaved people to carry seeds for planting on plantation fields. On plantations that held enslaved African Americans, the calabash symbolized freedom, as alluded to in the song "Follow the Drinking Gourd", which referenced the Big Dipper constellation used to guide travelers on the Underground Railroad.
Other uses
Tobacco smoking pipe
The gourd can be dried and used to smoke pipe tobacco. According to American consular reports from the early 20th century calabash pipes were commonly used in South Africa. Calabash was said to bestow a "special softness" of flavor that could not be duplicated by other materials. The lining was made of meerschaum, though tin was used for low-grade models. A typical design yielded by this squash is recognized (theatrically) as the pipe of Sherlock Holmes, but the inventor of this character, Sir Arthur Conan Doyle, never mentioned Holmes using a calabash pipe. It was the preferred pipe for stage actors portraying Holmes, because they could balance this pipe better than other styles while delivering their lines.
Enema equipment
The gourd is used traditionally to administer enemas. Along the upper Congo River an enema apparatus is made by making a hole in one end of the gourd for filling it, and using a resin to attach a hollow cane to the gourd's neck.
| Biology and health sciences | Botanical fruits used as culinary vegetables | Plants |
885374 | https://en.wikipedia.org/wiki/Vasco%20da%20Gama%20Bridge | Vasco da Gama Bridge | The Vasco da Gama Bridge () is a cable-stayed bridge flanked by viaducts that spans the Tagus River in Parque das Nações in Lisbon, the capital of Portugal.
It is the second longest bridge in Europe, after the Crimean Bridge, and the longest one in the European Union. It was built to alleviate the congestion on Lisbon's 25 de Abril Bridge, and eliminate the need for traffic between the country's northern and southern regions to pass through the capital city.
Construction began in February 1995; the bridge was opened to traffic on 29 March 1998, just in time for Expo 98, the World's Fair that celebrated the 500th anniversary of the discovery by Vasco da Gama of the sea route from Europe to India.
Along with the 25 de Abril Bridge, the Vasco da Gama is one of two bridges that span the Tagus River in Lisbon.
Description
The bridge carries six road lanes, with a speed limit of , the same as that on motorways, except on one section which is limited to . On windy, rainy, and foggy days, the speed limit is reduced to . The number of road lanes will be enlarged to eight when traffic reaches a daily average of 52,000.
Bridge and access road sections
North access roads:
North viaduct:
Expo viaduct: ; 12 sections
Main bridge: main span: ; side spans: each (total length: ); cement pillars: -high; free height for navigation in high tides: ;
Central viaduct: ; 80 pre-fabricated sections -long; 81 pillars up to -deep; height from to
South viaduct: ; 84 sections; 85 pillars
South access roads: ; includes the toll plaza (18 gates) and two service areas
Construction and cost
The $1.1 billion project was split into four parts, each built by a different company, and supervised by an independent consortium. There were up to 3,300 workers simultaneously on the project, which took 18 months of preparation and 18 months of construction. The financing is via a build-operate-transfer system by Lusoponte, a private consortium that receives the first 40 years of tolls for both Lisbon bridges. Lusoponte's capital is 50.4% from Portuguese companies, 24.8% from French, and 24.8% from British.
The bridge has a life expectancy of 120 years, having been designed to withstand wind speeds of and hold up to an earthquake 4.5 times greater than the standards of building resistance in Lisbon. The deepest foundation piles, up to in diameter, were driven down to under mean sea level. Environmental pressure throughout the project resulted in the left-bank viaducts being extended inland to preserve the marshes underneath, as well as the lamp posts throughout the bridge being tilted inwards so as not to cast light on the river below.
Toll
Northbound traffic (towards Lisbon) is charged a toll, while southbound travel is free. Tolls are collected at a toll plaza located on the south bank of the Tagus, near Montijo. As of 2024, bridge tolls range from €3.20 (passenger cars) to €13.55 (trucks).
| Technology | Bridges | null |
885375 | https://en.wikipedia.org/wiki/Lost-wax%20casting | Lost-wax casting | Lost-wax casting, also called investment casting, precision casting, or cire perdue (borrowed from French), is the process by which a duplicate sculpture (often a metal, such as silver, gold, brass, or bronze) is cast from an original sculpture. Intricate works can be achieved by this method.
The oldest known examples of this technique are approximately 6,500 years old (4550–4450 BC) and attributed to gold artefacts found at Bulgaria's Varna Necropolis. A copper amulet from Mehrgarh, Indus Valley civilization, in Pakistan, is dated to circa 4,000 BC. Cast copper objects, found in the Nahal Mishmar hoard in southern Israel, which belong to the Chalcolithic period (4500–3500 BC), are estimated, from carbon-14 dating, to date to circa 3500 BC. Other examples from somewhat later periods are from Mesopotamia in the third millennium BC. Lost-wax casting was widespread in Europe until the 18th century, when a piece-moulding process came to predominate.
The steps used in casting small bronze sculptures are fairly standardized, though the process today varies from foundry to foundry (in modern industrial use, the process is called investment casting). Variations of the process include: "lost mould", which recognizes that materials other than wax can be used (such as tallow, resin, tar, and textile); and "waste wax process" (or "waste mould casting"), because the mould is destroyed to remove the cast item.
Process
Casts can be made of the wax model itself, the direct method, or of a wax copy of a model that need not be of wax, the indirect method. These are the steps for the indirect process (the direct method starts at step 7):
Model-making. An artist or mould-maker creates an original model from wax, clay, or another material. Wax and oil-based clay are often preferred because these materials retain their softness.
Mouldmaking. A mould is made of the original model or sculpture. The rigid outer moulds contain the softer inner mould, which is the exact negative of the original model. Inner moulds are usually made of latex, polyurethane rubber or silicone, which is supported by the outer mould. The outer mould can be made from plaster, but can also be made of fiberglass or other materials. Most moulds are made of at least two pieces, and a shim with keys is placed between the parts during construction so that the mould can be put back together accurately. If there are long, thin pieces extending out of the model, they are often cut off of the original and moulded separately. Sometimes many moulds are needed to recreate the original model, especially for large models.
Wax. Once the mould is finished, molten wax is poured into it and swished around until an even coating, usually about 3 mm ( inch) thick, covers the inner surface of the mould. This is repeated until the desired thickness is reached. Another method is to fill the entire mould with molten wax and let it cool until a desired thickness has set on the surface of the mould. After this the rest of the wax is poured out again, the mould is turned upside down and the wax layer is left to cool and harden. With this method it is more difficult to control the overall thickness of the wax layer.
Removal of wax. This hollow wax copy of the original model is removed from the mould. The model-maker may reuse the mould to make multiple copies, limited only by the durability of the mould.
Chasing. Each hollow wax copy is then "chased": a heated metal tool is used to rub out the marks that show the parting line or flashing where the pieces of the mould came together. The wax is dressed to hide any imperfections. The wax now looks like the finished piece. Wax pieces that were moulded separately can now be heated and attached; foundries often use registration marks to indicate exactly where they go.
Spruing. The wax copy is sprued with a treelike structure of wax that will eventually provide paths for the molten casting material to flow and for air to escape. The carefully planned spruing usually begins at the top with a wax "cup," which is attached by wax cylinders to various points on the wax copy. The spruing does not have to be hollow, as it will be melted out later in the process.
Slurry. A sprued wax copy is dipped into a slurry of silica, then into a sand-like stucco, or dry crystalline silica of a controlled grain size. The slurry and grit combination is called ceramic shell mould material, although it is not literally made of ceramic. This shell is allowed to dry, and the process is repeated until at least a half-inch coating covers the entire piece. The bigger the piece, the thicker the shell needs to be. Only the inside of the cup is not coated, and the cup's flat top serves as the base upon which the piece stands during this process. The core is also filled with fire-proof material.
Burnout. The ceramic shell-coated piece is placed cup-down in a kiln, whose heat hardens the silica coatings into a shell, and the wax melts and runs out. The melted wax can be recovered and reused, although it is often simply burned up. Now all that remains of the original artwork is the negative space formerly occupied by the wax, inside the hardened ceramic shell. The feeder, vent tubes and cup are also now hollow.
Testing. The ceramic shell is allowed to cool, then is tested to see if water will flow freely through the feeder and vent tubes. Cracks or leaks can be patched with thick refractory paste. To test the thickness, holes can be drilled into the shell, then patched.
Pouring. The shell is reheated in the kiln to harden the patches and remove all traces of moisture, then placed cup-upward into a tub filled with sand. Metal is melted in a crucible in a furnace, then poured carefully into the shell. The shell has to be hot because otherwise the temperature difference would shatter it. The filled shells are then allowed to cool.
Release. The shell is hammered or sand-blasted away, releasing the rough casting. The sprues, which are also faithfully recreated in metal, are cut off, the material to be reused in another casting.
Metal-chasing. Just as the wax copies were chased, the casting is worked until the telltale signs of the casting process are removed, so that the casting now looks like the original model. Pits left by air bubbles in the casting and the stubs of the spruing are filed down and polished.
Prior to silica-based casting moulds, these moulds were made of a variety of other fire-proof materials, most commonly plaster-based (with added grout) or clay-based. Prior to rubber moulds, gelatine was used.
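A practical planning step that accompanies this process is estimating how much metal to melt from the weight of the sprued wax, using the ratio of the metal's density to the wax's density. The sketch below is a minimal illustration under assumed, approximate density values; the function name and its safety margin are hypothetical conventions, not figures from any particular foundry.

```python
# A rough planning aid, not part of the casting sequence itself: foundries
# commonly size a pour by weighing the sprued wax and scaling by the ratio of
# the metal's density to the wax's density. Densities below are approximate,
# and the 10% safety margin is an arbitrary illustrative choice.

WAX_DENSITY = 0.95        # g/cm^3, typical casting wax (approximate)
METAL_DENSITY = {          # g/cm^3, approximate values
    "bronze": 8.7,
    "silver": 10.4,
    "gold_18k": 15.5,
}

def metal_needed(wax_weight_g: float, metal: str, margin: float = 1.1) -> float:
    """Estimate grams of metal to melt for a sprued wax of the given weight."""
    return wax_weight_g * (METAL_DENSITY[metal] / WAX_DENSITY) * margin

print(round(metal_needed(12.0, "bronze")))  # ~121 g of bronze for a 12 g wax
```

The same density-ratio reasoning applies to the other "lost" materials mentioned above (tallow, resin, foam), with the corresponding density substituted for that of wax.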
Jewellery and small parts
The methods used for small parts and jewellery vary somewhat from those used for sculpture. A wax model is obtained either from injection into a rubber mould or by being custom-made by carving. The wax or waxes are sprued and fused onto a rubber base, called a "sprue base". Then a metal flask, which resembles a short length of steel pipe that ranges roughly from 3.5 to 15 centimeters tall and wide, is put over the sprue base and the waxes. Most sprue bases have a circular rim which grips the standard-sized flask, holding it in place. Investment (refractory plaster) is mixed and poured into the flask, filling it. It hardens, then is burned out as outlined above. Casting is usually done straight from the kiln either by centrifugal casting or vacuum casting.
The lost-wax process can be used with any material that can burn, melt, or evaporate to leave a mould cavity. Some automobile manufacturers use a lost-foam technique to make engine blocks. The model is made of polystyrene foam, which is placed into a casting flask, consisting of a cope and drag, which is then filled with casting sand. The foam supports the sand, allowing shapes that would be impossible if the process had to rely on the sand alone. The metal is poured in, vaporizing the foam with its heat.
In dentistry, gold crowns, inlays and onlays are made by the lost-wax technique. Application of the lost-wax technique to the fabrication of cast inlays was first reported by Taggart. A typical gold alloy is about 60% gold and 28% silver, with copper and other metals making up the rest. Careful attention to tooth preparation, impression taking and laboratory technique is required to make this type of restoration a success. Dental laboratories make other items this way as well.
Textiles
In this process, the wax and the textile are both replaced by the metal during the casting process, whereby the fabric reinforcement allows for a thinner model, and thus reduces the amount of metal expended in the mould. Evidence of this process is seen by the textile relief on the reverse side of objects and is sometimes referred to as "lost-wax, lost textile". This textile relief is visible on gold ornaments from burial mounds in southern Siberia of the ancient horse riding tribes, such as the distinctive group of openwork gold plaques housed in the Hermitage Museum, Saint Petersburg. The technique may have its origins in the Far East, as indicated by the few Han examples, and the bronze buckle and gold plaques found at the cemetery at Xigou. Such a technique may also have been used to manufacture some Viking Age oval brooches, indicated by numerous examples with fabric imprints such as those of Castletown (Scotland).
Glass sculptures
The lost-wax casting process may also be used in the production of cast glass sculptures. The original sculpture is made from wax. The sculpture is then covered with mold material (e.g., plaster), except for the bottom of the mold which must remain open. When the mold has hardened, the encased sculpture is removed by applying heat to the bottom of the mold. This melts out the wax (the wax is 'lost') and destroys the original sculpture. The mold is then placed in a kiln upside down with a funnel-like cup on top that holds small chunks of glass. When the kiln is brought up to temperature (1450-1530 degrees Fahrenheit), the glass chunks melt and flow down into the mold. Annealing time is usually 3–5 days, and total kiln time is 5 or more days. After the mold is removed from the kiln, the mold material is removed to reveal the sculpture inside.
Archaeological history
Black Sea
Cast gold knucklebones, beads, and bracelets, found in graves at Bulgaria's Varna Necropolis, have been dated to approximately 6500 years BP. They are believed to be both some of the oldest known manufactured golden objects, and the oldest objects known to have been made using lost wax casting.
Middle East
Some of the oldest known examples of the lost-wax technique are the objects discovered in the Nahal Mishmar hoard in southern Land of Israel, and which belong to the Chalcolithic period (4500–3500 BC). Conservative Carbon-14 estimates date the items to around 3700 BC, making them more than 5700 years old.
Near East
In Mesopotamia, from –2750 BC, the lost-wax technique was used for small-scale, and then later large-scale copper and bronze statues. One of the earliest surviving lost-wax castings is a small lion pendant from Uruk IV. Sumerian metalworkers were practicing lost-wax casting from approximately –3200 BC. Much later examples from northeastern Mesopotamia/Anatolia include the Great Tumulus at Gordion (late 8th century BC), as well as other types of Urartian cauldron attachments.
South Asia
The oldest known example of applying the lost-wax technique to copper casting comes from a 6,000-year-old () copper, wheel-shaped amulet found at Mehrgarh, Pakistan.
Metal casting by the Indus Valley civilization produced some of the earliest known examples of lost-wax casting applied to copper alloys: a bronze figurine found at Mohenjo-daro and named the "dancing girl" is dated to 2300-1750 . Other examples include the buffalo, bull and dog found at Mohenjodaro and Harappa, two copper figures found at the Harappan site Lothal in the district of Ahmedabad of Gujarat, and likely a covered cart with wheels missing and a complete cart with a driver found at Chanhudaro.
During the post-Harappan period, hoards of copper and bronze implements made by the lost-wax process are known from Tamil Nadu, Uttar Pradesh, Bihar, Madhya Pradesh, Odisha, Andhra Pradesh and West Bengal. Gold and copper ornaments, apparently Hellenistic in style, made by cire perdue were found at the ruins at Sirkap. One example of this Indo-Greek art is the juvenile figure of Harpocrates excavated at Taxila. Bronze icons were produced during the 3rd and 4th centuries, such as the Buddha image at Amaravati, and the images of Rama and Kartikeya in the Guntur district of Andhra Pradesh. A further two bronze images of Parsvanatha and a small hollow-cast bull came from Sahribahlol, Gandhara, and a standing Tirthankara () from Chausa in Bihar should be mentioned here as well. Other notable bronze figures and images have been found in Rupar, Mathura (in Uttar Pradesh) and Brahmapura, Maharashtra.
Gupta and post-Gupta period bronze figures have been recovered from the following sites: Sarnath, Mirpur-Khas (in Pakistan), Sirpur (District of Raipur), Balaighat (near Mahasthan, now in Bangladesh), Akota (near Vadodara, Gujarat), Vasantagadh, Chhatarhi, Barmer and Chambi (in Rajasthan). The bronze casting technique and the making of bronze images of traditional icons reached a high stage of development in South India during the medieval period. Although bronze images were modelled and cast during the Pallava Period in the eighth and ninth centuries, some of the most beautiful and exquisite statues were produced during the Chola Period in Tamil Nadu from the tenth to the twelfth century. The technique and art of fashioning bronze images is still skillfully practised in South India, particularly in Kumbakonam. The distinguished patron during the tenth century was the widowed Chola queen, Sembiyan Maha Devi. Chola bronzes are among the most sought-after collectors' items for art lovers all over the world. The technique was used throughout India, as well as in the neighbouring countries of Nepal, Tibet, Ceylon, Burma and Siam.
Southeast Asia
The inhabitants of Ban Na Di were casting bronze from to 200 AD, using the lost-wax technique to manufacture bangles. Bangles made by the lost-wax process are characteristic of northeast Thailand. Some of the bangles from Ban Na Di revealed a dark grey substance between the central clay core and the metal, which on analysis was identified as an unrefined form of insect wax. It is likely that decorative items, like bracelets and rings, were made by cire perdue at Non Nok Tha and Ban Chiang. There are technological and material parallels between northeast Thailand and Vietnam concerning the lost-wax technique. The sites exhibiting artifacts made by the lost-mould process in Vietnam, such as the Dong Son drums, come from the Dong Son, and Phung Nguyen cultures, such as one sickle and the figure of a seated individual from Go Mun (near Phung Nguyen, the Bac Bo Region), dating to the Go Mun phase (end of the General B period, up until the 7th century BC).
West Africa
Cast bronzes are known to have been produced in Africa by the 9th century AD in Igboland (Igbo-Ukwu) in Nigeria, the 12th century AD in Yorubaland (Ife) and the 15th century AD in the kingdom of Benin. Some portrait heads remain.
Benin mastered bronze casting during the 16th century, producing portraiture and reliefs in the metal using the lost-wax process.
Egypt
The Egyptians were practicing cire perdue from the mid-3rd millennium BC, shown by Early Dynastic bracelets and gold jewellery. Inserted spouts for ewers (copper water vessels) from the Fourth Dynasty (Old Kingdom) were made by the lost-wax method. Hollow castings, such as the Louvre statuette from the Fayum find, appeared during the Middle Kingdom, followed by solid cast statuettes (like the squatting, nursing mother, in Brooklyn) of the Second Intermediate/Early New Kingdom. The hollow casting of statues is represented in the New Kingdom by the kneeling statue of Tuthmosis IV (British Museum, London) and the head fragment of Ramesses V (Fitzwilliam Museum, Cambridge). Hollow castings became more detailed and continued into the Eighteenth Dynasty, shown by the black bronze kneeling figure of Tutankhamun (Museum of the University of Pennsylvania). Cire perdue was used for mass production during the Late Period to Graeco-Roman times, when figures of deities were cast for personal devotion and votive temple offerings. Nude female-shaped handles on bronze mirrors were cast by the lost-wax process.
Mediterranean
The lost-wax technique came to be known in the Mediterranean during the Bronze Age. It was a major metalworking technique utilized in the ancient Mediterranean world, notably during the Classical period of Greece for large-scale bronze statuary and in the Roman world.
Direct imitations and local derivations of Oriental, Syro-Palestinian and Cypriot figurines are found in Late Bronze Age Sardinia, with a local production of figurines from the 11th to 10th century BC. The cremation graves (mainly 8th-7th centuries BC, but continuing until the beginning of the 4th century) from the necropolis of Paularo (Italian Oriental Alps) contained fibulae, pendants and other copper-based objects that were made by the lost-wax process. Etruscan examples, such as the bronze anthropomorphic handle from the Bocchi collection (National Archaeological Museum of Adria), dating back to the 6th to 5th centuries BC, were made by cire perdue. Most of the handles in the Bocchi collection, as well as some bronze vessels found in Adria (Rovigo, Italy) were made using the lost-wax technique. The better known lost-wax produced items from the classical world include the "Praying Boy" (in the Berlin Museum), the statue of Hera from Vulci (Etruria), which, like most statues, was cast in several parts which were then joined. Geometric bronzes such as the four copper horses of San Marco (Venice, probably 2nd century) are other prime examples of statues cast in many parts.
Examples of works made using the lost-wax casting process in Ancient Greece largely are unavailable due to the common practice in later periods of melting down pieces to reuse their materials. Much of the evidence for these products come from shipwrecks. As underwater archaeology became feasible, artifacts lost to the sea became more accessible. Statues like the Artemision Bronze Zeus or Poseidon (found near Cape Artemision), as well as the Victorious Youth (found near Fano), are two such examples of Greek lost-wax bronze statuary that were discovered underwater.
Some Late Bronze Age sites in Cyprus have produced cast bronze figures of humans and animals. One example is the male figure found at Enkomi. Three objects from Cyprus (held in the Metropolitan Museum of Art in New York) were cast by the lost-wax technique from the 13th and 12th centuries BC, namely, the amphorae rim, the rod tripod, and the cast tripod.
Other, earlier examples that show this assembly of lost-wax cast pieces include the bronze head of the Chatsworth Apollo and the bronze head of Aphrodite from Satala (Turkey) from the British Museum.
East Asia
There is great variability in the use of the lost-wax method in East Asia. The casting method used to make bronzes until the early phase of the Eastern Zhou (770-256 ) was almost invariably the section-mold process. Starting from around 600 , there was an unmistakable rise of lost-wax casting in the central plains of China, first witnessed in the Chu cultural sphere.
Further investigations have revealed this not to be the case, as it is clear that the piece-mould casting method was the principal technique used to manufacture bronze vessels in China. The lost-wax technique did not appear in northern China until the 6th century BC. Lost-wax casting is known as rōgata in Japanese, and dates back to the Yayoi period. The most famous piece made by cire perdue is the bronze image of Buddha in the temple of the Todaiji monastery at Nara. It was made in sections between 743 and 749, allegedly using seven tons of wax.
Northern Europe
The Dunaverney (1050–910 BC) and Little Thetford (1000–701 BC) flesh-hooks have been shown to be made using a lost-wax process. The Little Thetford flesh-hook, in particular, employed distinctly inventive construction methods. The intricate Gloucester Candlestick (1104–1113 AD) was made as a single-piece wax model, then given a complex system of gates and vents before being invested in a mould.
Americas
The lost-wax casting tradition was developed by the peoples of Nicaragua, Costa Rica, Panama, Colombia, northwest Venezuela, Andean America, and the western portion of South America. Lost-wax casting produced some of the region's typical gold wire and delicate wire ornament, such as fine ear ornaments. The process was employed in prehispanic times in Colombia's Muisca and Sinú cultural areas. Two lost-wax moulds, one complete and one partially broken, were found in a shaft and chamber tomb in the vereda of Pueblo Tapado in the municipio of Montenegro (Department of Quindío), dated roughly to the pre-Columbian period. The lost-wax method did not appear in Mexico until the 10th century, and was thereafter used in western Mexico to make a wide range of bell forms.
Literary history
Indirect evidence
Some early literary works allude to lost-wax casting. Columella, a Roman writer of the 1st century AD, mentions the processing of wax from beehives in De Re Rustica, perhaps for casting, as does Pliny the Elder, who details a sophisticated procedure for making Punic wax. One Greek inscription refers to the payment of craftsmen for their work on the Erechtheum in Athens (408/7–407/6 BC). Clay-modellers may use clay moulds to make terracotta negatives for casting or to produce wax positives. Pliny portrays as a well-reputed ancient artist producing bronze statues, and describes Lysistratos of Sikyon, who takes plaster casts from living faces to create wax casts using the indirect process.
Many bronze statues or parts of statues in antiquity were cast using the lost wax process. Theodorus of Samos is commonly associated with bronze casting. Pliny also mentions the use of lead, which is known to help molten bronze flow into all areas and parts of complex moulds. Quintilian documents the casting of statues in parts, whose moulds may have been produced by the lost wax process. Scenes on the early-5th century BC Berlin Foundry Cup depict the creation of bronze statuary working, probably by the indirect method of lost-wax casting.
Direct evidence
India
The lost-wax method is well documented in ancient Indian literary sources. The Shilpa Shastras, a text from the Gupta Period (–550 AD), contains detailed information about casting images in metal. The 5th-century AD Vishnusamhita, an appendix to the Vishnu Purana, refers directly to the modeling of wax for making metal objects in chapter XIV: "if an image is to be made of metal, it must first be made of wax." Chapter 68 of the ancient Sanskrit text Mānasāra Silpa details casting idols in wax and is entitled Maduchchhista Vidhānam, or the "lost wax method". The 12th century text Mānasollāsa, allegedly written by King Someshvara III of the Western Chalukya Empire, also provides detail about lost-wax and other casting processes.
In a 16th-century treatise, the Uttarabhaga of the Śilparatna written by Srïkumāra, verses 32 to 52 of Chapter 2 ("Linga Lakshanam"), give detailed instructions on making a hollow casting.
Theophilus
The early medieval writer Theophilus Presbyter, believed to be the Benedictine monk and metalworker Roger of Helmarshausen, wrote a treatise in the early-to-mid-12th century that includes original work and copied information from other sources, such as the Mappae clavicula and Eraclius' De coloribus et artibus Romanorum. It provides step-by-step procedures for making various articles, some by lost-wax casting: "The Copper Wind Chest and Its Conductor" (Chapter 84); "Tin Cruets" (Chapter 88), and "Casting Bells" (Chapter 85), which call for using "tallow" instead of wax; and "The Cast Censer". In Chapters 86 and 87, Theophilus details how to divide the wax into differing ratios before moulding and casting to achieve accurately tuned small musical bells. The 16th-century Florentine sculptor Benvenuto Cellini may have used Theophilus' writings when he cast his bronze Perseus with the Head of Medusa.
America
The writer Raleigh (1596), in a brief account, refers to Aztec casting.
Gallery
| Technology | Metallurgy | null |
885379 | https://en.wikipedia.org/wiki/Flying%20shuttle | Flying shuttle | The flying shuttle is a type of weaving shuttle. It was a pivotal advancement in the mechanisation of weaving during the initial stages of the Industrial Revolution, and it facilitated the weaving of considerably wider fabrics. Moreover, its mechanical implementation paved the way for the introduction of automatic machine looms.
Invented by John Kay, the flying shuttle was patented in 1733. Its implementation accelerated the previously manual weaving process and significantly reduced the required labour force. Formerly, a broad-cloth loom required a weaver on each side, but with the advent of the flying shuttle, a single operator could handle the task. Prior to this breakthrough, the textile industry relied on the coordination of four spinners to support a single weaver.
The widespread adoption of the flying shuttle by the 1750s dramatically exacerbated this labour imbalance, marking a notable shift in textile production dynamics.
History
The history of this device is difficult to ascertain accurately due to poor documentation at the time. Nonetheless, there are two general schools of thought: first, those who believe it was invented in the Languedoc region of southern France (one year before its introduction in England) but was destroyed by state cloth inspectors of the rent-seeking Ancien Régime; second, those who believe it simply originated where it was industrialized, that is, in England.
Operation
In a typical frame loom, as used previous to the invention of the flying shuttle, the operator sat with the newly woven cloth before them, using treadles or some other mechanism to raise and lower the heddles, which opened the shed in the warp threads. They then had to reach forward while holding the shuttle in one hand and pass this through the shed; the shuttle carried a bobbin for the weft. The shuttle then had to be caught in the other hand, the shed closed, and the beater pulled in against the fell to push the weft into place. This action (called a "pick") required regularly bending forward over the fabric.
More importantly, the coordination between the throwing and catching of the shuttle required that the weaver was weaving narrow cloth (typically or less). If the loom was for weaving broad cloth multiple weavers were needed: one on the left side at the shed, and one on the right side at the shed (and sometimes, one to operate the treadles). These two reached across the loom, passing the shuttle back and forth through the shed.
The flying shuttle employs a smooth board, called the "race," which runs, side to side, along the front of the beater, forming a track on which the shuttle runs. The lower threads of the shed rest on the track and the shuttle slides over them. At each end of the race, there is a box which catches the shuttle at the end of its journey, and which contains a mechanism for propelling the shuttle on its return trip (which may be yanked into action by the cord from the handheld picking-stick, or fully automated)
The shuttle itself has some subtle differences from the older form, especially for automated and powered looms. The ends of the shuttle are often bullet-shaped and metal-capped, and the shuttle generally has rollers to reduce friction. The weft thread is made to exit from the end rather than the side, and the thread is stored on a pirn (a long, conical, one-ended, non-turning bobbin) to allow it to feed more easily. Finally, the flying shuttle is generally somewhat heavier, so as to have sufficient momentum to carry it all the way through the shed.
Social effects
The increase in production due to the flying shuttle exceeded the capacity of the spinning industry of the day and prompted the development of powered spinning machines, beginning with the spinning jenny and the water frame and ultimately culminating in the spinning mule, which could produce strong, fine thread in the quantities needed. These innovations transformed the textile industry in Great Britain. The flying shuttle was seen as a threat to the livelihood of spinners and weavers, which resulted in an uprising, and Kay's patent was largely ignored. It is often incorrectly written that Kay was attacked and fled to France, but in fact he simply moved there to attempt to rent out his looms, a business model that had failed him in England.
The flying shuttle introduced a new source of injury into the weaving process: if deflected from its path, it could be shot clear of the machine, potentially striking and injuring workers. Turn-of-the-century injury reports abound with instances in which eyes were lost or other injuries sustained, and in several instances (for example, an extended exchange in 1901) the British House of Commons was moved to take up the issue of installing guards and other contrivances to reduce these injuries.
Obsolescence
The flying shuttle dominated commercial weaving through the middle of the twentieth century. However, by that time, other systems had begun to replace it. The heavy shuttle was noisy and energy-inefficient (since the energy used to throw it was largely lost in the catching); also, its inertia limited the speed of the loom. Projectile and rapier looms eliminated the need to take the bobbin/pirn of thread through the shed; later, air- and water-jet looms reduced the weight of moving parts further. Flying shuttle looms are still used for some purposes, and old models remain in use.
| Technology | Weaving | null |
885435 | https://en.wikipedia.org/wiki/Water%20frame | Water frame | The water frame is a spinning frame that is powered by a water-wheel.
History
Richard Arkwright, who patented the technology in 1769, designed a model for the production of cotton thread, which was first used in 1765. The Arkwright water frame was able to spin 96 threads at a time, which was an easier and faster method than ever before. The design was partly based on a spinning machine built for Thomas Highs by clockmaker John Kay, who was hired by Arkwright. Being run on water power, it produced stronger and harder yarn than the "spinning jenny", and propelled the adoption of the modern factory system.
Another water-powered frame for the production of textiles was developed in 1760 in the early industrialized town of Elberfeld, Prussia (now in Wuppertal, Germany), by German bleach plant owner Johann Heinrich Bockmühl.
The name water frame is derived from the use of a water wheel to drive a number of spinning frames. The water wheel provided more power to the spinning frame than human operators, reducing the amount of human labor needed and increasing the spindle count dramatically. However, unlike the spinning jenny, the water frame could spin only one thread at a time until 1779, when Samuel Crompton combined the two inventions into his spinning mule, which was more effective.
The water frame was originally powered by horses at a factory built by Arkwright and partners in Nottingham. In 1770, Arkwright and his partners built a water-powered mill in Cromford, Derbyshire.
Cromford
In 1771, Arkwright installed the water frame in his cotton mill at Cromford, Derbyshire, on the River Derwent, creating one of the first factories that was specifically built to house machinery rather than just bring workers together. It was one of the first instances of the working day being determined by the clock instead of the daylight hours and of people being employed rather than just contracted. In its final form, combined with his carding machine, it was the first factory to use a continuous process from raw material to finished product in a series of operations.
Arkwright played a significant role in the development of the factory system as he combined water power, the water frame, and continuous production with modern employment practices.
International success
The water frame played a significant role in the development of the Industrial Revolution – first in England, but soon also in continental Europe after German entrepreneur Johann Gottfried Brügelmann managed to find out details of the technology, which had been kept very secret; disclosure of details was punishable by the death penalty. Brügelmann managed to build working water frames and used them to open the first spinning factory on the continent, built in 1783 in Ratingen and also named "Cromford", from where the technology spread over the world. The factory building today hosts a museum, which is the world's only place to see a functioning water frame.
Samuel Slater brought the water frame to America, circumventing the 1774 English ban on textile workers leaving the country by memorizing the details of its construction; he left for New York in 1789. Moses Brown and Slater partnered to create the Slater Mill in Pawtucket in 1793, the first water-powered mill to make thread in America.
| Technology | Spinning | null |
885651 | https://en.wikipedia.org/wiki/Stationary%20point | Stationary point | In mathematics, particularly in calculus, a stationary point of a differentiable function of one variable is a point on the graph of the function where the function's derivative is zero. Informally, it is a point where the function "stops" increasing or decreasing (hence the name).
For a differentiable function of several real variables, a stationary point is a point on the surface of the graph where all its partial derivatives are zero (equivalently, the gradient has zero norm).
The notion of stationary points of a real-valued function is generalized as critical points for complex-valued functions.
Stationary points are easy to visualize on the graph of a function of one variable: they correspond to the points on the graph where the tangent is horizontal (i.e., parallel to the x-axis). For a function of two variables, they correspond to the points on the graph where the tangent plane is parallel to the xy-plane.
The notion of a stationary point allows the mathematical description of an astronomical phenomenon that was unexplained before the time of Copernicus. A stationary point is the point in the apparent trajectory of a planet on the celestial sphere where the motion of the planet seems to stop before restarting in the other direction (see apparent retrograde motion). This occurs because of the projection of the planet's orbit onto the ecliptic circle.
Turning points
A turning point of a differentiable function is a point at which the derivative has an isolated zero and changes sign at the point. A turning point may be either a relative maximum or a relative minimum (also known as local minimum and maximum). A turning point is thus a stationary point, but not all stationary points are turning points. If the function is twice differentiable, the isolated stationary points that are not turning points are horizontal inflection points. For example, the function f(x) = x3 has a stationary point at x = 0, which is also an inflection point, but is not a turning point.
Classification
Isolated stationary points of a real valued function are classified into four kinds, by the first derivative test:
a local minimum (minimal turning point or relative minimum) is one where the derivative of the function changes from negative to positive;
a local maximum (maximal turning point or relative maximum) is one where the derivative of the function changes from positive to negative;
a rising point of inflection (or inflexion) is one where the derivative of the function is positive on both sides of the stationary point; such a point marks a change in concavity;
a falling point of inflection (or inflexion) is one where the derivative of the function is negative on both sides of the stationary point; such a point marks a change in concavity.
The first two options are collectively known as "local extrema". Similarly a point that is either a global (or absolute) maximum or a global (or absolute) minimum is called a global (or absolute) extremum. The last two options—stationary points that are not local extrema—are known as saddle points.
By Fermat's theorem, global extrema must occur (for a continuously differentiable function) on the boundary or at stationary points.
Curve sketching
Determining the position and nature of stationary points aids in curve sketching of differentiable functions. Solving the equation f'(x) = 0 returns the x-coordinates of all stationary points; the y-coordinates are trivially the function values at those x-coordinates.
The specific nature of a stationary point at x can in some cases be determined by examining the second derivative f''(x):
If f''(x) < 0, the stationary point at x is concave down; a maximal extremum.
If f''(x) > 0, the stationary point at x is concave up; a minimal extremum.
If f''(x) = 0, the nature of the stationary point must be determined by way of other means, often by noting a sign change around that point.
A more straightforward way of determining the nature of a stationary point is by examining the function values between the stationary points (if the function is defined and continuous between them).
A simple example of a point of inflection is the function f(x) = x3. There is a clear change of concavity about the point x = 0, and we can prove this by means of calculus. The second derivative of f is the everywhere-continuous f''(x) = 6x; at x = 0, f''(0) = 0, and the sign of f'' changes about this point. So x = 0 is a point of inflection.
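The procedure above (solve f'(x) = 0, then inspect f''(x), falling back to the sign of f'(x) on either side when the test is inconclusive) can be sketched in a few lines of code. The following example uses the SymPy library and an arbitrary sample function, f(x) = x3 − 3x, neither of which is mentioned in the article; it is intended only as an illustration of the classification logic.

```python
# Illustrative sketch (assumed example, not from the article): classify the
# stationary points of f(x) = x**3 - 3*x using SymPy symbolic derivatives.
import sympy as sp

x = sp.symbols('x', real=True)
f = x**3 - 3*x

f1 = sp.diff(f, x)       # first derivative f'
f2 = sp.diff(f, x, 2)    # second derivative f''

for p in sp.solve(sp.Eq(f1, 0), x):          # stationary points: f'(x) = 0
    curvature = f2.subs(x, p)
    if curvature > 0:
        kind = "local minimum"
    elif curvature < 0:
        kind = "local maximum"
    else:
        # Second derivative test inconclusive: check the sign of f' on
        # either side of the point (first derivative test).
        h = sp.Rational(1, 100)
        left, right = f1.subs(x, p - h), f1.subs(x, p + h)
        if left < 0 < right:
            kind = "local minimum"
        elif left > 0 > right:
            kind = "local maximum"
        else:
            kind = "horizontal point of inflection"
    print(f"x = {p}: {kind}")
# Prints: x = -1 is a local maximum, x = 1 is a local minimum.
```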
More generally, the stationary points of a real-valued function of several real variables are those points x0 where the derivative in every direction equals zero, or equivalently, the gradient is zero.
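In symbols, this condition can be written as follows (a standard restatement of the sentence above; the notation is not taken from the article itself):

```latex
\nabla f(x_0) = \left(\frac{\partial f}{\partial x_1}(x_0), \ldots, \frac{\partial f}{\partial x_n}(x_0)\right) = (0, \ldots, 0)
```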
Examples
For the function f(x) = x4 we have f'(0) = 0 and f''(0) = 0. Even though f''(0) = 0, this point is not a point of inflection. The reason is that the sign of f'(x) changes from negative to positive.
For the function f(x) = sin(x) we have f'(0) ≠ 0 and f''(0) = 0. But this is not a stationary point, rather it is a point of inflection. This is because the concavity changes from concave downwards to concave upwards and the sign of f'(x) does not change; it stays positive.
For the function f(x) = x3 we have f'(0) = 0 and f''(0) = 0. This is both a stationary point and a point of inflection. This is because the concavity changes from concave downwards to concave upwards and the sign of f'(x) does not change; it stays positive.
For the function f(x) = 0, one has f'(0) = 0 and f''(0) = 0. The point 0 is a non-isolated stationary point which is not a turning point nor a horizontal point of inflection as the signs of f'(x) and f''(x) do not change.
The function f(x) = x5 sin(1/x) for x ≠ 0, and f(0) = 0, gives an example where f'(x) and f''(x) are both continuous, f'(0) = 0 and f''(0) = 0, and yet f(x) does not have a local maximum, a local minimum, nor a point of inflection at 0. So, 0 is a stationary point that is not isolated.
| Mathematics | Functions: General | null |
885835 | https://en.wikipedia.org/wiki/Conglomerate%20%28geology%29 | Conglomerate (geology) | Conglomerate () is a sedimentary rock made up of rounded gravel-sized pieces of rock surrounded by finer-grained sediments (such as sand, silt, or clay). The larger fragments within conglomerate are called clasts, while the finer sediment surrounding the clasts is called the matrix. The clasts and matrix are typically cemented by calcium carbonate, iron oxide, silica, or hardened clay.
Conglomerates form when rounded gravels deposited by water or glaciers become solidified and cemented by pressure over time. They can be found in sedimentary rock sequences of all ages but probably make up less than 1 percent by weight of all sedimentary rocks. They are closely related to sandstones in origin, and exhibit many of the same types of sedimentary structures, such as tabular and trough cross-bedding and graded bedding.
Fanglomerates are poorly sorted, matrix-rich conglomerates that originated as debris flows on alluvial fans and likely contain the largest accumulations of gravel in the geologic record.
Breccias are similar to conglomerates, but have clasts that have angular (rather than rounded) shapes.
Classification of conglomerates
Conglomerates may be named and classified by the:
Amount and type of matrix present
Composition of gravel-size clasts they contain
Size range of gravel-size clasts present
The classification method depends on the type and detail of research being conducted.
A sedimentary rock composed largely of gravel is first named according to the roundness of the gravel. If the gravel clasts that comprise it are largely well-rounded to subrounded, it is a conglomerate. If the gravel clasts that comprise it are largely angular, it is a breccia. Such breccias can be called sedimentary breccias to differentiate them from other types of breccia, e.g. volcanic and fault breccias. Sedimentary rocks that contain a mixture of rounded and angular gravel clasts are sometimes called breccio-conglomerate.
Texture
Conglomerates contain at least 30% of rounded to subangular clasts larger than in diameter, e.g., granules, pebbles, cobbles, and boulders. However, conglomerates are rarely composed entirely of gravel-size clasts. Typically, the space between the gravel-size clasts is filled by a mixture composed of varying amounts of silt, sand, and clay, known as matrix. If the individual gravel clasts in a conglomerate are separated from each other by an abundance of matrix such that they are not in contact with each other and float within the matrix, it is called a paraconglomerate. Paraconglomerates are also often unstratified and can contain more matrix than gravel clasts. If the gravel clasts of a conglomerate are in contact with each other, it is called an orthoconglomerate. Unlike paraconglomerates, orthoconglomerates are typically cross-bedded and often well-cemented and lithified by either calcite, hematite, quartz, or clay.
The differences between paraconglomerates and orthoconglomerates reflect differences in how they are deposited. Paraconglomerates are commonly either glacial tills or debris flow deposits. Orthoconglomerates are typically associated with aqueous currents.
Clast composition
Conglomerates are also classified according to the composition of their clasts. A conglomerate or any clastic sedimentary rock that consists of a single rock or mineral is known as either a monomict, monomictic, oligomict, or oligomictic conglomerate. If the conglomerate consists of two or more different types of rocks, minerals, or combination of both, it is known as either a polymict or polymictic conglomerate. If a polymictic conglomerate contains an assortment of the clasts of metastable and unstable rocks and minerals, it is called either a petromict or petromictic conglomerate.
In addition, conglomerates are classified by source, as indicated by the lithology of the gravel-size clasts. If these clasts consist of rocks and minerals that are significantly different in lithology from the enclosing matrix and, thus, older and derived from outside the basin of deposition, the conglomerate is known as an extraformational conglomerate. If these clasts consist of rocks and minerals that are identical to or consistent with the lithology of the enclosing matrix and, thus, penecontemporaneous and derived from within the basin of deposition, the conglomerate is known as an intraformational conglomerate.
Two recognized types of intraformational conglomerates are shale-pebble and flat-pebble conglomerates. A shale-pebble conglomerate is a conglomerate that is composed largely of clasts of rounded mud chips and pebbles held together by clay minerals and created by erosion within environments such as within a river channel or along a lake margin. Flat-pebble conglomerates (edgewise conglomerates) are conglomerates that consist of relatively flat clasts of lime mud created by either storms or tsunami eroding a shallow sea bottom or tidal currents eroding tidal flats along a shoreline.
Clast size
Finally, conglomerates are often differentiated and named according to the dominant clast size comprising them. In this classification, a conglomerate composed largely of granule-size clasts would be called a granule conglomerate; a conglomerate composed largely of pebble-size clasts would be called a pebble conglomerate; and a conglomerate composed largely of cobble-size clasts would be called a cobble conglomerate.
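As a rough illustration of how the naming criteria above combine, the following sketch encodes the roundness, texture, and clast-size rules described in this section as a small function. The field names and the simplified decision order are assumptions made for illustration only, not a formal classification scheme.

```python
# Illustrative sketch only: naming a gravel-rich sedimentary rock from the
# criteria described above (roundness, clast support, dominant clast size).
def name_rock(clast_shape, clast_supported, dominant_clast_size):
    """clast_shape: 'rounded' or 'angular' (dominant shape of the gravel clasts)
    clast_supported: True if clasts touch (orthoconglomerate),
                     False if they float in matrix (paraconglomerate)
    dominant_clast_size: 'granule', 'pebble', or 'cobble'
    """
    if clast_shape == 'angular':
        return 'sedimentary breccia'
    # Rounded to subrounded gravel -> conglomerate
    texture = 'orthoconglomerate' if clast_supported else 'paraconglomerate'
    return f'{dominant_clast_size} conglomerate ({texture})'

print(name_rock('rounded', True, 'pebble'))   # pebble conglomerate (orthoconglomerate)
print(name_rock('angular', True, 'cobble'))   # sedimentary breccia
```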
Sedimentary environments
Conglomerates are deposited in a variety of sedimentary environments.
Deepwater marine
In turbidites, the basal part of a bed is typically coarse-grained and sometimes conglomeratic. In this setting, conglomerates are normally very well sorted, well-rounded and often with a strong A-axis type imbrication of the clasts.
Shallow marine
Conglomerates are normally present at the base of sequences laid down during marine transgressions above an unconformity, and are known as basal conglomerates. They represent the position of the shoreline at a particular time and are diachronous.
Fluvial
Conglomerates deposited in fluvial environments are typically well rounded and poorly sorted. Clasts of this size are carried as bedload and only at times of high flow-rate. The maximum clast size decreases as the clasts are transported further due to attrition, so conglomerates are more characteristic of immature river systems. In the sediments deposited by mature rivers, conglomerates are generally confined to the basal part of a channel fill where they are known as pebble lags. Conglomerates deposited in a fluvial environment often have an AB-plane type imbrication.
Alluvial
Alluvial deposits form in areas of high relief and are typically coarse-grained. At mountain fronts individual alluvial fans merge to form braidplains and these two environments are associated with the thickest deposits of conglomerates. The bulk of conglomerates deposited in this setting are clast-supported with a strong AB-plane imbrication. Matrix-supported conglomerates, as a result of debris-flow deposition, are quite commonly associated with many alluvial fans. When such conglomerates accumulate within an alluvial fan, in rapidly eroding (e.g., desert) environments, the resulting rock unit is often called a fanglomerate.
Glacial
Glaciers carry a lot of coarse-grained material and many glacial deposits are conglomeratic. Tillites, the sediments deposited directly by a glacier, are typically poorly sorted, matrix-supported conglomerates. The matrix is generally fine-grained, consisting of finely milled rock fragments. Waterlaid deposits associated with glaciers are often conglomeratic, forming structures such as eskers.
Examples
An example of conglomerate can be seen at Montserrat, near Barcelona. Here, erosion has created vertical channels that give the characteristic jagged shapes the mountain is named for (Montserrat literally means "jagged mountain"). The rock is strong enough to use as a building material, as in the Santa Maria de Montserrat Abbey.
Another example, the Crestone Conglomerate, occurs in and near the town of Crestone, at the foot of the Sangre de Cristo Range in Colorado's San Luis Valley. The Crestone Conglomerate consists of poorly sorted fanglomerates that accumulated in prehistoric alluvial fans and related fluvial systems. Some of these rocks have hues of red and green.
Conglomerate cliffs are found on the east coast of Scotland from Arbroath northwards along the coastlines of the former counties of Angus and Kincardineshire. Dunnottar Castle sits on a rugged promontory of conglomerate jutting into the North Sea just south of the town of Stonehaven.
Copper Harbor Conglomerate is found both in the Keweenaw Peninsula and Isle Royale National Park in Lake Superior.
Conglomerate may also be seen in the domed hills of Kata Tjuta, in Australia's Northern Territory or in the Buda Hills in Hungary.
In the nineteenth century a thick layer of Pottsville conglomerate was recognized to underlie anthracite coal measures in Pennsylvania.
Examples on Mars
On Mars, slabs of conglomerate have been found at an outcrop named "Hottah", and have been interpreted by scientists as having formed in an ancient streambed. The gravels, which were discovered by NASA's Mars rover Curiosity, range from the size of sand particles to the size of golf balls. Analysis has shown that the pebbles were deposited by a stream that flowed at walking pace and was ankle- to hip-deep.
Metaconglomerate
Metamorphic alteration transforms conglomerate into metaconglomerate.
| Physical sciences | Petrology | null |
885929 | https://en.wikipedia.org/wiki/Chesapeake%20Bay%20Bridge%E2%80%93Tunnel | Chesapeake Bay Bridge–Tunnel | The Chesapeake Bay Bridge–Tunnel (CBBT, officially the Lucius J. Kellam Jr. Bridge–Tunnel) is a bridge–tunnel that crosses the mouth of the Chesapeake Bay between Delmarva and Hampton Roads in the U.S. state of Virginia. It opened in 1964, replacing ferries that had operated since the 1930s. A major project to dualize its bridges was completed in 1999, and in 2017 a similar project was started to dualize one of its tunnels.
With of bridges and two tunnels, the CBBT is one of only 14 bridge–tunnel systems in the world and one of three in Hampton Roads. It carries US 13, which saves motorists roughly and hours on trips between Hampton Roads and the Delaware Valley and points north compared with other routes through the Washington–Baltimore Metropolitan Area. , over 140 million vehicles have crossed the CBBT.
The CBBT was built and is operated by the Chesapeake Bay Bridge and Tunnel District, a political subdivision of the Commonwealth of Virginia governed by the Chesapeake Bay Bridge and Tunnel Commission in cooperation with the Virginia Department of Transportation. Its construction was financed by toll revenue bonds, while operating and maintenance expenses are recovered through tolls. In 2002, a Joint Legislative Audit and Review Commission (JLARC) study commissioned by the Virginia General Assembly concluded that "given the inability of the state to fund future capital requirements of the CBBT, the District and Commission should be retained to operate and maintain the Bridge–Tunnel as a toll facility in perpetuity".
The tunnel sections addressed concerns that a bridge failure across critical shipping lanes would block not only shipping but navy access.
The CBBT is often confused with the similarly named Chesapeake Bay Bridge, which crosses the Chesapeake Bay farther north in Maryland connecting Annapolis and Kent Island.
History
Geographic background
In December 1606, the Virginia Company of London sent an expedition to North America to establish a settlement in the Colony of Virginia. After sailing across the Atlantic Ocean from England, they reached the New World at the southern edge of the mouth of what is now known as the Chesapeake Bay. They named the two capes flanking the entrance to the long estuary, like gateposts, after the sons of their king, James I: the southern Cape Henry for the eldest son and presumed heir, Henry Frederick, Prince of Wales, and the northern Cape Charles for his younger brother, Charles, Duke of York (the future King Charles I). A few weeks later they established Jamestown, the first permanent English settlement in North America, on the southern (mainland) side of the bay, several miles upstream along the newly named James River, on a close-in island along the river's northern shore chosen for protection.
Across the bay, the area north of Cape Charles lay along what later became known as the Delmarva Peninsula. As it bordered the Atlantic Ocean to its east, the region became known as Virginia's (and neighboring Maryland's) Eastern Shore. As the colony grew, the bay remained a formidable transportation obstacle for exchanges with the Virginia mainland on the Western Shore. One of the eight original shires of Virginia, Accomac Shire, was established there in 1634, eventually becoming the two counties of modern times: Accomack County in the north and Northampton County to the south. In comparison to the mainland, commerce and growth were limited by the need to cross the bay. Consequently, little industrial base developed there; the oceanfront peninsula stayed predominantly rural, with small towns and villages oriented towards life on the waters, and most residents made their living by farming and working as watermen, both on the bay (locally known as the "bay side") and in the Atlantic Ocean ("sea side").
Ferry system
For the first 350 years, ships and ferry systems provided the primary transportation.
From the early 1930s to 1954, the Virginia Ferry Corporation (VFC), a privately owned public service company, managed a scheduled vehicular (car, bus, truck) and passenger ferry service between the Virginia Eastern Shore and Princess Anne County (now part of the City of Virginia Beach) on the mainland Western Shore in the South Hampton Roads area. This system, connecting portions of US 13, was known as the Little Creek-Cape Charles Ferry. In 1951, the northern terminus on Delmarva was relocated to a site now within Kiptopeke State Park.
Despite an expanded fleet of large and modern ships by the VFC in the 1940s and early 1950s which were eventually capable of as many as 90 one-way trips each day, the crossing suffered delays due to heavy traffic and inclement weather.
In 1954, the Virginia General Assembly created a political subdivision, the Chesapeake Bay Ferry District and its governing body, the Chesapeake Bay Ferry Commission. The commission was authorized to acquire the private ferry corporation through bond financing, to improve the existing VFC ferry service.
When the CBBT opened, much of the ferry equipment and many of the vessels used by the VFC's Little Creek-Cape Charles Ferry service were sold and moved north to start the Cape May–Lewes Ferry across the mouth of the Delaware Bay between Cape May, New Jersey, and Lewes, Delaware. That ferry still serves transit needs, but the number of pleasure-trip passengers increased over the following decades as the coastal beach resorts developed and grew crowded with vacationers, partly because of the swifter transportation provided by highway, bridge, and tunnel access in the three-state region.
Studying a fixed crossing
In 1956, the General Assembly authorized the Ferry Commission to conduct feasibility studies for the construction of a fixed crossing. The conclusion of the study indicated that a vehicular crossing was feasible.
Consideration was given to service between the Eastern Shore and both the peninsula and South Hampton Roads. Eventually, the shortest route, extending between the Eastern Shore and a point in Princess Anne County at Chesapeake Beach (east of Little Creek, west of Lynnhaven Inlet), was selected. An option to also provide a fixed crossing link to Hampton and the peninsula was not pursued.
The selected route crosses two Atlantic shipping channels: the Thimble Shoals Channel to Hampton Roads and the Chesapeake Channel to the northern Chesapeake Bay. High-level bridges were initially considered for traversing these channels. The United States Navy objected to bridging the Thimble Shoals Channel because a bridge collapse (possibly by sabotage) could cut Naval Station Norfolk off from the Atlantic Ocean. Maryland officials expressed similar concerns about the Chesapeake Channel and the Port of Baltimore.
To address these concerns, the engineers recommended a series of bridges and tunnels known as a bridge–tunnel, similar in design to the Hampton Roads Bridge–Tunnel, which had been completed in 1957, but on a considerably longer and larger facility. The tunnel portions, anchored by four artificial islands of approximately each, would be extended under the two main shipping channels. The CBBT was designed by the engineering firm Sverdrup & Parcel of St. Louis, Missouri.
Original construction
In mid-1960, the Chesapeake Bay Ferry Commission sold $200 million in toll revenue bonds (equivalent to $ billion in dollars) to private investors, and the proceeds were used to finance the construction of the bridge–tunnel. Funds collected by future tolls were pledged to pay the principal and interest on the bonds. No local, state, or federal tax funds were used in the construction of the project.
Construction contracts were awarded to a consortium of Tidewater Construction Corporation and Merritt-Chapman & Scott Corporation. The steel superstructure for the high-level bridges near the north end of the crossing were fabricated by the American Bridge Division of United States Steel Corporation. Construction of the bridge–tunnel began in October 1960 after a six-month process of assembling necessary equipment from worldwide sources.
The tunnels were constructed using the technique refined by Ole Singstad with the Baltimore Harbor Tunnel, whereby a large ditch was first dug for each tunnel, into which was lowered pre-fabricated tunnel sections cable-suspended from overhead barges. Interior chambers were filled with water to lower the sections, the sections then aligned, bolted together by divers, the water pumped out, and the tunnels finally covered with earth.
The construction was accomplished under the severe conditions imposed by nor'easters, hurricanes, and the unpredictable Atlantic Ocean. During the Ash Wednesday Storm of 1962, much of the partially completed work and a major piece of custom-built equipment, a pile driver barge called "The Big D", were destroyed. Seven workers were killed at various times during the construction. In April 1964, 42 months after construction began, the Chesapeake Bay Bridge–Tunnel opened to traffic and the ferry service discontinued.
The Ferry Commission and transportation district it oversees, created in 1954, were later renamed for the revised mission of building and operating the Chesapeake Bay Bridge–Tunnel. The CBBT district is a public agency, and it is a legal subdivision of the Commonwealth of Virginia. The bridge–tunnel is supported financially by the tolls collected from the motorists who use the facility.
Eastern Shore native, businessman, and civic leader Lucius J. Kellam Jr. (1911–1995) was the original commission's first chairman. In a commentary at the time of his death in 1995, the Norfolk-based Virginian-Pilot newspaper recalled that Kellam had been involved in bringing the multimillion-dollar bridge–tunnel project from dream to reality.
Before it was built, Kellam handled a political fight over the location, and addressed concerns of the U.S. Navy about prospective hazards to navigation to and from the Norfolk Navy Base at Sewell's Point.
Kellam was also directly involved in the negotiations to finance the ambitious crossing with bonds. According to the newspaper article, "there were not-unfounded fears that (1) storm-driven seas and drifting or off-course vessels could damage, if not destroy, the span and (2) traffic might not be sufficient to service the entire debt in an orderly way. Sure enough, bridge portions of the crossing have occasionally been damaged by vessels, and there was a long period when holders of the riskiest bonds received no interest on their investment."
An icon of eastern Virginia politics, Kellam remained chairman and champion of the CBBT throughout the hard times, and the bondholders were eventually paid as toll revenues caught up with expenses. He continued to serve until he was over 80 years old, finally retiring in 1993. He had held the post for 39 years.
The facility was renamed in Kellam's honor in 1987, over 20 years after it opened.
Bridge dualization (1999)
At a cost of $197 million, new parallel two-lane trestles were built both to alleviate traffic and for safety reasons. Immediately after completion of the parallel trestles, traffic was diverted to them and the original trestles and roadway underwent a $20 million retrofit, repairing the wear and tear of 35 years of service and upgrading certain features, such as repaving the road surface. The older portion of the facility was then reopened on April 19, 1999.
The 1995–1999 project increased the capacity of the above-water portion of the facility to four lanes, added wider shoulders for the new southbound portion, facilitated needed repairs, and provided protection against a total closure should a trestle be struck by a ship or otherwise damaged (which had occurred twice in the past); partially for this reason, the parallel trestles are not located immediately adjacent to each other, reducing the chance that both would be damaged during a single incident.
Thimble Shoal Tunnel dualization (projected 2027)
In 2013, the CBBT Commission approved a project to construct a second tunnel under the Thimble Shoal channel for an estimated cost of $756 million. The project received three bids, all of which would use a tunnel boring machine. The winning company was German-based Herrenknecht, whose machine was long. The machine, nicknamed Chessie in a naming contest, was capable of moving forward through soil at per minute, or about per day. At that rate, it was estimated that the tunnel would be dug within about one year. Construction work began in 2017 to prepare the location of the tunnels. The affected pier, shop, and restaurant were closed in September 2017. The machine was built in 2018, after some delays, and was shipped to Virginia. The machine will also place the circular concrete lining segments, which will be delivered into the tunnel via mine cars one at a time. Construction was scheduled to finish in 2023, but by August 2022 completion of the second tunnel at Thimble Shoal had been delayed to 2027.
Tunnel length: approximately
Tunnel diameter:
Inner diameter:
Outer diameter:
Construction cost: $755,987,318
Construction method: Bored tunnel
Construction start (estimate): October 1, 2017
Construction completion (estimate): 2023
Maximum tunnel depth
Crown—at its deepest location (mid-channel): below the water surface
Invert—from the top of the roadway at its deepest location: below the surface
Soil removal: the approximate amount of soil to be removed by the tunnel boring machine (TBM) is .
Concrete sections: The tunnel will consist of approximately 9,000 individual concrete pieces. Approximately of concrete will be needed to make the tunnel sections.
Chesapeake Channel Tunnel dualization (projected 2035–2040)
At the northern end, a parallel Chesapeake Channel Tunnel will be added to finish the entire length to become a four-lane highway from shore to shore. This project is marked to begin in 2035, which would possibly be open for traffic in 2040, assuming there are no setbacks or delays.
In 2021, the United States Department of Transportation loaned $338.6 million to the Chesapeake Bay Bridge Tunnel District through the Transportation Infrastructure Finance and Innovation Act, with funds provided by the Infrastructure Investment and Jobs Act. The loan would help pay for the construction of both parallel tunnels.
Operations, maintenance, and regulations
Toll collection facilities are located at both ends of the facility, and tolls are paid in each direction. As of 2024, the toll for cars (without trailers) traveling along the CBBT is $16 at off-peak times or $21 at peak times (Friday through Sunday from May 15 to September 15). Should a car make a return trip within 24 hours of the first, the second crossing costs $6 off-peak or $1 in peak season, but only with an E-ZPass; cash or card payers must pay full fare. Motorcycles pay the same toll as cars without trailers. All other vehicles are charged based on size and purpose and are not subject to the return-trip discount. All tolls must be paid in cash, by debit or credit card, by scrip tickets issued by the CBBT, or via E-ZPass electronic toll collection. The bridge–tunnel began accepting Smart Tag/E-ZPass payments on November 1, 2007.
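The passenger-car toll rules just described can be summarized as a small function. This is only an illustrative sketch using the figures stated above; vehicle classes other than cars, and any fees not mentioned in the text, are outside its scope.

```python
# Illustrative sketch of the passenger-car toll schedule described above.
# Figures are those stated in the text; peak times are Friday through Sunday,
# May 15 to September 15.
def car_toll(is_peak, return_within_24h=False, has_ezpass=False):
    if return_within_24h and has_ezpass:
        return 1 if is_peak else 6    # discounted return trip (E-ZPass only)
    return 21 if is_peak else 16      # full one-way fare

print(car_toll(is_peak=False))                             # 16
print(car_toll(is_peak=True, return_within_24h=True,
               has_ezpass=True))                           # 1
```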
All toll lanes including E-ZPass-only lanes are gated for safety concerns and to turn around inadmissible vehicles. For example:
Strong winds have blown over certain vehicles; therefore, some vehicles are banned when the wind speed exceeds . Level 6 wind restrictions, which apply during hurricane-force winds (at least , i.e., approaching the wind speed of a Category 1 hurricane, which is at least ), and other inclement weather conditions ban all traffic.
Hazardous materials and compressed gas require various restrictions and inspections to safeguard the tunnels.
Both tunnels have a height limit of . An over-height truck in April 2007 severely damaged the tunnels. Repairs took three weeks.
Should police activities, accidents, or closures stop traffic from moving freely, gates prevent drivers from entering and then being forced to either back up within the narrow space or to wait too long in the middle of the bridge–tunnel.
The bridge–tunnel management prohibits bicycles but offers a shuttle van for $15. Cyclists must call ahead.
It is mandatory that the bridge be checked and serviced every five years. Since servicing the bridge takes about five years, the process is a continuous cycle.
The CBBT is the only automobile transportation facility in Virginia with its own police department. By original charter from the state, it has authority to enforce the laws of Virginia. Emergency call boxes are spaced at half-mile (0.8 km) intervals.
Tourism
The CBBT promotes the bridge–tunnel as not only a transportation facility to tourist destinations to the north and south, but as a destination itself. For travelers headed elsewhere, the bridge–tunnel can save more than of driving for those headed between Ocean City, MD; Rehoboth Beach, DE, Fenwick Island, DE, and Wilmington, DE (and areas north) and the Virginia Beach area or the Outer Banks of North Carolina, according to the CBBT district. Unlike the Interstate Highways that travelers would avoid by taking the bridge–tunnel, the roads in the shortcut have traffic lights.
On the Delmarva peninsula to the north of the bridge, travelers may visit nearby Kiptopeke State Park, Eastern Shore National Wildlife Refuge, Fisherman Island National Wildlife Refuge (closed to the public), Assateague Island National Seashore, NASA's Wallops Flight Facility, campgrounds and other vacation destinations. To the south are tourist destinations around Virginia Beach, including First Landing State Park, Norfolk Botanical Garden, Virginia Beach Maritime Historical Museum, Atlantic Wildfowl Heritage Museum, and the Virginia Aquarium and Maritime Science Center.
A scenic overlook is located at the north end of the bridge and was formerly located at South Thimble Island, near the south end. At South Thimble Island, passing ships may include U.S. Navy warships, nuclear submarines, and aircraft carriers, as well as large cargo vessels and sailing ships. A restaurant and gift shop on the island opened in 1964, along with the Sea Gull Pier. Bluefish, trout, croaker, flounder, and other species have been caught from the pier. Since birds use the habitat created by the bridges and islands of the CBBT, birders have travelled to the bridge–tunnel to see them at South Thimble Island and the scenic overlook at the north end. As part of the Thimble Shoal Channel Tunnel twinning, the building housing the restaurant and gift shop closed and access to the pier was prohibited starting at the end of September 2017. The building will be demolished and not replaced, and the pier will reopen to the public at the end of the project in 2027.
Dimensions
Among the key features of the Chesapeake Bay Bridge–Tunnel are two tunnels beneath the Thimble Shoals and Chesapeake navigation channels and two pairs of side-by-side high-level bridges over two other navigation channels: North Channel Bridge ( clearance) and Fisherman Inlet Bridge ( clearance). The remaining portion comprises of low-level trestle, of causeway, and four artificial islands.
The CBBT is long from shore to shore, crossing what is essentially an ocean strait. Including land-approach highways, the overall facility is long ( from toll plaza to toll plaza) and despite its length, there is a height difference of only from the south to north end of the bridge–tunnel.
Artificial islands, each approximately in size, are located at each end of the two tunnels. Between North Channel and Fisherman Inlet, the facility crosses at grade over Fisherman Island, a barrier island that is part of the Eastern Shore of Virginia National Wildlife Refuge administered by the U.S. Fish and Wildlife Service.
The columns that support the CBBT's trestles—called piles—would stretch for about if placed end-to-end, roughly the distance between New York City and Philadelphia.
Incidents
The CBBT has been closed three times for multiple days after being struck by watercraft:
In December 1967, coal barge Mohawk broke anchor and struck the bridge, closing it for two weeks for repairs.
On January 21, 1970, the USS Yancey (AKA-93), a United States Navy attack cargo ship carrying 250 people, was at anchor near the bridge–tunnel. During a gale with winds gusting in excess of , the Yancey dragged its anchors and hit the bridge stern first, knocking out a segment of trestle. There were no vehicles on the bridge at the time of the impact, and no one was injured. During the 42 days it took to replace the damaged span, the Navy offered a free shuttle service for commuters using helicopters and LCUs.
In 1972, the bridge was again impacted by a barge that had broken loose, closing it for two weeks while the span was repaired.
Other, less significant strikes have caused shorter closures while the affected structures were inspected; most recently, a four-hour closure followed a barge strike in June 2011.
, there have been 16 incidents of vehicles running off the bridge and into the water. In 2017, a truck plowed through the barriers into the sea below; the driver was rescued but died en route to the hospital. In December 2020, a dairy truck crashed through the guardrail near mile 14. Witnesses saw the driver drifting in the water—estimated to be about —but were unable to rescue him. Despite an extensive search, he remained missing until April 2021, when his body washed up over south at Cape Hatteras National Seashore between Salvo and Avon. A second truck following the dairy truck encountered strong wind gusts just prior to the accident that blew it into the other lane.
| Technology | Multi-modal crossings | null |
886283 | https://en.wikipedia.org/wiki/Britannia%20Bridge | Britannia Bridge | Britannia Bridge () is a bridge in Wales that crosses the Menai Strait between the Isle of Anglesey and the city of Bangor. It was originally designed and built by the noted railway engineer Robert Stephenson as a tubular bridge of wrought iron rectangular box-section spans for carrying rail traffic. Its importance was to form a critical link of the Chester and Holyhead Railway's route, enabling trains to travel directly between London and the port of Holyhead, thus facilitating a sea link to Dublin, Ireland.
Decades before the building of the Britannia Bridge, the Menai Suspension Bridge had been completed, but this structure carried a road rather than track; there was no rail connection to Anglesey before its construction. After many years of deliberation and proposals, on 30 June 1845, a Parliamentary Bill covering the construction of the Britannia Bridge received royal assent. At the Admiralty's insistence, the bridge elements were required to be relatively high in order to permit the passage of a fully rigged man-of-war. In order to meet the diverse requirements, Stephenson, the project's chief engineer, performed in-depth studies on the concept of tubular bridges. For the detailed design of the structure's girders, Stephenson gained the assistance of distinguished engineer William Fairbairn. On 10 April 1846, the foundation stone for the Britannia Bridge was laid. The construction method used for the riveted wrought iron tubes was derived from contemporary shipbuilding practices; the same technique as used for the Britannia Bridge was also used on the smaller Conwy Railway Bridge. On 5 March 1850, Stephenson himself fitted the last rivet of the structure, marking the bridge's official completion.
On 3 March 1966, the Britannia Bridge received Grade II listed status.
A fire in May 1970 caused extensive damage to the Britannia Bridge. Subsequent investigation determined that the damage to the tubes was so extensive that they were not realistically repairable. The bridge was rebuilt in a quite different configuration, reusing the piers while employing new arches to support not one but two decks, as the new Britannia Bridge was to function as a combined road-and-rail bridge. The bridge was rebuilt in phases, initially reopening in 1972 as a single-tier steel truss arch bridge, carrying only rail traffic. Over the next eight years more of the structure was replaced, allowing for more trains to run and a second tier to be completed. The second tier was opened to accommodate road traffic in 1980. The bridge was subject to a £4 million four-month in-depth maintenance programme during 2011. Since the 1990s, there has been talk of increasing road capacity over the Menai Strait, either by extending the road deck of the existing bridge or via the construction of a third bridge.
Design
The opening of the Menai Bridge in 1826, to the east of where Britannia Bridge was later built, provided the first fixed road link between Anglesey and the mainland. The increasing popularity of rail travel shortly necessitated a second bridge to provide a direct rail link between London and the port of Holyhead, the Chester and Holyhead Railway.
Other railway schemes were proposed, including one in 1838 to cross Thomas Telford's existing Menai Bridge. Railway pioneer George Stephenson was invited to comment on this proposal but stated his concern about re-using a single carriageway of the suspension bridge, as bridges of this type were unsuited to locomotive use. By 1840, a Treasury committee had decided broadly in favour of Stephenson's proposals; however, final consent to the route, including Britannia Bridge, was not granted until 30 June 1845, the date on which the corresponding Parliamentary Bill received royal assent. Around the same time, Stephenson's son, Robert Stephenson, was appointed as chief engineer for the project.
At the Admiralty's insistence, any bridge would have to permit passage of the strait by a fully rigged man-of-war. Stephenson therefore intended to cross the strait at a high level, over , by a bridge with two main spans of , rectangular iron tubes, each weighing , supported by masonry piers, the centre one of which was to be built on the Britannia Rock. Two additional spans of length would complete the bridge, making a continuous girder. The trains were to run inside the tubes (inside the box girders). Up until then, the longest wrought iron span had been , barely one fifteenth of the bridge's spans of . As originally envisaged by Stephenson, the tubular construction would give a structure sufficiently stiff to support the heavy loading associated with trains, but the tubes would not be fully self-supporting, some of their weight having to be taken by suspension chains.
For the detailed design of the girders, Stephenson secured the assistance of the distinguished engineer William Fairbairn, an old friend of his father and described by Stephenson as "well known for his thorough practical knowledge in such matters". Fairbairn began a series of practical experiments on various tube shapes and enlisted the help of Eaton Hodgkinson, "distinguished as the first scientific authority on the strength of iron beams". It became apparent from Fairbairn's experiments that, without special precautions, the failure mode for the tube under load would be buckling of the top plate in compression, the theoretical analysis of which gave Hodgkinson some difficulty. When Stephenson reported to the directors of the railway in February 1846, he attached reports by both Hodgkinson and Fairbairn. From his analysis of the resistance to buckling of tubes with single top plates, Hodgkinson believed that it would require an impracticably thick (and therefore heavy) top plate to make the tubes stiff enough to support their own weight, and advised auxiliary suspension from link chains.
However, Fairbairn's experiments had moved on from those covered by Hodgkinson's theory to include designs in which the top plate was stiffened by 'corrugation' (the incorporation of cylindrical tubes). The results of these later experiments he found very encouraging; whilst it was still to be determined what the optimum form of the tubular girder should be, "I would venture to state that a Tubular Bridge can be constructed of such powers and dimensions as will meet, with perfect security, the requirements of railway traffic across the Straits", although it might require more materials than originally envisaged and the utmost care would be needed in its construction. He believed it would be 'highly improper' to rely upon chains as the principal support of the bridge: "Under every circumstance, I am of opinion that the tubes should be made sufficiently strong to sustain not only their own weight, but in addition to that load 2000 tons equally distributed over the surface of the platform, a load ten times greater than they will ever be called upon to support."
"In fact, it should be a huge sheet-iron hollow girder, of sufficient strength and stiffness to sustain those weights; and, provided that the parts are well-proportioned and the plates properly riveted, you may strip off the chains and have it as a useful monument of the enterprise and energy of the age in which it was constructed."
Stephenson's report drew attention to the difference of opinion between his experts, but reassured the directors that the design of the masonry piers allowed for the tubes to be given suspension support, and no view need yet be taken as to the need for it, which would be resolved by further experiments. A span model was constructed and tested at Fairbairn's Millwall shipyard, and used as a basis for the final design. Stephenson, who had not previously attended any of Fairbairn's experiments, was present at one involving this 'model tube', and consequently was persuaded that auxiliary chains were unnecessary. No chains were fitted. As the only purpose of the piers (above the level of the present road deck) was to support the chains, these piers have never had any practical use. Although Stephenson had pressed for the tubes to be elliptical in section, Fairbairn's preferred rectangular section was adopted. Fairbairn was responsible both for the cellular construction of the top part of the tubes, and for developing the stiffening of the side panels. Each main span weighed roughly 1,830 tonnes.
Construction and use
On 10 April 1846, the foundation stone for the Britannia Bridge was laid, marking the official commencement of construction work at the site. The resident engineer for the structure's construction was civil engineer Edwin Clark, who had previously aided Stephenson in performing the complex structural stress calculations involved in its design. The first major elements of the structure to be built were the side tubes; this work was performed in situ, using wooden platforms for support. The construction method used for the iron tubes was derived from contemporary shipbuilding practices, the tubes being composed of riveted wrought iron plates thick, complete with sheeted sides and cellular roofs and bases. The same technique as used for the Britannia Bridge was also used on the smaller Conwy Railway Bridge, which was built around the same time. On 10 August 1847, the first rivet was driven.
Working in parallel with the onsite construction, the two central tube sections, which weighed apiece, were built separately on the nearby Caernarfon shoreline. Once fully assembled, each of the central tubes was floated, one at a time, into the causeway directly below the structure. The sections were then gradually raised into place using powerful hydraulic cylinders; they were raised only a few inches at a time, after which supports were built underneath to hold them in place. This aspect of the bridge's construction was novel at the time, and the innovative process reportedly cost Stephenson several sleepless nights at one stage of the project. The work did not go smoothly; at one point, one of the tubes allegedly came close to being swept out to sea before being recaptured and finally pushed back into place. The tubes were manoeuvred into place between June 1849 and February 1850.
Once in place, the separate lengths of tube were joined to form parallel prestressed continuous structures, each one possessing a length of and weighing . The pre-stressing process increased the structure's load-bearing capacity and reduced deflection. The tubes had a width of and differed between and in overall depth, while also having a gap between them; they were supported on a series of cast iron beams embedded in the stonework of the towers. To better protect the iron from the weather, an arched timber roof was constructed to cover both tubes; it was roughly wide, continuous over their whole length, and covered with tarred hessian. A wide central walkway was present above the roof to provide maintenance access.
On 5 March 1850, Stephenson himself fitted the last rivet of the structure, marking the bridge's official completion. Altogether, the bridge had taken over three years to complete. On 18 March 1850, a single tube was opened to rail traffic. By 21 October of that year, both tubes had been opened to traffic.
For its time, the Britannia Bridge was a structure of "magnitude and singular novelty", far surpassing in length both contemporary cast beam or plate girder iron bridges. The noted engineer Isambard Kingdom Brunel, a professional rival and personal friend of Stephenson's, was claimed to have remarked to him: "If your bridge succeeds, then mine have all been magnificent failures". On 20 June 1849, Brunel and Stephenson had both looked on as the first of the bridge's tubes was floated out on its pontoons. The construction techniques employed on the Britannia Bridge had obviously influenced Brunel as he later made use of the same method of floating bridge sections during the construction of the Royal Albert Bridge across the River Tamar at Saltash.
There was originally a railway station located on the east side of the bridge at the entrance to the tunnel, run by the Chester and Holyhead Railway company, which served local rail traffic in both directions. However, this station was closed after only years in operation owing to low passenger volumes. In the present day, little remains of this station, other than the remnants of the lower-level station building. A new station named Menai Bridge was opened shortly afterwards.
Lions
The bridge was decorated by four large lions sculpted in limestone by John Thomas, two at either end. Each was constructed from 11 pieces of limestone. They are long, tall, and weigh 30 tons.
These were immortalised in the following Welsh rhyme by the bard John Evans (1826–1888), who was born in nearby Menai Bridge:
Pedwar llew tew
Heb ddim blew
Dau 'ochr yma
A dau 'ochr drew
Four fat lions
Without any hair
Two on this side
And two over there
The lions cannot be seen from the A55, which crosses the modern bridge on the same site, although they can be seen from trains on the North Wales Coast Line below. The idea of raising them to road level has been suggested by local campaigners from time to time.
Fire and reconstruction
During the evening of 23 May 1970, the bridge was heavily damaged when boys playing inside the structure dropped a burning torch, setting alight the tar-coated wooden roof of the tubes. Despite the best efforts of the Caernarfonshire and Anglesey fire brigades, the bridge's height, construction, and the lack of an adequate water supply meant they were unable to control the fire, which spread all the way across from the mainland to the Anglesey side. After the fire had burned itself out, the bridge was still standing. However, the structural integrity of the iron tubes had been critically compromised by the intense heat; they had visibly split open at the three towers and had begun to sag. It was recognised that there was still danger of the structure collapsing; the bridge would be unusable until major restorative work was done.
In light of events, the chief civil engineer of British Railways' London Midland region, W.F. Beatty, sought structural advice from consulting engineering company Husband & Co. Following an in-depth investigation of the site performed by the company, it was determined that the cast iron beams inside the towers had suffered substantial cracking and tilting, meaning that the tubes required immediate support at all three towers. The Royal Engineers were quickly brought in to save the bridge, rapidly deploying vertical Bailey bridge units to fill the original jacking slots in the masonry towers. By the end of July 1970, a total of eight Bailey bridge steel towers had been erected, each being capable of bearing a vertical load of around 200 tonnes.
Further analysis showed that the wrought iron tubes had been too badly damaged to be retained. In light of this discovery, it was decided to dismantle the tubes in favour of replacing them with a new deck at the same level as the original tracks. With the exception of the original stone substructure, the structure was completely rebuilt by Cleveland Bridge & Engineering Company. The superstructure of the new bridge was to include two decks: a lower rail deck supported by steel arches and an upper deck constructed out of reinforced concrete, to carry a new road crossing over the strait. Concrete supports were built under the approach spans and steel archways constructed under the long spans on either side of the central Britannia Tower. The two long spans are supported by arches, which had not been an option for the original structure as a result of the clearance needed for tall-masted vessels; modern navigational requirements require much less headroom.
The bridge was rebuilt in stages. The first stage was to erect the new steel arches under the two original wrought-iron tubes. The arches were completed, and single-line working was restored to the railway on 30 January 1972 by reusing one of the tubes. The next stage was to dismantle and remove the other tube and replace it with a concrete deck for the other rail track. Then the single-line working was transferred to the new track (on the west side); this allowed the other tube to be removed and replaced with a concrete deck (which is used only for service access) by 1974. Finally the upper road deck was installed and by July 1980, over 10 years after the fire, the new road crossing was completed, and formally opened by the Prince of Wales, carrying a single-carriageway section of the A5 road (now the A55).
During 2011, national railway infrastructure owner Network Rail, the Welsh Assembly Government and the English Highways Agency undertook a £4 million joint programme to strengthen the 160-year-old structure and improve its reliability. The work involved the replacement of eroded steelwork, repairs to the drainage system, restoration of the parapets and stonework, and the painting of the steel approach portals of the bridge. The programme included a detailed inspection of the internal chambers of the three towers and the construction of a special walkway to enable easier and safer access to the structure for future inspections of the masonry piers; protective measures adopted for the work included the use of pollution-minimising paint and the decontamination of all equipment before it was brought onsite.
Proposed bridge improvement
In November 2007, a public consultation exercise into the ‘A55 Britannia Bridge Improvement’ commenced. The problems identified include:
It is the only non-dual-carriageway section along the A55
Congestion during morning and afternoon peak periods
Congestion from seasonal and ferry traffic from Holyhead
Queuing at the junctions at either end
Traffic is expected to significantly increase over the next ten years or so
In the document, four options are presented, each with their own pros and cons:
Do nothing. Congestion will increase as traffic levels increase.
Widen existing bridge. To do this, the towers would have to be removed to make room for the extra lanes. This is an issue as the bridge is a Grade II listed structure and is owned by Network Rail. The extra lanes would have to be of reduced width as the existing structure is not capable of supporting four full-width lanes.
New multi-span concrete box bridge alongside. Building a separate bridge would allow the existing bridge to be used as normal during construction. The bridge would require support pillar(s) in the Menai Strait, which is an environmental issue as the strait is a Special Area of Conservation. Visual impact would be low as the pillars and road surface would be aligned with the current bridge.
New single span cable-stayed bridge. This would eliminate the need for pillars in the Strait, but the bridge would have a large impact on the landscape due to the height of the cable support pillars. This is also the most costly option.
Respondents were overwhelmingly in favour of seeing some improvements, with 70 per cent favouring the solution of building another bridge.
Similar bridges
Very few other tubular iron bridges were ever built, since more economical bridge designs were soon developed. The most notable of the other tubular bridges were Stephenson's Conwy railway bridge between Llandudno Junction and Conwy and the first Sainte-Anne-de-Bellevue (Québec) Grand Trunk Railway bridge, which was the prototype of the Victoria Bridge across the Saint Lawrence River at Montreal.
The Conwy railway bridge remains in use, and is the only remaining tubular bridge; however, intermediate piers have been added to strengthen it. The bridge can be seen at close quarters from Thomas Telford's adjacent 1826 Conwy Suspension Bridge.
The Victoria Bridge was the first bridge to cross the St. Lawrence River, and was the longest bridge in the world when it was completed in 1859. It was rebuilt as a truss bridge in 1898.
| Technology | Bridges | null |
3726299 | https://en.wikipedia.org/wiki/Blunt%20trauma | Blunt trauma | A blunt trauma, also known as a blunt force trauma or non-penetrating trauma, is a physical trauma due to a forceful impact without penetration of the body's surface. Blunt trauma stands in contrast with penetrating trauma, which occurs when an object pierces the skin, enters body tissue, and creates an open wound. Blunt trauma occurs due to direct physical trauma or impactful force to a body part. Such incidents often occur with road traffic collisions, assaults, and sports-related injuries, and are notably common among the elderly who experience falls.
Blunt trauma can lead to a wide range of injuries including contusions, concussions, abrasions, lacerations, internal or external hemorrhages, and bone fractures. The severity of these injuries depends on factors such as the force of the impact, the area of the body affected, and the underlying comorbidities of the affected individual. In some cases, blunt force trauma can be life-threatening and may require immediate medical attention. Blunt trauma to the head and/or severe blood loss are the most likely causes of death due to blunt force traumatic injury.
Classification
Blunt abdominal trauma
Blunt abdominal trauma (BAT) represents 75% of all blunt trauma and is the most common example of this injury. Seventy-five percent of BAT occurs in motor vehicle crashes, in which rapid deceleration may propel the driver into the steering wheel, dashboard, or seatbelt, causing contusions in less serious cases, or rupture of internal organs from briefly increased intraluminal pressure in the more serious, depending on the force applied. Initially, there may be few indications that serious internal abdominal injury has occurred, making assessment more challenging and requiring a high degree of clinical suspicion.
There are two basic physical mechanisms at play with the potential of injury to intra-abdominal organs: compression and deceleration. The former occurs from a direct blow, such as a punch, or compression against a non-yielding object such as a seat belt or steering column. This force may deform a hollow organ, increasing its intraluminal or internal pressure and possibly leading to rupture.
Deceleration, on the other hand, causes stretching and shearing at the points where mobile contents in the abdomen, like the bowel, are anchored. This can cause tearing of the mesentery of the bowel and injury to the blood vessels that travel within the mesentery. Classic examples of these mechanisms are a hepatic tear along the ligamentum teres and injuries to the renal arteries.
When blunt abdominal trauma is complicated by 'internal injury,' the liver and spleen (see blunt splenic trauma) are most frequently involved, followed by the small intestine.
In rare cases, this injury has been attributed to medical techniques such as the Heimlich maneuver, attempts at CPR and manual thrusts to clear an airway. Although these are rare examples, it has been suggested that they are caused by applying excessive pressure when performing these life-saving techniques. Finally, the occurrence of splenic rupture with mild blunt abdominal trauma in those recovering from infectious mononucleosis or 'mono' (also known as 'glandular fever' in non-U.S. countries, specifically the UK) is well reported.
Blunt abdominal trauma in sports
The supervised environment in which most sports injuries occur allows for mild deviations from the traditional trauma treatment algorithms, such as ATLS, due to the greater precision in identifying the mechanism of injury. The priority in assessing blunt trauma in sports injuries is separating contusions and musculo-tendinous injuries from injuries to solid organs and the gut. It is also crucial to recognize the potential for developing blood loss and to react accordingly. Blunt injuries to the kidney from helmets, shoulder pads, and knees are described in American football, association football, martial arts, and all-terrain vehicle crashes.
Blunt thoracic trauma
The term blunt thoracic trauma, or, more informally, blunt chest injury, encompasses a variety of injuries to the chest. Broadly, this also includes damage caused by direct blunt force (such as a fist or a bat in an assault), acceleration or deceleration (such as that from a rear-end automotive crash), shear force (a combination of acceleration and deceleration), compression (such as a heavy object falling on a person), and blasts (such as an explosion of some sort). Common signs and symptoms range from something as simple as bruising to complications such as hypoxia, ventilation-perfusion mismatch, hypovolemia, and reduced cardiac output, depending on how the thoracic organs have been affected. Blunt thoracic trauma is not always visible from the outside, and such internal injuries may not show signs or symptoms at the time the trauma initially occurs or even until hours afterwards. A high degree of clinical suspicion may sometimes be required to identify such injuries; a CT scan may prove useful in such instances. Those experiencing more obvious complications from a blunt chest injury will likely undergo a focused assessment with sonography for trauma (FAST), which can reliably detect a significant amount of blood around the heart or in the lung by using a special machine that visualizes sound waves sent through the body. Only 10–15% of thoracic traumas require surgery, but they can have serious impacts on the heart, lungs, and great vessels.
The most immediate life-threatening injuries that may occur include tension pneumothorax, open pneumothorax, hemothorax, flail chest, cardiac tamponade, and airway obstruction/rupture.
The injuries may necessitate a procedure, most commonly the insertion of an intercostal drain, or chest tube. This tube is typically installed because it helps restore a certain balance in pressures (usually due to misplaced air or surrounding blood) that are impeding the lungs' ability to inflate and thus exchange vital gases that allow the body to function. A less common procedure that may be employed is a pericardiocentesis, which, by removing blood surrounding the heart, permits the heart to regain some ability to appropriately pump blood. In certain dire circumstances an emergent thoracotomy may be employed.
Blunt cranial trauma
The primary clinical concern with blunt trauma to the head is damage to the brain, although other structures, including the skull, face, orbits, and neck are also at risk. Following assessment of the patient's airway, circulation, and breathing, a cervical collar may be placed if there is suspicion of trauma to the neck. Evaluation of blunt trauma to the head continues with the secondary survey for evidence of cranial trauma, including bruises, contusions, lacerations, and abrasions. In addition to noting external injury, a comprehensive neurologic exam is typically performed to assess for damage to the brain. Depending on the mechanism of injury and examination, a CT scan of the skull and brain may be ordered. This is typically done to assess for blood within the skull or fracture of the skull bones.
Traumatic brain injury (TBI)
Traumatic brain injury (TBI) is a significant cause of morbidity and mortality and is most commonly caused by falls, motor vehicle crashes, sports- and work-related injuries, and assaults. It is the most common cause of death in patients under the age of 25. TBI is graded from mild to severe, with greater severity correlating with increased morbidity and mortality.
Most patients with more severe traumatic brain injury have a combination of intracranial injuries, which can include diffuse axonal injury, cerebral contusions, and intracranial bleeding, including subarachnoid hemorrhage, subdural hematoma, epidural hematoma, and intraparenchymal hemorrhage. The recovery of brain function following a traumatic injury is highly variable and depends upon the specific intracranial injuries that occur. However, there is a significant correlation between the severity of the initial insult, as well as the level of neurologic function during the initial assessment, and the level of lasting neurologic deficits. Initial treatment may be targeted at reducing the intracranial pressure if there is concern for swelling or bleeding within the skull. This may require surgery, such as a hemicraniectomy, in which part of the skull is removed.
Blunt trauma to extremities
Injury to extremities (like arms, legs, hands, feet) is extremely common. Falls are the most common etiology, making up as much as 30% of upper and 60% of lower extremity injuries. The most common mechanism for solely upper extremity injuries is machine operation or tool use. Work-related accidents and vehicle crashes are also common causes. The injured extremity is examined for four major functional components, which include soft tissues, nerves, vessels, and bones. Vessels are examined for expanding hematoma, bruit, distal pulses, and signs/symptoms of ischemia, essentially asking, "Does blood seem to be getting through the injured area in a way that enough is getting to the parts past the injury?" When it is not obvious that the answer is "yes", an injured extremity index or ankle-brachial index may be used to help decide whether further evaluation with computed tomography arteriography is needed. This uses a special scanner and a contrast substance that makes it possible to examine the vessels in finer detail than the human hand can feel or the human eye can see. Soft tissue damage can lead to rhabdomyolysis (a rapid breakdown of injured muscle that can overwhelm the kidneys) or may progress to compartment syndrome (in which pressure building up within a muscle compartment damages the nerves and vessels in that compartment). Bones are evaluated with plain film X-ray or computed tomography if deformity (misshapen bone), bruising, or joint laxity (looser or more flexible than usual) is observed. Neurologic evaluation involves testing the major nerve functions of the axillary, radial, and median nerves in the upper extremity as well as the femoral, sciatic, deep peroneal, and tibial nerves in the lower extremity. Depending on the extent of injury and involved structures, surgical treatment may be necessary, but many injuries are managed nonoperatively.
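As a rough illustration of the arithmetic behind such an index (a sketch, not clinical guidance): the ankle-brachial index is conventionally the ratio of ankle to brachial systolic blood pressure, and values well below about 0.9 are commonly taken to prompt further vascular imaging. The numbers and threshold below are illustrative only, and protocols vary.

```python
def ankle_brachial_index(ankle_systolic_mmHg: float, brachial_systolic_mmHg: float) -> float:
    """Ratio of ankle to brachial (arm) systolic blood pressure."""
    return ankle_systolic_mmHg / brachial_systolic_mmHg

# Illustrative values only; clinical thresholds and protocols vary.
abi = ankle_brachial_index(90.0, 120.0)
needs_further_imaging = abi < 0.9
print(f"ABI = {abi:.2f}, further imaging suggested: {needs_further_imaging}")
```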
Blunt pelvic trauma
The most common causes of blunt pelvic trauma are motor vehicle crashes and multiple-story falls, and thus pelvic injuries are commonly associated with additional traumatic injuries in other locations. In the pelvis specifically, the structures at risk include the pelvic bones, the proximal femur, major blood vessels such as the iliac arteries, the urinary tract, reproductive organs, and the rectum.
One of the primary concerns is the risk of pelvic fracture, which itself is associated with a myriad of complications including bleeding, damage to the urethra and bladder, and nerve damage. If pelvic trauma is suspected, emergency medical services personnel may place a pelvic binder on patients to stabilize the patient's pelvis and prevent further damage to these structures while patients are transported to a hospital. During the evaluation of trauma patients in an emergency department, the stability of the pelvis is typically assessed by the healthcare provider to determine whether a fracture may have occurred. Providers may then decide to order imaging such as an X-ray or CT scan to detect fractures; however, if there is concern for life-threatening bleeding, patients should receive an X-ray of the pelvis. Following initial treatment of the patient, fractures may need to be treated surgically if significant, while some minor fractures may heal without requiring surgery.
A life-threatening concern is hemorrhage, which may result from damage to the aorta, iliac arteries, or veins in the pelvis. The majority of bleeding due to pelvic trauma is due to injury to the veins. Fluid (often blood) may be detected in the pelvis via ultrasound during the FAST scan that is often performed following traumatic injuries. Should a patient appear hemodynamically unstable in the absence of obvious blood on the FAST scan, there may be concern for bleeding into the retroperitoneal space, known as retroperitoneal hematoma. Stopping the bleeding may require endovascular intervention or surgery, depending on the location and severity.
Blunt cardiac trauma
Blunt cardiac trauma, also known as Blunt Cardiac Injury (BCI), encompasses a spectrum of cardiac injuries resulting from blunt force trauma to the chest. While BCIs necessitate a substantial amount of force to occur because the heart is well-protected by the rib cage and sternum, the majority of patients are asymptomatic. Clinical presentations may range from minor, clinically insignificant changes to heartbeat or may progress to severe cardiac failure and death. Oftentimes, chest wall injuries are seen in conjunction with BCI, which confounds the presence of chest pain experienced by most patients. To evaluate the spectrum of cardiac injury, the American Association for the Surgery of Trauma (AAST) organ injury scale may be used to aid in determining the extent of the injury (see Evaluation and Diagnosis below). BCI may be broken down into pericardial injury, valvular injuries, coronary artery injuries, cardiac chamber rupture, and myocardial contusion.
Evaluation and diagnosis
In most settings, the initial evaluation and stabilization of traumatic injury follows the same general principles of identifying and treating immediately life-threatening injuries. In the US, the American College of Surgeons publishes the Advanced Trauma Life Support guidelines, which provide a step-by-step approach to the initial assessment, stabilization, diagnostic reasoning, and treatment of traumatic injuries, and which codify this general principle. The assessment typically begins by ensuring that the subject's airway is open and competent, that breathing is unlabored, and that circulation (i.e., pulses that can be felt) is present. This is sometimes described as the "A, B, C's" (Airway, Breathing, and Circulation) and is the first step in any resuscitation or triage. Then, the history of the accident or injury is amplified with any medical and dietary history (such as the timing of the last oral intake), from whatever sources might be available, such as family, friends, and previous treating physicians. This method is sometimes given the mnemonic "SAMPLE". The amount of time spent on diagnosis should be minimized and expedited by a combination of clinical assessment and appropriate use of technology, such as diagnostic peritoneal lavage (DPL) or bedside ultrasound examination (FAST), before proceeding to laparotomy if required. If time and the patient's stability permit, a CT examination may be carried out if available. Its advantages include superior definition of the injury, leading to grading of the injury and sometimes the confidence to avoid or postpone surgery. Its disadvantages include the time taken to acquire images, although this gets shorter with each generation of scanners, and the removal of the patient from the immediate view of the emergency or surgical staff. Many providers use the aid of an algorithm such as the ATLS guidelines to determine which images to obtain following the initial assessment. These algorithms take into account the mechanism of injury, physical examination, and the patient's vital signs to determine whether patients should have imaging or proceed directly to surgery.
In 2011, criteria were defined that might allow patients with blunt abdominal trauma to be discharged safely without further evaluation. The characteristics of such patients include:
absence of intoxication
no evidence of lowered blood pressure or raised pulse rate
no abdominal pain or tenderness
no blood in the urine.
To be considered low-risk, patients would need to meet all low-risk criteria.
Treatment
When blunt trauma is significant enough to require evaluation by a healthcare provider, treatment is typically aimed at treating life-threatening injuries, such as maintaining the patient's airway and preventing ongoing blood loss. Patients who have suffered blunt trauma and meet specific triage criteria have shown improved outcomes when they are cared for in a trauma center. The management of patients with blunt force trauma necessitates the collaboration of an interprofessional healthcare team, which may include, but is not limited to, a trauma surgeon, an emergency department physician, an anesthesiologist, and emergency and trauma nursing staff.
Treatment of abdominal trauma
In cases of blunt abdominal injury, the most frequent damage occurs in the small intestines, and in severe situations, this can result in small intestine perforation. Perforation of the small or large intestines is a serious concern due to its tremendous infectious potential. In these cases, it is essential to perform exploratory surgery to assess the internal damage, drain infected fluid in the abdomen, and clean the wound with saline. Prophylactic antibiotics are often necessary. In the case of multiple holes or significant damage to the blood supply of the intestines, the affected segment of tissue may need to be removed entirely.
Treatment of blunt cranial trauma
The treatment of blunt cranial trauma is dependent on the extent of the injury. A discussion between the patient and healthcare professionals will take place in order to carefully assess the patient's condition and determine the best approach for treatment. When considering the management of cranial trauma, it is crucial to ensure that the patient can breathe effectively. Effective breathing can be monitored using the patient's blood oxygen content via a pulse oximeter. The goal is to maintain greater than 90% oxygen saturation in the blood. If the patient cannot maintain appropriate blood oxygen levels on their own, mechanical ventilation may be indicated. Mechanical ventilation will add oxygen and remove carbon dioxide in the blood. It is also critically important to avoid low blood pressure in the setting of traumatic brain injuries. Studies have demonstrated improved outcomes in patients with systolic blood pressure greater than or equal to 120mmHg. Lastly, healthcare professionals should conduct consecutive neurological examinations to allow for early identification of elevated intracranial pressure and subsequent implementation of interventions to improve blood flow and reduce stress to the body. Of note, patients taking anticoagulant or antiplatelet therapy during the time of blunt cranial trauma should undergo rapid reversal of anticoagulating agents.
Treatment of blunt thoracic trauma
Nine out of ten patients with thoracic trauma can be treated effectively without a surgical operation. If surgery is indicated, there are numerous options available. A comprehensive discussion between the patient and the surgeon will take place to carefully evaluate the best approach, tailored to the patient's specific condition and injury. Conservative measures such as maintaining a clear and open airway, oxygen support, tube thoracostomy, and volume resuscitation are often given to manage blunt thoracic trauma. Oftentimes, pain control is the most basic and effective treatment approach because the presence of severe pain may lead to impairment of proper breathing, further exacerbating impaired lungs. Pain management in thoracic trauma patients improves the ability to breathe properly on their own, encourages the excretion of pulmonary secretions, and decreases the aggravation of inflammation and low oxygen levels in the blood. Nonsteroidal anti-inflammatory drugs, opioids, or regional pain management methods, such as local anesthetic, can be used for pain control.
Epidemiology
Worldwide, a significant cause of disability and death in people under the age of 35 is trauma, of which most are due to blunt trauma.
| Biology and health sciences | Injury | null |
3728109 | https://en.wikipedia.org/wiki/Variable%20%28mathematics%29 | Variable (mathematics) | In mathematics, a variable (from Latin variabilis, "changeable") is a symbol, typically a letter, that refers to an unspecified mathematical object. One says colloquially that the variable represents or denotes the object, and that any valid candidate for the object is the value of the variable. The values a variable can take are usually of the same kind, often numbers. More specifically, the values involved may form a set, such as the set of real numbers.
The object may not always exist, or it might be uncertain whether any valid candidate exists or not. For example, one could represent two integers by the variables x and y and require that the value of the square of x is twice the square of y, which in algebraic notation can be written x² = 2y². A definitive proof that this relationship is impossible to satisfy when x and y are restricted to integer numbers is not obvious, but it has been known since ancient times and has had a big influence on mathematics ever since.
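A minimal sketch of why the relationship cannot be satisfied, assuming the variable names x and y used above (the classical descent argument, equivalent to the irrationality of the square root of 2):

```latex
x^2 = 2y^2
  \;\Longrightarrow\; x^2 \text{ is even}
  \;\Longrightarrow\; x = 2x_1
  \;\Longrightarrow\; 4x_1^2 = 2y^2
  \;\Longrightarrow\; y^2 = 2x_1^2 ,
```

so any solution in positive integers would yield a strictly smaller one, and the descent cannot continue forever.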
Originally, the term "variable" was used primarily for the argument of a function, in which case its value can vary in the domain of the function. This is the motivation for the choice of the term. Also, variables are used for denoting values of functions, such as y in y = f(x).
A variable may represent an unspecified number that remains fixed during the resolution of a problem; in which case, it is often called a parameter. A variable may denote an unknown number that has to be determined; in which case, it is called an unknown; for example, in the quadratic equation ax² + bx + c = 0, the variables a, b, c are parameters, and x is the unknown.
Sometimes the same symbol can be used to denote both a variable and a constant, that is, a well defined mathematical object. For example, the Greek letter π generally represents the number π, but has also been used to denote a projection. Similarly, the letter e often denotes Euler's number, but has been used to denote an unassigned coefficient for quartic functions and higher degree polynomials. Even the symbol 1 has been used to denote an identity element of an arbitrary field. These two notions are used almost identically, therefore one usually must be told whether a given symbol denotes a variable or a constant.
Variables are often used for representing matrices, functions, their arguments, sets and their elements, vectors, spaces, etc.
In mathematical logic, a variable is a symbol that either represents an unspecified constant of the theory, or is being quantified over.
History
Early history
The earliest uses of an "unknown quantity" date back to at least the Ancient Egyptians with the Moscow Mathematical Papyrus (c. 1500 BC) which described problems with unknowns rhetorically, called the "Aha problems". The "Aha problems" involve finding unknown quantities (referred to as aha, "stack") when the sum of the quantity and part(s) of it is given (the Rhind Mathematical Papyrus also contains four problems of this type). For example, problem 19 asks one to calculate a quantity taken 1½ times and added to 4 to make 10. In modern mathematical notation: (3/2)x + 4 = 10. Around the same time in Mesopotamia, mathematics of the Old Babylonian period (c. 2000 BC – 1500 BC) was more advanced, also studying quadratic and cubic equations.
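Assuming the modern reading (3/2)x + 4 = 10 given above, where x is simply a stand-in for the Egyptian "aha", the unknown solves in two steps:

```latex
\tfrac{3}{2}\,x + 4 = 10
  \;\Longrightarrow\; \tfrac{3}{2}\,x = 6
  \;\Longrightarrow\; x = 4 .
```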
In works of ancient Greece such as Euclid's Elements (c. 300 BC), mathematics was described geometrically. For example, in Book II of the Elements, Euclid includes the proposition:
"If there be two straight lines, and one of them be cut into any number of segments whatever, the rectangle contained by the two straight lines is equal to the rectangles contained by the uncut straight line and each of the segments."
This corresponds to the algebraic identity a(b + c + d) = ab + ac + ad (distributivity), but is described entirely geometrically. Euclid, and other Greek geometers, also used single letters to refer to geometric points and shapes. This kind of algebra is now sometimes called Greek geometric algebra.
Diophantus of Alexandria pioneered a form of syncopated algebra in his Arithmetica (c. 200 AD), which introduced symbolic manipulation of expressions with unknowns and powers, but without modern symbols for relations (such as equality or inequality) or exponents. An unknown number was called . The square of was ; the cube was ; the fourth power was ; and the fifth power was . So for example, what would be written in modern notation as:
would be written in Diophantus's syncopated notation as:
In the 7th century AD, Brahmagupta used different colours to represent the unknowns in algebraic equations in the Brāhmasphuṭasiddhānta. One section of this book is called "Equations of Several Colours". Greek and other ancient mathematical advances were often followed by long periods of stagnation, and so there were few revolutions in notation, but this began to change by the early modern period.
Early modern period
At the end of the 16th century, François Viète introduced the idea of representing known and unknown numbers by letters, nowadays called variables, and the idea of computing with them as if they were numbers—in order to obtain the result by a simple replacement. Viète's convention was to use consonants for known values, and vowels for unknowns.
In 1637, René Descartes "invented the convention of representing unknowns in equations by x, y, and z, and knowns by a, b, and c". Contrary to Viète's convention, Descartes' is still commonly in use. The history of the letter x in mathematics was discussed in an 1887 Scientific American article.
Starting in the 1660s, Isaac Newton and Gottfried Wilhelm Leibniz independently developed the infinitesimal calculus, which essentially consists of studying how an infinitesimal variation of a time-varying quantity, called a fluent, induces a corresponding variation of another quantity which is a function of the first variable. Almost a century later, Leonhard Euler fixed the terminology of infinitesimal calculus, and introduced the notation f for a function, x for its variable and f(x) for its value. Until the end of the 19th century, the word variable referred almost exclusively to the arguments and the values of functions.
In the second half of the 19th century, it appeared that the foundation of infinitesimal calculus was not formalized enough to deal with apparent paradoxes such as a nowhere differentiable continuous function. To solve this problem, Karl Weierstrass introduced a new formalism consisting of replacing the intuitive notion of limit by a formal definition. The older notion of limit was "when the variable x varies and tends toward a, then f(x) tends toward L", without any accurate definition of "tends". Weierstrass replaced this sentence by the formula
(∀ε > 0) (∃δ > 0) (∀x) (|x − a| < δ ⇒ |f(x) − L| < ε),
in which none of the five variables ε, δ, x, a, L is considered as varying.
This static formulation led to the modern notion of variable, which is simply a symbol representing a mathematical object that either is unknown, or may be replaced by any element of a given set (e.g., the set of real numbers).
Notation
Variables are generally denoted by a single letter, most often from the Latin alphabet and less often from the Greek, which may be lowercase or capitalized. The letter may be followed by a subscript: a number (as in x₂), another variable (xᵢ), a word or abbreviation of a word as a label (xᵢₙ), or a mathematical expression (x₂ᵢ₊₁). Under the influence of computer science, some variable names in pure mathematics consist of several letters and digits. Following René Descartes (1596–1650), letters at the beginning of the alphabet such as a, b, c are commonly used for known values and parameters, and letters at the end of the alphabet such as x, y, z are commonly used for unknowns and variables of functions. In printed mathematics, the norm is to set variables and constants in an italic typeface.
For example, a general quadratic function is conventionally written as ax² + bx + c, where a, b and c are parameters (also called constants, because they are constant functions), while x is the variable of the function. A more explicit way to denote this function is x ↦ ax² + bx + c, which clarifies the function-argument status of x and the constant status of a, b and c. Since c occurs in a term that is a constant function of x, it is called the constant term.
Specific branches and applications of mathematics have specific naming conventions for variables. Variables with similar roles or meanings are often assigned consecutive letters or the same letter with different subscripts. For example, the three axes in 3D coordinate space are conventionally called x, y, and z. In physics, the names of variables are largely determined by the physical quantity they describe, but various naming conventions exist. A convention often followed in probability and statistics is to use X, Y, Z for the names of random variables, keeping x, y, z for variables representing corresponding better-defined values.
Conventional variable names
a, b, c, d (sometimes extended to e, f) for parameters or coefficients
a₀, a₁, a₂, ... for situations where distinct letters are inconvenient
aᵢ or uᵢ for the ith term of a sequence or the ith coefficient of a series
f, g, h for functions (as in f(x))
i, j, k (sometimes l or m) for varying integers or indices in an indexed family, or unit vectors
l and w for the length and width of a figure
l also for a line, or in number theory for a prime number not equal to p
n (with m as a second choice) for a fixed integer, such as a count of objects or the degree of a polynomial
p for a prime number or a probability
q for a prime power or a quotient
r for a radius, a remainder or a correlation coefficient
t for time
x, y, z for the three Cartesian coordinates of a point in Euclidean geometry or the corresponding axes
z for a complex number, or in statistics a normal random variable
α, β, γ, θ, φ for angle measures
ε (with δ as a second choice) for an arbitrarily small positive number
λ for an eigenvalue
Σ (capital sigma) for a sum, or σ (lowercase sigma) in statistics for the standard deviation
μ for a mean
Specific kinds of variables
It is common for variables to play different roles in the same mathematical formula, and names or qualifiers have been introduced to distinguish them. For example, the general cubic equation
ax³ + bx² + cx + d = 0
is interpreted as having five variables: four, a, b, c, d, which are taken to be given numbers, and the fifth variable, x, which is understood to be an unknown number. To distinguish them, the variable x is called an unknown, and the other variables are called parameters or coefficients, or sometimes constants, although this last terminology is incorrect for an equation, and should be reserved for the function defined by the left-hand side of this equation.
In the context of functions, the term variable refers commonly to the arguments of the functions. This is typically the case in sentences like "function of a real variable", "x is the variable of the function f", "f is a function of the variable x" (meaning that the argument of the function is referred to by the variable x).
In the same context, variables that are independent of x define constant functions and are therefore called constant. For example, a constant of integration is an arbitrary constant function that is added to a particular antiderivative to obtain the other antiderivatives. Because of the strong relationship between polynomials and polynomial functions, the term "constant" is often used to denote the coefficients of a polynomial, which are constant functions of the indeterminates.
Other specific names for variables are:
An unknown is a variable in an equation which has to be solved for.
An indeterminate is a symbol, commonly called variable, that appears in a polynomial or a formal power series. Formally speaking, an indeterminate is not a variable, but a constant in the polynomial ring or the ring of formal power series. However, because of the strong relationship between polynomials or power series and the functions that they define, many authors consider indeterminates as a special kind of variables.
A parameter is a quantity (usually a number) which is a part of the input of a problem, and remains constant during the whole solution of this problem. For example, in mechanics the mass and the size of a solid body are parameters for the study of its movement. In computer science, parameter has a different meaning and denotes an argument of a function.
Free variables and bound variables
A random variable is a kind of variable that is used in probability theory and its applications.
All these denominations of variables are of semantic nature, and the way of computing with them (syntax) is the same for all.
Dependent and independent variables
In calculus and its application to physics and other sciences, it is rather common to consider a variable, say y, whose possible values depend on the value of another variable, say x. In mathematical terms, the dependent variable y represents the value of a function of x. To simplify formulas, it is often useful to use the same symbol for the dependent variable y and the function mapping x onto y. For example, the state of a physical system depends on measurable quantities such as the pressure, the temperature, the spatial position, ..., and all these quantities vary when the system evolves, that is, they are functions of the time. In the formulas describing the system, these quantities are represented by variables which are dependent on the time, and thus considered implicitly as functions of the time.
Therefore, in a formula, a dependent variable is a variable that is implicitly a function of another (or several other) variables. An independent variable is a variable that is not dependent.
The property of a variable to be dependent or independent often depends on the point of view and is not intrinsic. For example, in the notation f(x, y, z), the three variables may be all independent and the notation represents a function of three variables. On the other hand, if y and z depend on x (are dependent variables) then the notation represents a function of the single independent variable x.
Examples
If one defines a function f from the real numbers to the real numbers by, for example,
f(x) = x² + 1,
then x is a variable standing for the argument of the function being defined, which can be any real number.
In the identity
∑_{i=1}^{n} i = n(n + 1)/2,
the variable i is a summation variable which designates in turn each of the integers 1, 2, ..., n (it is also called an index because its variation is over a discrete set of values) while n is a parameter (it does not vary within the formula).
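A worked instance for one value of the parameter, say n = 4, makes the different roles of i and n concrete:

```latex
\sum_{i=1}^{4} i = 1 + 2 + 3 + 4 = 10 = \frac{4 \cdot 5}{2} .
```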
In the theory of polynomials, a polynomial of degree 2 is generally denoted as ax² + bx + c, where a, b and c are called coefficients (they are assumed to be fixed, i.e., parameters of the problem considered) while x is called a variable. When studying this polynomial for its polynomial function, this x stands for the function argument. When studying the polynomial as an object in itself, x is taken to be an indeterminate, and would often be written with a capital letter instead to indicate this status.
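The distinction between an indeterminate and a function argument can also be seen in a computer algebra system. The following is a small sketch using SymPy; the particular numeric coefficients are arbitrary choices for illustration:

```python
from sympy import symbols, Poly, lambdify

a, b, c, x = symbols('a b c x')
expr = a*x**2 + b*x + c

# As an object in itself: x is an indeterminate, and a, b, c are the coefficients.
p = Poly(expr, x)
print(p.all_coeffs())   # [a, b, c]

# As a polynomial function: x stands for the argument once a, b, c are fixed.
f = lambdify(x, expr.subs({a: 1, b: -3, c: 2}))
print(f(2))             # 0, since 2 is a root of x**2 - 3*x + 2
```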
Example: the ideal gas law
Consider the equation describing the ideal gas law,
PV = N·k_B·T.
This equation would generally be interpreted to have four variables, and one constant. The constant is k_B, the Boltzmann constant. One of the variables, N, the number of particles, is a positive integer (and therefore a discrete variable), while the other three, P, V and T, for pressure, volume and temperature, are continuous variables.
One could rearrange this equation to obtain P as a function of the other variables,
P = N·k_B·T / V.
Then P, as a function of the other variables, is the dependent variable, while its arguments, V, N and T, are independent variables. One could approach this function more formally and think about its domain and range: in function notation, here P is a function P : (V, N, T) ↦ N·k_B·T / V.
However, in an experiment, in order to determine the dependence of pressure on a single one of the independent variables, it is necessary to fix all but one of the variables, say T. This gives a function
P(T) = N·k_B·T / V,
where now N and V are also regarded as constants. Mathematically, this constitutes a partial application of the earlier function P.
This illustrates how independent variables and constants are largely dependent on the point of view taken. One could even regard k_B as a variable to obtain a function P(k_B, T) = N·k_B·T / V.
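The same point, that which quantities count as constants is a matter of viewpoint, can be sketched in code. Here functools.partial fixes all arguments but one, mirroring the partial application described above; the particle number and volume are arbitrary illustrative values:

```python
from functools import partial

K_B = 1.380649e-23  # Boltzmann constant in J/K

def pressure(V, N, T, k=K_B):
    """Ideal gas law rearranged as P = N*k*T / V."""
    return N * k * T / V

# Treat T as the only independent variable: fix V and N as constants.
p_of_T = partial(pressure, 1.0e-3, 1.0e22)   # V = 1e-3 m^3, N = 1e22 particles
print(p_of_T(300.0))                         # pressure at T = 300 K

# One could equally regard k as a variable by passing it explicitly.
print(pressure(1.0e-3, 1.0e22, 300.0, k=1.4e-23))
```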
Moduli spaces
Considering constants and variables can lead to the concept of moduli spaces. For illustration, consider the equation for a parabola,
y = ax² + bx + c,
where a, b, c, x and y are all considered to be real. The set of points (x, y) in the 2D plane satisfying this equation trace out the graph of a parabola. Here, a, b and c are regarded as constants, which specify the parabola, while x and y are variables.
Then instead regarding a, b and c as variables, we observe that each 3-tuple (a, b, c) corresponds to a different parabola. That is, they specify coordinates on the 'space of parabolas': this is known as a moduli space of parabolas.
| Mathematics | Algebra | null |
3728323 | https://en.wikipedia.org/wiki/Type%20Ia%20supernova | Type Ia supernova | A Type Ia supernova (read: "type one-A") is a type of supernova that occurs in binary systems (two stars orbiting one another) in which one of the stars is a white dwarf. The other star can be anything from a giant star to an even smaller white dwarf.
Physically, carbon–oxygen white dwarfs with a low rate of rotation are limited to below 1.44 solar masses. Beyond this "critical mass", they reignite and in some cases trigger a supernova explosion; this critical mass is often referred to as the Chandrasekhar mass, but is marginally different from the absolute Chandrasekhar limit, where electron degeneracy pressure is unable to prevent catastrophic collapse. If a white dwarf gradually accretes mass from a binary companion, or merges with a second white dwarf, the general hypothesis is that a white dwarf's core will reach the ignition temperature for carbon fusion as it approaches the Chandrasekhar mass. Within a few seconds of initiation of nuclear fusion, a substantial fraction of the matter in the white dwarf undergoes a runaway reaction, releasing enough energy (on the order of 10⁴⁴ joules) to unbind the star in a supernova explosion.
The Type Ia category of supernova produces a fairly consistent peak luminosity because of the fixed critical mass at which a white dwarf will explode. Their consistent peak luminosity allows these explosions to be used as standard candles to measure the distance to their host galaxies: the visual magnitude of a type Ia supernova, as observed from Earth, indicates its distance from Earth.
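As a rough sketch of how the standard-candle argument works in practice, the distance follows from the standard distance-modulus relation m − M = 5 log₁₀(d / 10 pc). The apparent magnitude used below is an invented example value, and −19.3 is the typical peak absolute magnitude quoted later in this article:

```python
def distance_parsecs(apparent_mag: float, absolute_mag: float = -19.3) -> float:
    """Invert the distance modulus m - M = 5 * log10(d / 10 pc)."""
    return 10.0 ** ((apparent_mag - absolute_mag + 5.0) / 5.0)

# Example: a Type Ia supernova observed at peak with apparent magnitude m = 16.0
d_pc = distance_parsecs(16.0)
print(f"{d_pc:.2e} pc (roughly {d_pc * 3.26e-6:.0f} million light-years)")
```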
Consensus model
The Type Ia supernova is a subcategory in the Minkowski–Zwicky supernova classification scheme, which was devised by German-American astronomer Rudolph Minkowski and Swiss astronomer Fritz Zwicky. There are several means by which a supernova of this type can form, but they share a common underlying mechanism. Theoretical astronomers long believed the progenitor star for this type of supernova is a white dwarf, and empirical evidence for this was found in 2014 when a Type Ia supernova was observed in the galaxy Messier 82. When a slowly-rotating carbon–oxygen white dwarf accretes matter from a companion, it can exceed the Chandrasekhar limit of about 1.44 solar masses, beyond which it can no longer support its weight with electron degeneracy pressure. In the absence of a countervailing process, the white dwarf would collapse to form a neutron star, in an accretion-induced non-ejective process, as normally occurs in the case of a white dwarf that is primarily composed of magnesium, neon, and oxygen.
The current view among astronomers who model Type Ia supernova explosions, however, is that this limit is never actually attained and collapse is never initiated. Instead, the increase in pressure and density due to the increasing weight raises the temperature of the core, and as the white dwarf approaches about 99% of the limit, a period of convection ensues, lasting approximately 1,000 years. At some point in this simmering phase, a deflagration flame front is born, powered by carbon fusion. The details of the ignition are still unknown, including the location and number of points where the flame begins. Oxygen fusion is initiated shortly thereafter, but this fuel is not consumed as completely as carbon.
Once fusion begins, the temperature of the white dwarf increases. A main sequence star supported by thermal pressure can expand and cool which automatically regulates the increase in thermal energy. However, degeneracy pressure is independent of temperature; white dwarfs are unable to regulate temperature in the manner of normal stars, so they are vulnerable to runaway fusion reactions. The flare accelerates dramatically, in part due to the Rayleigh–Taylor instability and interactions with turbulence. It is still a matter of considerable debate whether this flare transforms into a supersonic detonation from a subsonic deflagration.
Regardless of the exact details of how the supernova ignites, it is generally accepted that a substantial fraction of the carbon and oxygen in the white dwarf fuses into heavier elements within a period of only a few seconds, with the accompanying release of energy increasing the internal temperature to billions of degrees. The energy released (1–2 × 10⁴⁴ J) is more than sufficient to unbind the star; that is, the individual particles making up the white dwarf gain enough kinetic energy to fly apart from each other. The star explodes violently and releases a shock wave in which matter is typically ejected at speeds of roughly 6% of the speed of light. The energy released in the explosion also causes an extreme increase in luminosity. The typical visual absolute magnitude of Type Ia supernovae is Mv = −19.3 (about 5 billion times brighter than the Sun), with little variation. The Type Ia supernova leaves no compact remnant, but the whole mass of the former white dwarf dissipates through space.
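The "about 5 billion" figure follows from the standard magnitude-luminosity relation, taking the Sun's visual absolute magnitude as roughly +4.8 (a back-of-the-envelope check with rounded values):

```latex
\frac{L_{\mathrm{SN}}}{L_{\odot}}
  = 10^{\,0.4\,(M_{V,\odot} - M_{V,\mathrm{SN}})}
  = 10^{\,0.4\,(4.8 + 19.3)}
  \approx 10^{9.6}
  \approx 4 \times 10^{9} .
```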
The theory of this type of supernova is similar to that of novae, in which a white dwarf accretes matter more slowly and does not approach the Chandrasekhar limit. In the case of a nova, the infalling matter causes a hydrogen fusion surface explosion that does not disrupt the star.
Type Ia supernovae differ from Type II supernovae, which are caused by the cataclysmic explosion of the outer layers of a massive star as its core collapses, powered by release of gravitational potential energy via neutrino emission.
Formation
Single degenerate progenitors
One model for the formation of this category of supernova is a close binary star system. The progenitor binary system consists of main sequence stars, with the primary possessing more mass than the secondary. Being greater in mass, the primary is the first of the pair to evolve onto the asymptotic giant branch, where the star's envelope expands considerably. If the two stars share a common envelope then the system can lose significant amounts of mass, reducing the angular momentum, orbital radius and period. After the primary has degenerated into a white dwarf, the secondary star later evolves into a red giant and the stage is set for mass accretion onto the primary. During this final shared-envelope phase, the two stars spiral in closer together as angular momentum is lost. The resulting orbit can have a period as brief as a few hours. If the accretion continues long enough, the white dwarf may eventually approach the Chandrasekhar limit.
The white dwarf companion could also accrete matter from other types of companions, including a subgiant or (if the orbit is sufficiently close) even a main sequence star. The actual evolutionary process during this accretion stage remains uncertain, as it can depend both on the rate of accretion and the transfer of angular momentum to the white dwarf companion.
It has been estimated that single degenerate progenitors account for no more than 20% of all Type Ia supernovae.
Double degenerate progenitors
A second possible mechanism for triggering a Type Ia supernova is the merger of two white dwarfs whose combined mass exceeds the Chandrasekhar limit. The resulting merger is called a super-Chandrasekhar mass white dwarf. In such a case, the total mass would not be constrained by the Chandrasekhar limit.
Collisions of solitary stars within the Milky Way are extremely rare, occurring far less frequently than the appearance of novae. Collisions occur with greater frequency in the dense core regions of globular clusters (cf. blue stragglers). A likely scenario is a collision with a binary star system, or between two binary systems containing white dwarfs. This collision can leave behind a close binary system of two white dwarfs. Their orbit decays and they merge through their shared envelope. A study based on SDSS spectra found 15 double systems of the 4,000 white dwarfs tested, implying a double white dwarf merger every 100 years in the Milky Way: this rate matches the number of Type Ia supernovae detected in our neighborhood.
A double degenerate scenario is one of several explanations proposed for the anomalously massive () progenitor of SN 2003fg. It is the only possible explanation for SNR 0509-67.5, as all possible models with only one white dwarf have been ruled out. It has also been strongly suggested for SN 1006, given that no companion star remnant has been found there. Observations made with NASA's Swift space telescope ruled out existing supergiant or giant companion stars of every Type Ia supernova studied. The supergiant companion's blown out outer shell should emit X-rays, but this glow was not detected by Swift's XRT (X-ray telescope) in the 53 closest supernova remnants. For 12 Type Ia supernovae observed within 10 days of the explosion, the satellite's UVOT (ultraviolet/optical telescope) showed no ultraviolet radiation originating from the heated companion star's surface hit by the supernova shock wave, meaning there were no red giants or larger stars orbiting those supernova progenitors. In the case of SN 2011fe, the companion star must have been smaller than the Sun, if it existed. The Chandra X-ray Observatory revealed that the X-ray radiation of five elliptical galaxies and the bulge of the Andromeda Galaxy is 30–50 times fainter than expected. X-ray radiation should be emitted by the accretion discs of Type Ia supernova progenitors. The missing radiation indicates that few white dwarfs possess accretion discs, ruling out the common, accretion-based model of Ia supernovae. Inward spiraling white dwarf pairs are strongly-inferred candidate sources of gravitational waves, although they have not been directly observed.
Double degenerate scenarios raise questions about the applicability of Type Ia supernovae as standard candles, since total mass of the two merging white dwarfs varies significantly, meaning luminosity also varies.
Type Iax
It has been proposed that a group of sub-luminous supernovae should be classified as Type Iax. This type of supernova may not always completely destroy the white dwarf progenitor, but instead leave behind a zombie star. Known examples of type Iax supernovae include: the historical supernova SN 1181, SN 1991T, SN 1991bg, SN 2002cx, and SN 2012Z.
The supernova SN 1181 is believed to be associated with the supernova remnant Pa 30 and its central star IRAS 00500+6713, which is the result of a merger of a CO white dwarf and an ONe white dwarf. This makes Pa 30 and IRAS 00500+6713 the only SN Iax remnant in the Milky Way.
Observation
Unlike the other types of supernovae, Type Ia supernovae generally occur in all types of galaxies, including ellipticals. They show no preference for regions of current stellar formation. As white dwarf stars form at the end of a star's main sequence evolutionary period, such a long-lived star system may have wandered far from the region where it originally formed. Thereafter a close binary system may spend another million years in the mass transfer stage (possibly forming persistent nova outbursts) before the conditions are ripe for a Type Ia supernova to occur.
A long-standing problem in astronomy has been the identification of supernova progenitors. Direct observation of a progenitor would provide useful constraints on supernova models. As of 2006, the search for such a progenitor had been ongoing for longer than a century. Observation of the supernova SN 2011fe has provided useful constraints. Previous observations with the Hubble Space Telescope did not show a star at the position of the event, thereby excluding a red giant as the source. The expanding plasma from the explosion was found to contain carbon and oxygen, making it likely the progenitor was a white dwarf primarily composed of these elements.
Similarly, observations of the nearby SN PTF 11kx, discovered January 16, 2011 (UT) by the Palomar Transient Factory (PTF), lead to the conclusion that this explosion arises from single-degenerate progenitor, with a red giant companion, thus suggesting there is no single progenitor path to SN Ia. Direct observations of the progenitor of PTF 11kx were reported in the August 24 edition of Science and support this conclusion, and also show that the progenitor star experienced periodic nova eruptions before the supernova – another surprising discovery.
However, later analysis revealed that the circumstellar material is too massive for the single-degenerate scenario, and fits better the core-degenerate scenario.
In May 2015, NASA reported that the Kepler space observatory observed KSN 2011b, a Type Ia supernova in the process of exploding. Details of the pre-nova moments may help scientists better judge the quality of Type Ia supernovae as standard candles, which is an important link in the argument for dark energy.
In July 2019, the Hubble Space Telescope took three images of a Type Ia supernova through a gravitational lens. This supernova appeared at three different times in the evolution of its brightness due to the differing path length of the light in the three images; at −24, 92, and 107 days from peak luminosity. A fourth image will appear in 2037 allowing observation of the entire luminosity cycle of the supernova.
Light curve
Type Ia supernovae have a characteristic light curve, their graph of luminosity as a function of time after the explosion. Near the time of maximal luminosity, the spectrum contains lines of intermediate-mass elements from oxygen to calcium; these are the main constituents of the outer layers of the star. Months after the explosion, when the outer layers have expanded to the point of transparency, the spectrum is dominated by light emitted by material near the core of the star, heavy elements synthesized during the explosion; most prominently isotopes close to the mass of iron (iron-peak elements). The radioactive decay of nickel-56 through cobalt-56 to iron-56 produces high-energy photons, which dominate the energy output of the ejecta at intermediate to late times.
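The shape of that late-time energy input can be sketched with the standard two-step radioactive decay (Bateman) solution. The half-lives below are the usual published values (about 6.1 days for nickel-56 and 77 days for cobalt-56), and the initial amount is an arbitrary normalization:

```python
import math

T_HALF_NI56 = 6.1    # days (nickel-56)
T_HALF_CO56 = 77.0   # days (cobalt-56)

LAM_NI = math.log(2) / T_HALF_NI56
LAM_CO = math.log(2) / T_HALF_CO56

def nuclei(t_days: float, n0: float = 1.0):
    """Ni-56 and Co-56 abundances at time t, for n0 initial Ni-56 nuclei."""
    n_ni = n0 * math.exp(-LAM_NI * t_days)
    # Bateman solution for the daughter nuclide in a two-step chain.
    n_co = n0 * LAM_NI / (LAM_CO - LAM_NI) * (
        math.exp(-LAM_NI * t_days) - math.exp(-LAM_CO * t_days))
    return n_ni, n_co

for t in (10, 50, 100, 200):
    n_ni, n_co = nuclei(t)
    print(f"t = {t:3d} d   Ni-56: {n_ni:.3f}   Co-56: {n_co:.3f}")
```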
The use of Type Ia supernovae to measure precise distances was pioneered by a collaboration of Chilean and US astronomers, the Calán/Tololo Supernova Survey. In a series of papers in the 1990s the survey showed that while Type Ia supernovae do not all reach the same peak luminosity, a single parameter measured from the light curve can be used to correct unreddened Type Ia supernovae to standard candle values. The original correction to standard candle value is known as the Phillips relationship
and was shown by this group to be able to measure relative distances to 7% accuracy. The cause of this uniformity in peak brightness is related to the amount of nickel-56 produced in white dwarfs presumably exploding near the Chandrasekhar limit.
The similarity in the absolute luminosity profiles of nearly all known Type Ia supernovae has led to their use as a secondary standard candle in extragalactic astronomy.
Improved calibrations of the Cepheid variable distance scale and direct geometric distance measurements to NGC 4258 from the dynamics of maser emission
when combined with the Hubble diagram of the Type Ia supernova distances have led to an improved value of the Hubble constant.
In 1998, observations of distant Type Ia supernovae indicated the unexpected result that the universe seems to undergo an accelerating expansion.
Three members from two teams were subsequently awarded Nobel Prizes for this discovery.
Subtypes
There is significant diversity within the class of Type Ia supernovae. Reflecting this, a plethora of sub-classes have been identified. Two prominent and well-studied examples include 1991T-likes, an overluminous subclass that exhibits particularly strong iron absorption lines and abnormally small silicon features, and 1991bg-likes, an exceptionally dim subclass characterized by strong early titanium absorption features and rapid photometric and spectral evolution. Despite their abnormal luminosities, members of both peculiar groups can be standardized by use of the Phillips relation, defined at blue wavelengths, to determine distance.
| Physical sciences | Stellar astronomy | Astronomy |
6647231 | https://en.wikipedia.org/wiki/Sorghum | Sorghum | Sorghum bicolor, commonly called sorghum () and also known as great millet, broomcorn, guinea corn, durra, imphee, jowar, or milo, is a species in the grass genus Sorghum cultivated for its grain. The grain is used as food by humans, while the plant is used for animal feed and ethanol production. Sorghum originated in Africa, and is now cultivated widely in tropical and subtropical regions.
Sorghum is the world's fifth-most important cereal crop after rice, wheat, maize, and barley. Sorghum is typically an annual, but some cultivars are perennial. It grows in clumps that may reach over high. The grain is small, in diameter. Sweet sorghums are cultivars primarily grown for forage, syrup production, and ethanol. They are taller than those grown for grain.
Description
Sorghum is a large stout grass that grows up to tall. It has large bushy flowerheads or panicles that provide an edible starchy grain with up to 3,000 seeds in each flowerhead. It grows in warm climates worldwide for food and forage. Sorghum is native to Africa with many cultivated forms. Most production uses annual cultivars, but some wild species of Sorghum are perennial, which may enable the Land Institute to develop a perennial cultivar for "repeated, sufficient grain harvests without resowing."
Evolution
Phylogeny
Sorghum is closely related to maize and the millets within the PACMAD clade of grasses, and more distantly to the cereals of the BOP clade such as wheat and barley.
History
Domestication
S. bicolor was domesticated from its wild ancestor more than 5,000 years ago in Eastern Sudan in the area of the Rivers Atbara and Gash. It has been found at an archaeological site near Kassala in eastern Sudan, dating from 3500 to 3000 BC, and is associated with the neolithic Butana Group culture. Sorghum bread from graves in Predynastic Egypt, some 5,100 years ago, is displayed in the Egyptian Museum, Turin, Italy.
The first race to be domesticated was bicolor; it had tight husks that had to be removed forcibly. Around 4,000 years ago, this spread to the Indian subcontinent; around 3,000 years ago it reached West Africa. Four other races evolved through cultivation to have larger grains and to become free-threshing, making harvests easier and more productive. These were caudatum in the Sahel; durra, most likely in India; guinea in West Africa (later reaching India); and, from that race, margaritiferum, which gave rise to the varieties of Southern Africa.
Spread
In the Middle Ages, the Arab Agricultural Revolution spread sorghum and other crops from Africa and Asia across the Arab world as far as Al-Andalus in Spain. Sorghum remained the staple food of the medieval kingdom of Alodia and most Sub-Saharan cultures prior to European colonialism.
Tall varieties of sorghum with a high sugar content are called sweet sorghum; these are useful for producing a sugar-rich syrup and as forage. Sweet sorghum was important to the sugar trade in the 19th century. The price of sugar was rising because of decreased production in the British West Indies and more demand for confectionery and fruit preserves, and the United States was actively searching for a sugar plant that could be produced in northern states. The "Chinese sugar-cane", sweet sorghum, was viewed as a plant that would be productive in the West Indies.
The name sorghum derives from Italian sorgo, which in turn most likely comes from 12th century Medieval Latin surgum or suricum. This in turn may be from Latin syricum, meaning "[grass] of Syria".
Cultivation
Agronomy
Most varieties of sorghum are drought- and heat-tolerant, nitrogen-efficient, and are grown particularly in arid and semi-arid regions where the grain is one of the staples for poor and rural people. These varieties provide forage in many tropical regions. S. bicolor is a food crop in Africa, Central America, and South Asia, and is the fifth most common cereal crop grown in the world. It is most often grown without application of fertilizers or other inputs by small-holder farmers in developing countries. They benefit from sorghum's ability to compete effectively with weeds, especially when it is planted in narrow rows. Sorghum actively suppresses weeds by producing sorgoleone, an alkylresorcinol.
Sorghum grows in a wide range of temperatures. It can tolerate high altitude and toxic soils, and can recover growth after some drought. Optimum growth temperature range is , and the growing season lasts for around 115–140 days. It can grow on a wide range of soils, such as heavy clay to sandy soils with the pH tolerance ranging from 5.0 to 8.5. It requires an arable field that has been left fallow for at least two years or where crop rotation with legumes has taken place in the previous year. Diversified 2- or 4-year crop rotation can improve sorghum yield, additionally making it more resilient to inconsistent growth conditions. In terms of nutrient requirements, sorghum is comparable to other cereal grain crops with nitrogen, phosphorus, and potassium required for growth.
The International Crops Research Institute for the Semi-Arid Tropics has improved sorghum using traditional genetic improvement and integrated genetic and natural resources management practices. Some 194 improved cultivars are now planted worldwide. In India, increases in sorghum productivity resulting from improved cultivars have freed up of land, enabling farmers to diversify into high-income cash crops and boost their livelihoods. Sorghum is used primarily as poultry feed, and secondarily as cattle feed and in brewing applications.
Pests and diseases
Insect damage is a major threat to sorghum plants. Over 150 species damage crop plants at different stages of development, resulting in significant biomass loss. Stored sorghum grain is attacked by other insect pests such as the lesser grain borer beetle.
Sorghum is a host of the parasitic plant Striga hermonthica (purple witchweed), which can reduce production.
Sorghum is subject to a variety of plant pathogens. The fungus Colletotrichum sublineolum causes anthracnose.
The toxic ergot fungus parasitises the grain, risking harm to humans and livestock.
Sorghum produces chitinases as defensive compounds against fungal diseases. Transgenesis of additional chitinases increases the crop's disease resistance.
Genetics and genomics
The genome of S. bicolor was sequenced between 2005 and 2007. It is generally considered diploid and contains 20 chromosomes; however, there is evidence to suggest a tetraploid origin for S. bicolor. The genome size is approximately 800 Mbp.
Paterson et al., 2009 provides a genome assembly of 739 megabases. The most commonly used genome database is maintained by Luo et al., 2016. A gene expression atlas covering 27,577 genes is available from Shakoor et al., 2014. For molecular breeding (or other purposes), an SNP array, a 3K SNP Infinium from Illumina, Inc., was created by Bekele et al., 2013.
Agrobacterium transformation can be used on sorghum, as shown in a 2018 report of such a transformation system. A 2013 study developed and validated an SNP array for molecular breeding.
Production
In 2021, world production of sorghum was 61 million tonnes, led by the United States with 19% of the total (table). India, Ethiopia, and Mexico were the largest secondary producers.
International trade
In 2013, China began purchasing American sorghum as a complementary livestock feed to its domestically grown maize. It imported around $1 billion worth per year until April 2018, when it imposed retaliatory tariffs as part of a trade war. By 2020, the tariffs had been waived, and trade volumes increased before declining again as China began buying sorghum from other countries. As of 2020, China is the world's largest sorghum importer, importing more than all other countries combined. Mexico also accounts for 7% of global sorghum production.
Nutrition
The grain is edible and nutritious. It can be eaten raw when young and milky, but has to be boiled or ground into flour when mature.
Sorghum grain is 72% carbohydrates including 7% dietary fiber, 11% protein, 3% fat, and 12% water (table). In a reference amount of , sorghum grain supplies 79 calories and rich contents (20% or more of the Daily Value, DV) of several B vitamins and dietary minerals (table).
In the early stages of plant growth, some sorghum species may contain levels of hydrogen cyanide, hordenine, and nitrates lethal to grazing animals. Plants stressed by drought or heat can also contain toxic levels of cyanide and nitrates at later stages in growth.
Use
Food and drink
Sorghum is widely used for food and animal fodder. It is also used to make alcoholic beverages. It can be made into couscous, porridge, or flatbreads such as Indian Jōḷada roṭṭi or tortillas; and it can be burst in hot oil to make a popcorn, smaller than that of maize. Since it does not contain gluten, it can be used in gluten-free diets.
In South Africa, characteristically sour malwa beer is made from sorghum or millet. The process involves souring the mashed grain with lactic acid bacteria, followed by fermentation with the wild yeasts present on the grain.
In China and Taiwan, sorghum is one of the main materials of Kaoliang liquor, a type of the colourless distilled alcoholic drink Baijiu.
In countries including the US, the stalks of sweet sorghum varieties are crushed in a cane juicer to extract the sweet molasses-like juice. The juice is sold as syrup, and used as a feedstock to make biofuel.
Biofuel
Sorghum can be used to produce fuel ethanol as an alternative to maize. The energy ratio for the production of ethanol is similar to that of sugarcane, and much higher than that of maize. Extracted carbohydrates can readily be fermented into ethanol because of their simple sugar structure. Residuals contain enough energy to power the ethanol processing facilities used to produce the fuel. As of 2018, production costs (including price of produce, transport and processing costs) are competitive with maize, while sorghum has a lower nitrogen fertilizer requirement than maize.
To turn it into fuel ethanol, sorghum juice is concentrated into syrup for long-term storage, then fermented in a batch process.
Other uses
In Nigeria, the pulverized red leaf-sheaths of sorghum have been used to dye leather, while in Algeria, sorghum has been used to dye wool.
In India, the panicle stalks are used as bristles for brooms.
Sorghum seeds and bagasse have the potential to produce lactic acid via fermentation which can be used to make polylactic acid, a biodegradable thermoplastic resin.
In human culture
In Australia, sorghum is personified as a spirit among the Dagoman people of Northern Territory, as well as being used for food; the local species are S. intrans and S. plumosum.
In Korea, the origin tale "Brother and sister who became the Sun and Moon" is also called "The reason sorghum is red". In the tale, a tiger chasing a brother and sister follows them up a rotten rope as they climb into the sky and become the sun and moon. The rope breaks, and the tiger falls to its death, impaling itself on a sorghum stalk, which turns red with its blood.
In Northeastern Italy in the early modern period, sticks of sorghum were used by Benandanti visionaries of the Friuli district to fight off witches who were thought to threaten crops and people.
| Biology and health sciences | Poales | null |
2766531 | https://en.wikipedia.org/wiki/Continuous%20distillation | Continuous distillation | Continuous distillation, a form of distillation, is an ongoing separation in which a mixture is continuously (without interruption) fed into the process and separated fractions are removed continuously as output streams. Distillation is the separation or partial separation of a liquid feed mixture into components or fractions by selective boiling (or evaporation) and condensation. The process produces at least two output fractions. These fractions include at least one volatile distillate fraction, which has boiled and been separately captured as a vapor condensed to a liquid, and practically always a bottoms (or residuum) fraction, which is the least volatile residue that has not been separately captured as a condensed vapor.
An alternative to continuous distillation is batch distillation, where the mixture is added to the unit at the start of the distillation, distillate fractions are taken out sequentially in time (one after another) during the distillation, and the remaining bottoms fraction is removed at the end. Because each of the distillate fractions are taken out at different times, only one distillate exit point (location) is needed for a batch distillation and the distillate can just be switched to a different receiver, a fraction-collecting container. Batch distillation is often used when smaller quantities are distilled. In a continuous distillation, each of the fraction streams is taken simultaneously throughout operation; therefore, a separate exit point is needed for each fraction. In practice when there are multiple distillate fractions, the distillate exit points are located at different heights on a fractionating column. The bottoms fraction can be taken from the bottom of the distillation column or unit, but is often taken from a reboiler connected to the bottom of the column.
Each fraction may contain one or more components (types of chemical compounds). When distilling crude oil or a similar feedstock, each fraction contains many components of similar volatility and other properties. Although it is possible to run a small-scale or laboratory continuous distillation, most often continuous distillation is used in a large-scale industrial process.
Industrial application
Distillation is one of the unit operations of chemical engineering and food engineering. Continuous distillation is used widely in the chemical process industries where large quantities of liquids have to be distilled. Such industries are the natural gas processing, petrochemical production, coal tar processing, liquor production, liquified air separation, hydrocarbon solvents production, cannabinoid separation and similar industries, but it finds its widest application in petroleum refineries. In such refineries, the crude oil feedstock is a very complex multicomponent mixture that must be separated and yields of pure chemical compounds are not expected, only groups of compounds within a relatively small range of boiling points, which are called fractions. These fractions are the origin of the term fractional distillation or fractionation. It is often not worthwhile separating the components in these fractions any further based on product requirements and economics.
Industrial distillation is typically performed in large, vertical cylindrical columns (as shown in images 1 and 2) known as "distillation towers" or "distillation columns" with diameters ranging from about 65 centimeters to 11 meters and heights ranging from about 6 meters to 60 meters or more.
Principle
The principle for continuous distillation is the same as for normal distillation: when a liquid mixture is heated so that it boils, the composition of the vapor above the liquid differs from the liquid composition. If this vapor is then separated and condensed into a liquid, it becomes richer in the lower boiling point component(s) of the original mixture.
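For an ideal binary mixture this enrichment can be written with a constant relative volatility α: the equilibrium vapor mole fraction of the lighter component is y = αx / (1 + (α − 1)x), where x is its mole fraction in the liquid. The sketch below uses an assumed α of 2.5, an illustrative value for a fairly easy separation.

```python
# Sketch of single-stage vapor enrichment for an ideal binary mixture.
def vapor_fraction(x_light, alpha=2.5):
    """Vapor mole fraction of the lighter component at equilibrium."""
    return alpha * x_light / (1 + (alpha - 1) * x_light)

x = 0.40
print(f"liquid {x:.2f} -> vapor {vapor_fraction(x):.2f}")   # about 0.63
```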
This is what happens in a continuous distillation column. A mixture is heated up, and routed into the distillation column. On entering the column, the feed starts flowing down but part of it, the component(s) with lower boiling point(s), vaporizes and rises. However, as it rises, it cools and while part of it continues up as vapor, some of it (enriched in the less volatile component) begins to descend again.
Image 3 depicts a simple continuous fractional distillation tower for separating a feed stream into two fractions, an overhead distillate product and a bottoms product. The "lightest" products (those with the lowest boiling point or highest volatility) exit from the top of the columns and the "heaviest" products (the bottoms, those with the highest boiling point) exit from the bottom of the column. The overhead stream may be cooled and condensed using a water-cooled or air-cooled condenser. The bottoms reboiler may be a steam-heated or hot oil-heated heat exchanger, or even a gas or oil-fired furnace.
In a continuous distillation, the system is kept in a steady state or approximate steady state. Steady state means that quantities related to the process do not change as time passes during operation. Such constant quantities include feed input rate, output stream rates, heating and cooling rates, reflux ratio, and temperatures, pressures, and compositions at every point (location). Unless the process is disturbed due to changes in feed, heating, ambient temperature, or condensing, steady state is normally maintained. This is also the main attraction of continuous distillation, apart from the minimum amount of (easily instrumentable) surveillance; if the feed rate and feed composition are kept constant, product rate and quality are also constant. Even when a variation in conditions occurs, modern process control methods are commonly able to gradually return the continuous process to another steady state again.
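Steady state also fixes the product flow rates through simple mass balances. For a binary column, the overall balance F = D + B and the light-component balance F·zF = D·xD + B·xB determine the distillate and bottoms rates once the feed and the two product compositions are specified; the numbers below are illustrative.

```python
# Steady-state overall and light-component balances for a binary column.
F, zF = 100.0, 0.40    # assumed feed rate (kmol/h) and light-component fraction
xD, xB = 0.95, 0.05    # assumed distillate and bottoms purities

D = F * (zF - xB) / (xD - xB)   # distillate rate from the two balances
B = F - D                       # bottoms rate
print(f"D = {D:.1f} kmol/h, B = {B:.1f} kmol/h")   # D ~ 38.9, B ~ 61.1
```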
Since a continuous distillation unit is fed constantly with a feed mixture and not filled all at once like a batch distillation, a continuous distillation unit does not need a sizable distillation pot, vessel, or reservoir for a batch fill. Instead, the mixture can be fed directly into the column, where the actual separation occurs. The height of the feed point along the column can vary on the situation and is designed so as to provide optimal results. See McCabe–Thiele method.
A continuous distillation is often a fractional distillation and can be a vacuum distillation or a steam distillation.
Design and operation
Design and operation of a distillation column depends on the feed and desired products. Given a simple, binary component feed, analytical methods such as the McCabe–Thiele method or the Fenske equation can be used to assist in the design. For a multi-component feed, computerized simulation models are used both for design and subsequently in operation of the column as well. Modeling is also used to optimize already erected columns for the distillation of mixtures other than those the distillation equipment was originally designed for.
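As an example of what such shortcut methods provide, the Fenske equation estimates the minimum number of equilibrium stages needed at total reflux for a binary split with constant relative volatility. The purities and relative volatility below are assumed, illustrative values.

```python
# Fenske equation sketch: minimum stages at total reflux (illustrative values).
import math

def fenske_min_stages(xD, xB, alpha):
    """Minimum equilibrium stages for a binary split at total reflux."""
    return math.log((xD / (1 - xD)) * ((1 - xB) / xB)) / math.log(alpha)

print(f"{fenske_min_stages(0.95, 0.05, 2.5):.1f} stages")   # about 6.4
```

An actual design would add stages to account for finite reflux and stage efficiencies below 100%.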
When a continuous distillation column is in operation, it has to be closely monitored for changes in feed composition, operating temperature and product composition. Many of these tasks are performed using advanced computer control equipment.
Column feed
The column can be fed in different ways. If the feed is from a source at a pressure higher than the distillation column pressure, it is simply piped into the column. Otherwise, the feed is pumped or compressed into the column. The feed may be a superheated vapor, a saturated vapor, a partially vaporized liquid-vapor mixture, a saturated liquid (i.e., liquid at its boiling point at the column's pressure), or a sub-cooled liquid. If the feed is a liquid at a much higher pressure than the column pressure and flows through a pressure let-down valve just ahead of the column, it will immediately expand and undergo a partial flash vaporization resulting in a liquid-vapor mixture as it enters the distillation column.
Improving separation
Although small size units, mostly made of glass, can be used in laboratories, industrial units are large, vertical, steel vessels (see images 1 and 2) known as "distillation towers" or "distillation columns". To improve the separation, the tower is normally provided inside with horizontal plates or trays as shown in image 5, or the column is packed with a packing material. To provide the heat required for the vaporization involved in distillation and also to compensate for heat loss, heat is most often added to the bottom of the column by a reboiler, and the purity of the top product can be improved by recycling some of the externally condensed top product liquid as reflux. Depending on their purpose, distillation columns may have liquid outlets at intervals up the length of the column as shown in image 4.
Reflux
Large-scale industrial fractionation towers use reflux to achieve more efficient separation of products. Reflux refers to the portion of the condensed overhead liquid product from a distillation tower that is returned to the upper part of the tower as shown in images 3 and 4. Inside the tower, the downflowing reflux liquid provides cooling and partial condensation of the upflowing vapors, thereby increasing the efficacy of the distillation tower. The more reflux that is provided, the better is the tower's separation of the lower boiling from the higher boiling components of the feed. A balance of heating with a reboiler at the bottom of a column and cooling by condensed reflux at the top of the column maintains a temperature gradient (or gradual temperature difference) along the height of the column to provide good conditions for fractionating the feed mixture. Reflux flows at the middle of the tower are called pumparounds.
Changing the reflux (in combination with changes in feed and product withdrawal) can also be used to improve the separation properties of a continuous distillation column while in operation (in contrast to adding plates or trays, or changing the packing, which would, at a minimum, require quite significant downtime).
Plates or trays
Distillation towers (such as in images 3 and 4) use various vapor and liquid contacting methods to provide the required number of equilibrium stages. Such devices are commonly known as "plates" or "trays". Each of these plates or trays is at a different temperature and pressure. The stage at the tower bottom has the highest pressure and temperature. Progressing upwards in the tower, the pressure and temperature decreases for each succeeding stage. The vapor–liquid equilibrium for each feed component in the tower reacts in its unique way to the different pressure and temperature conditions at each of the stages. That means that each component establishes a different concentration in the vapor and liquid phases at each of the stages, and this results in the separation of the components. Some example trays are depicted in image 5. A more detailed, expanded image of two trays can be seen in the theoretical plate article. The reboiler often acts as an additional equilibrium stage.
If each physical tray or plate were 100% efficient, then the number of physical trays needed for a given separation would equal the number of equilibrium stages or theoretical plates. However, that is very seldom the case. Hence, a distillation column needs more plates than the required number of theoretical vapor–liquid equilibrium stages.
Packing
Another way of improving the separation in a distillation column is to use a packing material instead of trays. These offer the advantage of a lower pressure drop across the column (when compared to plates or trays), beneficial when operating under vacuum. If a distillation tower uses packing instead of trays, the number of necessary theoretical equilibrium stages is first determined and then the packing height equivalent to a theoretical equilibrium stage, known as the height equivalent to a theoretical plate (HETP), is also determined. The total packing height required is the number of theoretical stages multiplied by the HETP.
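A minimal sketch of that calculation, with assumed values for the stage count and the HETP:

```python
# Packed-bed height from theoretical stages and HETP (illustrative values).
n_theoretical_stages = 13   # assumed requirement from a stage calculation
hetp_m = 0.45               # assumed height equivalent to a theoretical plate, m
packed_height_m = n_theoretical_stages * hetp_m
print(f"required packing height ~ {packed_height_m:.1f} m")   # about 5.9 m
```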
This packing material can either be random dumped packing such as Raschig rings or structured sheet metal. Liquids tend to wet the surface of the packing and the vapors pass across this wetted surface, where mass transfer takes place. Unlike conventional tray distillation in which every tray represents a separate point of vapor–liquid equilibrium, the vapor–liquid equilibrium curve in a packed column is continuous. However, when modeling packed columns it is useful to compute a number of theoretical plates to denote the separation efficiency of the packed column with respect to more traditional trays. Differently shaped packings have different surface areas and void space between packings. Both of these factors affect packing performance.
Another factor in addition to the packing shape and surface area that affects the performance of random or structured packing is liquid and vapor distribution entering the packed bed. The number of theoretical stages required to make a given separation is calculated using a specific vapor to liquid ratio. If the liquid and vapor are not evenly distributed across the superficial tower area as it enters the packed bed, the liquid to vapor ratio will not be correct in the packed bed and the required separation will not be achieved. The packing will appear to not be working properly. The height equivalent to a theoretical plate (HETP) will be greater than expected. The problem is not the packing itself but the mal-distribution of the fluids entering the packed bed. Liquid mal-distribution is more frequently the problem than vapor. The design of the liquid distributors used to introduce the feed and reflux to a packed bed is critical to making the packing perform at maximum efficiency. Methods of evaluating the effectiveness of a liquid distributor can be found in references.
Overhead system arrangements
Images 4 and 5 assume an overhead stream that is totally condensed into a liquid product using water or air-cooling. However, in many cases, the tower overhead is not easily condensed totally and the reflux drum must include a vent gas outlet stream. In yet other cases, the overhead stream may also contain water vapor because either the feed stream contains some water or some steam is injected into the distillation tower (which is the case in the crude oil distillation towers in oil refineries). In those cases, if the distillate product is insoluble in water, the reflux drum may contain a condensed liquid distillate phase, a condensed water phase and a non-condensible gas phase, which makes it necessary that the reflux drum also have a water outlet stream.
Multicomponent distillation
Besides fractional distillation, which is mainly used for crude oil refining, multicomponent mixtures are usually processed to purify their individual components by means of a series of distillation columns, i.e. a distillation train.
Distillation train
A distillation train is a sequence of distillation columns arranged in series or in parallel whose aim is the purification of multicomponent mixtures.
Process intensifying alternatives
The dividing wall column is the most common process-intensifying unit related to distillation. It places the Petlyuk configuration within a single column shell, an arrangement that has been shown to be thermodynamically equivalent.
Examples
Continuous distillation of crude oil
Petroleum crude oils contain hundreds of different hydrocarbon compounds: paraffins, naphthenes and aromatics as well as organic sulfur compounds, organic nitrogen compounds and some oxygen-containing hydrocarbons such as phenols. Although crude oils generally do not contain olefins, they are formed in many of the processes used in a petroleum refinery.
The crude oil fractionator does not produce products having a single boiling point; rather, it produces fractions having boiling ranges. For example, the crude oil fractionator produces an overhead fraction called "naphtha" which becomes a gasoline component after it is further processed through a catalytic hydrodesulfurizer to remove sulfur and a catalytic reformer to reform its hydrocarbon molecules into more complex molecules with a higher octane rating value.
The naphtha cut, as that fraction is called, contains many different hydrocarbon compounds. Therefore, it has an initial boiling point of about 35 °C and a final boiling point of about 200 °C. Each cut produced in the fractionating columns has a different boiling range. At some distance below the overhead, the next cut is withdrawn from the side of the column and it is usually the jet fuel cut, also known as a kerosene cut. The boiling range of that cut is from an initial boiling point of about 150 °C to a final boiling point of about 270 °C, and it also contains many different hydrocarbons. The next cut further down the tower is the diesel oil cut with a boiling range from about 180 °C to about 315 °C. The boiling ranges between any cut and the next cut overlap because the distillation separations are not perfectly sharp. After these come the heavy fuel oil cuts and finally the bottoms product, with very wide boiling ranges. All these cuts are processed further in subsequent refining processes.
Continuous distillation of cannabis concentrates
A typical application for distilling cannabis concentrates is butane hash oil (BHO). Short path distillation is a popular method due to the short residence time which allows for minimal thermal stress to the concentrate. In other distillation methods such as circulation, falling film and column distillation the concentrate would be damaged from the long residence times and high temperatures that must be applied.
| Physical sciences | Phase separations | Chemistry |
2769743 | https://en.wikipedia.org/wiki/Dromornis | Dromornis | Dromornis is a genus of large to enormous prehistoric birds native to Australia during the Oligocene to Pliocene epochs. The species were flightless, possessing greatly reduced wing structures but with large legs, similar to the modern ostrich or emu. They were likely to have been predominantly, if not exclusively, herbivorous browsers. The male of the largest species, Dromornis stirtoni, is a contender for the tallest and heaviest bird, and possibly exhibited aggressive territorial behaviour. They belong to the family Dromornithidae, extinct flightless birds known as mihirungs.
Taxonomy
The genus was erected to separate a new species, Dromornis australis, from the previously described Dinornis (giant moas), another lineage of ancient large and flightless birds found in New Zealand that was earlier described by Richard Owen in 1843. A femur that was forwarded to England, probably a dromornithid and since lost, suggested an Australian genus, but Owen withheld publication for many years. The type specimen, another femur, was found in a well at Peak Downs, Queensland, and subsequently described by Owen in 1872. Owen's new taxon was published in a series on prehistoric birds, read before the Zoological Society of London then appearing in its Transactions.
The name of the genus is derived from Ancient Greek, dromos, meaning running or a race, and ornis, a bird.
The genus and family are referred to as mihirung, distinguishing these birds from the giant emus. 'Mihirung paringmal' is an Aboriginal word from the Tjapwuring people of Western Victoria and it means 'giant bird'.
The placement of these dromornithid species may be summarised as
Dromornithidae (8 extinct species in 4 genera)
Dromornis
Dromornis australis Owen, 1872
Dromornis murrayi Worthy et al., 2016
Dromornis planei (Bullockornis planei Rich, 1979)
Dromornis stirtoni Rich, 1979
Barawertornis
Ilbandornis
Genyornis
The Dromornis lineage is proposed to represent a monotypic succession; from earliest to latest these are D. murrayi, D. planei, D. stirtoni, and D. australis.
The dromornithid family are sometimes known by appellations such as Stirton's mihirung (D. stirtoni) to refer to each species. Nicknames describing the species as 'thunderbirds' etc. have appeared in reports of their discovery, later terms such as "demon ducks" refer to their relationship to the extant waterfowl of the galloanseres.
Description
Dromornis is a genus of large to gigantic flightless birds of the Dromornithidae family. Members of this family lived from the late Oligocene until less than 30,000 years ago. Although they looked like giant emus, Dromornis and its relatives are more closely related to the earliest waterfowl of the Anseriformes order or a basal galliform. Comparative studies using endocranial reconstructions of dromornithids, Ilbandornis and three Dromornis species, suggest that the head and bill of the Dromornis lineage became foreshortened.
The species resemble other large flightless birds, the palaeognaths, some of whose descendants are known as ostriches and their allies. Like those ratites, which also evolved alongside mammals, the diversity of species was very low, apparently single species that emerged in succession and increased in size.
Dromornis stirtoni is amongst the largest known birds, although Aepyornis maximus, a species of elephant bird from Madagascar, were likely just as heavy, if not heavier. The height of D. stirtoni would probably have met or exceeded the females of the tallest species of the genus Dinornis, the giant moa of New Zealand. (Some moa exhibited sexual dimorphism, with females tending to be larger than males.)
Species
D. australis
Dromornis australis fossils are found in Pliocene deposits of Australia. It was once considered the smallest species of the genus Dromornis, around three quarters the size of Dromornis stirtoni, until specimens of Dromornis murrayi were described in 2016.
Discovery
The fossil remains of a large femur were discovered at Peak Downs in Queensland, at a depth of around in a well shaft. The type locality was described as an assemblage of boulders and pebbles beneath around thirty feet of alluvial soil; the femur was located atop a boulder in the rock beds. The description of Dromornis australis by Richard Owen, best known for extensive work on the paleontology of Australian mammals, was the first of an extinct Australian avian species.
Owen had previously sought evidence of Dinornis in the palaeontological collections of early Australian excavations. A femur that he had noted in the appendix of Thomas Mitchell's explorations, found in a cave, did not allow him to confirm an alliance with any previously described species of large flightless birds. Owen withheld describing that specimen, now thought lost, until the type for this species emerged many years later. The new material had been found while digging a well at Peak Downs and forwarded to Owen via W. B. Clarke, a geologist employed by the state of New South Wales, with a remark by Gerard Krefft that placed it with the New Zealand moas of Dinornis. Richard Owen found affinities and distinctions in an osteological comparison to species of the extinct Dinornis and the extant Dromaius (the emu) and proposed that it represented a new genus.
Description
The species is known by the right femur, around twelve inches long, obtained at the Peak Downs site. The details of its deposition accompanied Owen's description, "The well was sunk through 30 feet of the black trappean alluvial soil common in that part of Australia, and then through 150 feet of drift pebbles and boulders, on one of which boulders ("at that depth," 150 feet?) rested a short, thick femur, so filled with mineral matter (calc spar and iron pyrites) as to give the internal structure more the appearance of a reptilian than an ornithic bone."
Owen notes the specimen was reported by W. B. Clarke, attributing it to Dinornis, in the Geological Magazine several years before.
The femur is similar in size to Ilbandornis woodburnei, another dromornithid species. Other osteological features of the specimens have been compared to Dromornis stirtoni, the gigantic "Stirton's thunderbird".
A comparative analysis that included this femur indicated morphological characters assignable to either Dromornis or a continuation of an Ilbandornis woodburnei lineage, allied to the more gracile species of the family, but these results were not considered necessarily characteristic of any dromornithid genus. A fragment of synsacrum found at the Canadian deep lead mine near Gulgong has been tentatively assigned to Dromornis; if it is referable to this species, it might represent the continuation of the lineage as a smaller species into the Pliocene.
D. murrayi
Dromornis murrayi was described in 2016 using specimens discovered amongst the Riversleigh fauna in Queensland, Australia. The period during which it existed was the Oligocene to early Miocene, making it the earliest known species of the genus Dromornis. It was also the smallest species of its genus. Dromornis murrayi was described from cranial and post-cranial material.
The type material is the partial remains of a cranium, which was obtained at a locality named Hiatus A Site in the Carl Creek Limestone Formation; this location is one of the numerous study sites at the Riversleigh World Heritage Area. The specimens were discovered by two of the collaborating authors, Michael Archer and Suzanne J. Hand, the head researchers of taxa at the celebrated Riversleigh site and its associated fauna.
Description
This species stood around high and weighed up to , a considerable size but smaller than its congeners; the later species Dromornis stirtoni is determined to have been up to . The fossil specimens used to describe Dromornis murrayi have been dated to 26 million years ago, being discovered at a 'shelf', a rich layer of fossilised bones, that included leg and cranial remains of the unknown species.
Wings were greatly reduced, approximately , and would not have been evident beneath the bird's plumage. The skull cavity held an exceptionally small brain, the description's leading author Trevor Worthy suggesting the comparison, "I mean, if a chicken was silly, these things were very much more silly."
Distribution
The fossil deposits of Dromornis murrayi at the Hiatus site of Riversleigh have been dated as early Miocene and another as late Oligocene to early Miocene. This was established using correlation with the evolutionary stage of vertebrate species known from other sites at Riversleigh. Hiatus site is limestone deposited in an aquatic setting, lacking indicators for methods such as radiometric dating. Another site where the species occurs is Cadbury's Kingdom, designated as Faunal Zone B which is also dated as early Miocene.
The temporal range of these finds is approximately 25 to 16 million years ago.
The only known occurrence of this species is amongst the Riversleigh fauna, the site is located in the northeastern region of the Australian continent.
D. planei
Dromornis planei, formerly placed in a separate genus Bullockornis, lived in the Middle Miocene, approximately 15 million years ago. It is known from specimens of the Bullock Creek fauna, fossils found in the Northern Territory of Australia. As large as an ostrich or emu, the species possessed a stocky build. A proposed common name, referring to its discoverer and locality, is Plane's bull bird. The site of its discovery was once semi-arid, containing low vegetation around seasonal wetlands and rivers.
The species was first described by Patricia Vickers-Rich in 1979, who assigned it to a new genus, Bullockornis. The generic epithet was derived partly from the Bullock Creek site and the Greek word for bird, ornis; the author proposed the common name bull bird for the genus. The type is a fossilised section of the right femur, with other material, vertebrae and a rib, also referred to the same species. The specific epithet honours the discoverer of the vertebrae fossils, Michael Plane, hence the proposed trivial name of "Plane's Bull Bird".
Plane had been the first to investigate the Bullock Creek site, details of which were published in a 1968 paper.
It was one of several species of mihirungs, the dromornithids, that share ancestry with ducks and geese. The nickname "Demon Duck of Doom" is a reference to the large bill and body of the species. Fossil specimens of this species and other mihirungs are common, but the example of a near complete skull discovered in the 1980s was an unusual find. The direct evidence of the beak structure was evaluated in debate over the diet and habits of dromornithids.
The former generic name Bullockornis is sometimes improperly translated as "ox-bird"; it instead refers to the type locality of the genus at Bullock Creek, Australia. In 2010, Nguyen and Boles first suggested that Bullockornis represents another species of Dromornis on the basis of many common traits observed in the cranial and postcranial skeleton of both taxa, a close relationship strongly supported by their phylogenetic analyses. Subsequent studies also agreed upon placing this species within the genus Dromornis.
Some paleontologists, including Peter Murray of the Central Australian Museum, believe that Bullockornis was related to geese and ducks. This, in addition to the bird's tremendous size and earlier misclassification as a carnivore, gave rise to its colourful nickname. The nickname may be somewhat inaccurate, however, as other studies have recovered dromornithids as more closely related to Galliformes.
The existence of only this species at the Bullock Creek Site, as with the late Miocene Alcoota local fauna, correlates to the lack of diversity in large ratites, such as the evolution of the ostriches in the presence of a diversity of mammals.
Description
Dromornis planei was a very large flightless bird, similar in height to an ostrich or emu but with a heavier build; the species is however exceeded in size by the largest of these "thunder birds" Dromornis stirtoni. Its bill was curved and deep, the overall size of the head and skull was remarkably large.
The species stood approximately 2.5 metres (8 ft 2 in) tall and may have weighed up to 250 kg (550 lb). Features of the skull, including a very large beak suited to shearing, have led some researchers to consider that the bird may have been carnivorous, but most currently agree that it was a herbivore. The bird's skull is larger than that of a small horse.
The species is presumed to have had greatly reduced wing structures, as with other flightless birds the sternum was not keeled. The exceptionally large legs of D. planei enabled it to move its great mass relatively quickly.
Habitat
A species known from the Bullock Creek fossil fauna in the Northern Territory, the habitat during the time of deposition was a seasonally wet floodplain and river. The flora probably consisted of sedges and shrubs favouring a semi-arid climate. The area was occupied by herbivores favoring shrubland, horned turtles, marsupial tapirs and diprotodontid species, but the fauna associated with this site were rarely the forest dwelling paleospecies of the period. Other mihirungs also occur in the Bullock Creek fauna, species of Ilbandornis. Dromornis planei remains are found with other large contemporaries, such as the diprotodont Neohelos, and the crocodiles Baru that preyed upon them as they came to the water's edge.
The diet of these birds is uncertain, although it has been determined that the bill was thin and had little bite force. Gastroliths are found with similar species from other regions, Genyornis, Ilbandornis and the near relation Dromornis stirtoni, suggesting a herbivorous diet like that of the other species it is found alongside, yet suggestions have been published that D. planei might have had the carnivorous abilities attributed to the terror birds.
D. stirtoni
Dromornis stirtoni, colloquially known as Stirton's mihirung or Stirton's thunderbird, was a large feathered bird that grew to heights of and weights in excess of 500 kg, and is a contender for the largest avian species to have ever existed. Patricia Vickers-Rich described the remains of the bird in 1979 from the Alcoota Fossil Beds in the Northern Territory of Australia. Large amounts of fragmentary material found at the Alcoota fossil site in Central Australia, the type location, are the only certain occurrence of the bird. Rich proposed the specific epithet for fellow palaeontologist Ruben A. Stirton, an American who undertook extensive research on Australian taxa.
Description
Dromornis stirtoni was a large feathered bird which grew to over in height. This height is thought to have exceeded that of the tallest species of the genus Dinornis, the giant moas of New Zealand, and the elephant birds of Madagascar. The species belongs to the Dromornithidae, a family of large flightless birds endemic to Australia. The weight of the animal is also thought to have been exceedingly large. Peter F. Murray and Patricia Vickers-Rich, in their work "Magnificent Mihirungs" (2004), used three different scientific methods to derive the approximate weight and size of D. stirtoni. This thorough analysis of the bones of D. stirtoni revealed considerable sexual dimorphism: a fully grown male could weigh between , whilst a female would likely weigh between . The disparity in robustness was interpreted by the researchers as evidence of the biology of the species, such as incubation by the female, pair bonding, parental care and aggression while nesting, and the courtship or display habits exhibited by extant waterfowl, the anseriforms. In comparison to the elephant birds of the family Aepyornithidae, this made D. stirtoni the heaviest of all known discoveries. D. stirtoni was compared by the authors to Aepyornis maximus, the largest of that family.
D. stirtoni was characterised by a deep lower jaw and a distinctly shaped quadrate bone (which connects the upper and lower jaws). The narrow, deep bill made up approximately two thirds of the skull. The front of this powerful jaw was used to cut, whilst the back of the jaw was used for crushing. Comparison of two partial crania with the near-complete cranium of Dromornis planei (Bullockornis) shows the head of this species to be about 25% larger. Reconstruction of overlapping remains of the rostrum has revealed its form and size; the lower mandible would have been around 0.5 metres long. The size and proportions of the head and its bill are comparable to those of mammals such as camels or horses.
The large bird had "stubby", greatly reduced wings, rendering it flightless. However, strong development of the bony crests and tuberosities where the wings were attached indicates that it could still flap its wings. The bird was also characterised by its large hind legs, which biomechanical studies confirm to have been muscular rather than slender, based on the size of the muscle attachments along the leg. Because of the muscularity of these legs, D. stirtoni is thought to have possibly been capable of running at great speeds, whereas birds such as the emu depend on the slenderness of their legs to reach higher speeds. D. stirtoni was also characterised by its large, hoof-like toes, which had convex nails rather than claws. As is typical of flightless birds, it did not have a keeled breastbone.
Two forms of unearthed specimens are considered to be due to strong sexual dimorphism, concluded in a 2016 morphometric analysis using landmark based and actual measurements which also supported earlier conclusions regarding the species enormous size. This histological technique has been applied to other large and extinct avian species, including investigation into the paleobiology of the elephant birds Aepyornithidae.
Osteohistological analysis of its femora, tibiotarsi, and tarsometatarsi has also revealed that D. stirtoni was extremely K-selected, likely requiring over a decade to reach its adult body size, after which skeletal maturity occurred and its growth rate slowed.
Habitat
At present the only recorded fossil discoveries of Dromornis stirtoni have been from the Alcoota Fossil Beds. This region is renowned for well-preserved vertebrate fossils from the Miocene epoch (24–5 million years ago). At this location, the fossil deposits are found in the Waite Formation, which consists of sandstones, limestones and siltstones. The various fossils found within this region suggest that they were laid down in episodic channels, characterised by a large series of interconnected lakes within a large basin.
The vegetation type of the region in that period was open woodland suited to its semi-arid climate, in which rainfall was seasonal. D. stirtoni is found amongst the depositions of the Alcoota and Ongeva Local Faunas, dated to the late Miocene and early Pliocene. Fragmentary remains are common at these sites, although little is assignable to an individual of the species. Some depositions contain fragments of around four individuals scattered over an area of one square metre. Other dromornithid species have been found alongside this species, Ilbandornis woodburnei and the tentatively placed Ilbandornis lawsoni, which resembled large but more gracile modern birds such as ostriches and emus.
The concentration of dromornithid species, and more generally, other fossils within this area is indicative of the phenomenon known as "waterhole-tethering", whereby animals would accumulate within the immediate area of water sources, many of which would then die. Whilst this is the only location that D. stirtoni have been discovered, discovery of other species within the Dromornithidae family suggests that they may have been distributed across Australia. Various Dromornithidae fossils have been found in Riversleigh (Queensland) and Bullocks Creek (Northern Territory), as well as tracks in Pioneer (Tasmania).
D. stirtoni probably existed in an assemblage of fauna that included other dromornithids and browsing marsupials as the apex herbivores. The Alcoota Local Fauna were deposited at the only known upper Miocene fossil beds of Central Australia. The early conceptions of a fearsome bird receives some support from the proposed behaviour of the larger males aggressively defending a preferred range against competitors, other males or herbivores, and predators.
Feeding and diet
It is widely accepted that Dromornis stirtoni was herbivorous. This has been deduced from various features of its anatomy. One is that the end of the bird's bill does not have a hook, and the beak is instead broad and blunt, typical of a herbivore. The bird also had hoof-like feet rather than talons, which are typically associated with carnivores or omnivores. Lastly, analysis of the amino acids within the egg shells of D. stirtoni suggests that the species was herbivorous. Despite this, there are various indicators that suggest the bird may have been carnivorous or omnivorous (Murray, 2004). The size and muscularity of the bird's skull and beak also suggest that it may not have been purely herbivorous, as no source of vegetable food in its environment would have required such a powerful beak (Vickers-Rich, 1979). In recognition of the varying opinions, it is widely accepted that whilst the large bird may have occasionally scavenged or eaten smaller prey, it was mostly herbivorous.
Extinction
It is proposed that various factors may have contributed to the extinction of Dromornis stirtoni. Palaeontologists Murray and Vickers-Rich suggested that its diet may have overlapped considerably with the diets of other large birds and animals, and that the resulting convergence in trophic morphology could have contributed to the large bird's extinction as it was out-competed for its food source. Alternative arguments propose that the birds' breeding patterns may have contributed. It is suggested that D. stirtoni lived for a relatively long time in groups of older birds; however, the few young that were produced took considerable time to mature. Consequently, breeding adults were replaced slowly, which left the species highly vulnerable if breeding adults were lost.
| Biology and health sciences | Prehistoric birds | Animals |
5053663 | https://en.wikipedia.org/wiki/Skin%20care | Skin care | Skin care or skincare is a range of practices that support skin integrity, enhance its appearance, and relieve skin conditions. They can include nutrition, avoidance of excessive sun exposure, and appropriate use of emollients. Practices that enhance appearance include the use of cosmetics, botulinum, exfoliation, fillers, laser resurfacing, microdermabrasion, peels, retinol therapy, and ultrasonic skin treatment. Skin care is a routine daily procedure in many settings, such as skin that is either too dry or too moist, and prevention of dermatitis and prevention of skin injuries.
Skin care is a part of the treatment of wound healing, radiation therapy and some medications.
Background
Skin care is at the interface of cosmetics and dermatology.
The US Federal Food, Drug, and Cosmetic Act defines cosmetics as products intended to cleanse or beautify (for instance, shampoos and lipstick). A separate category exists for medications, which are intended to diagnose, cure, mitigate, treat, or prevent disease, or to affect the structure or function of the body (for instance, sunscreens and acne creams), although some products, such as moisturizing sunscreens and anti-dandruff shampoos, are regulated within both categories.
Skin care differs from dermatology by its inclusion of non-physician professionals, such as estheticians and wound care nursing staff. Skin care includes modifications of individual behavior and of environmental and working conditions.
Skin care by age
Neonate
Guidelines for neonatal skin care have been developed. Nevertheless, the pediatric and dermatological communities have not reached a consensus on best cleansing practices, as good quality scientific evidence is scarce. Immersion in water seems superior to washing alone, and use of synthetic detergents or mild liquid baby cleansers seems comparable or superior to water alone.
Children
Dermatologists normally recommend that children wash their skin with a mild cleanser, use moisturizing lotion as needed, and wear sunscreen every day.
Adolescents
Adolescents may be influenced by social media marketing to buy expensive skin care products, which are often not appropriate for their age (e.g., "anti-ageing" serums, which are for middle-aged and elderly people).
Elderly
Skin ageing is associated with increased skin vulnerability, and the texture and colour of the skin can change over time. Although wrinkles occur naturally due to ageing, smoking can intensify their appearance. Sunspots, dryness, wrinkles, and melanomas can result from UV exposure over time, whether from the sun or from tanning beds. Exposure to UV can make skin less elastic. Skin problems including pruritus are common in the elderly but are often inadequately addressed. A literature review of studies assessing the maintenance of skin integrity in the elderly found most studies to have low levels of evidence, but the review concluded that skin cleansing with synthetic detergents or amphoteric surfactants induced less skin dryness than using soap and water. Moisturizers with humectants helped with skin dryness, and occlusive skin barrier products reduced skin injuries. When taking baths or showers, using warm water rather than hot water can help with dryness.
There is limited evidence that moisturizing soap bars, or combinations of water soaks, oil soaks, and lotion, are effective in maintaining the skin integrity of elderly people when compared to standard care.
Research
A systematic review examined the benefits and clinical efficacy of routine skin care activities, such as washing, bathing, and applying lotions, in acute and long-term care adult settings. The study led to a proposed 2-step program targeting adults with intact or preclinically damaged skin.
Sunscreen
Sun protection is an important aspect of skin care. Though the sun is beneficial in order for the human body to get its daily dose of vitamin D, unprotected excessive sunlight can cause extreme damage to the skin. Ultraviolet (UVA and UVB) radiation in the sun's rays can cause sunburn in varying degrees, early ageing and an increased risk of skin cancer. UV exposure can cause patches of uneven skin tone and dry out the skin. It can reduce skin's elasticity and encourage sagging and wrinkle formation.
Sunscreen can protect the skin from sun damage; sunscreen should be applied at least 20 minutes before exposure and should be re-applied every four hours. Sunscreen should be applied to all areas of the skin that will be exposed to sunlight, and at least a tablespoon (25 ml) should be applied to each limb, the face, chest, and back, to ensure thorough coverage. Many tinted moisturizers, foundations and primers now contain some form of SPF.
Sunscreens may come in the form of creams, gels or lotions; their SPF number indicates their effectiveness in protecting the skin from the sun's radiation. There are sunscreens available to suit every skin type; in particular, those with oily skin should choose non-comedogenic sunscreens; those with dry skins should choose sunscreens with moisturizers to help keep skin hydrated, and those with sensitive skin should choose unscented, hypoallergenic sunscreen and spot-test in an inconspicuous place (such as the inside of the elbow or behind the ear) to ensure that it does not irritate the skin.
Skin care by health concern
Acne
According to the American Academy of Dermatology, between 40 and 50 million Americans develop acne each year. Although many associate acne with adolescence, acne can occur at any age, with its causes including heredity, hormones, menstruation, food, and emotional stress.
Those with inflammatory acne should exfoliate with caution, as the procedure may make conditions worse, and should consult a dermatologist before treatment. Some anti-acne creams contain drying agents such as benzoyl peroxide (in concentrations of 2.5–10%).
Pressure sore
Pressure sores are injuries to the skin and underlying tissue as a result of prolonged pressure on the skin. Pressure sores are also known as bedsores or pressure ulcers.
Stoma
When cleaning the stoma area, plain warm water and a dry wipe should be used to clean gently around the stoma, patting rather than rubbing. Used wipes should be placed in a disposable bag, and hands should be washed afterwards.
Wound healing
Wound healing is a complex and fragile process in which the skin repairs itself after injury. It is susceptible to interruption or failure that creates non-healing chronic wounds.
Radiation
Radiation induces skin reactions in the treated area, particularly in the axilla, head and neck, perineum and skin fold regions. Formulations with moisturising, anti-inflammatory, anti-microbial and wound healing properties are often used, but no preferred approach or individual product has been identified as best practice. Soft silicone dressings that act as barriers to friction may be helpful. In breast cancer, calendula cream may reduce the severity of radiation effects on the skin. Deodorant use after completing radiation treatment has been controversial but is now recommended for practice.
EGFR side effects
Epidermal growth factor receptor (EGFR) inhibitors are medications used in cancer treatment. These medications commonly cause skin and nail problems, including rashes, dry skin and paronychia. Recommended preventive measures include intensive moisturizing with emollient ointments several times a day, avoidance of water-based creams and water soaks (although in certain circumstances white vinegar or potassium permanganate soaks may help), protection of the skin from excessive exposure to sunshine, use of soap substitutes which are less dehydrating for the skin than normal soaps, and shampoos that reduce the risk of scalp folliculitis. Treatment with topical antibiotic medication can be helpful.
Related products
Cosmeceuticals are topically applied, combination products that bring together cosmetics and "biologically active ingredients". Products which are similar in perceived benefits but ingested orally are known as nutricosmetics. According to the United States Food and Drug Administration (FDA), the Food, Drug, and Cosmetic Act "does not recognize any such category as 'cosmeceuticals.' A product can be a drug, a cosmetic, or a combination of both, but the term 'cosmeceutical' has no meaning under the law". Drugs are subject to an intensive review and approval process by FDA. Cosmetics, and these related products, although regulated, are not approved by FDA prior to sale.
Elaborate skin care routines are promoted on social media platforms such as TikTok. This has led to children and teens using harsh and inappropriate products, such as anti-aging products, which provide no benefit to young skin and may be harmful. It has also encouraged children and teens to wear sunscreen every day.
Procedures
Skin care procedures include use of botulinum toxin; exfoliation; fillers; laser medicine in cosmetic resurfacing, hair removal, vitiligo, port-wine stain and tattoo removal; photodynamic therapy; microdermabrasion; peels; and retinol therapy.
| Biology and health sciences | Hygiene and grooming: General | Health |
5054457 | https://en.wikipedia.org/wiki/European%20perch | European perch | The European perch (Perca fluviatilis), also known as the common perch, redfin perch, big-scaled redfin, English perch, Euro perch, Eurasian perch, Eurasian river perch, Hatch, poor man's rockfish or in Anglophone parts of Europe, simply the perch, is a predatory freshwater fish native to Europe and North Asia. It is the type species of the genus Perca.
The perch is a popular game fish for recreational anglers, and has been widely introduced beyond its native Eurasian habitats into Australia, New Zealand and South Africa. Known locally simply as "redfin", they have caused substantial damage to native fish populations in Australia and have been proclaimed a noxious species in New South Wales.
Taxonomy
The first scientific description of the river perch was made by Peter Artedi in 1730. He defined the basic morphological characters of this species after studying perch from Swedish lakes. Artedi described its features, counting the fin rays, scales and vertebrae of the typical perch.
In 1758, Carl Linnaeus named it Perca fluviatilis. His description was based on Artedi's research.
Because of their similar appearance and ability to cross-breed, the yellow perch (Perca flavescens) has sometimes been classified as a subspecies of the European perch, in which case its trinomial name would be Perca fluviatilis flavescens.
Description
European perch are greenish with red pelvic, anal and caudal fins. They have five to eight dark vertical bars on their sides. When the perch grows larger, a hump grows between its head and dorsal fin.
European perch can vary greatly in size between bodies of water. They can live for up to 22 years, and older perch are often much larger than average; the maximum recorded length is . The British record is , but they grow larger in mainland Europe than in Britain. As of May 2016, the official all tackle world record recognised by the International Game Fish Association (IGFA) stands at for a Finnish fish caught September 4, 2010. In January 2010 a perch with a weight of was caught in the river Meuse, Netherlands. Due to the low salinity levels of the Baltic Sea, especially around the Finnish archipelago and Bothnian Sea, many freshwater fish live and thrive there. Perch especially are in abundance and grow to a considerable size due to the diet of Baltic herring.
Distribution and habitat
The range of the European perch covers fresh water basins all over Europe, excluding the Iberian Peninsula. Their range is known to reach the Kolyma River in Siberia to the east. It is also common in some of the brackish waters of the Baltic Sea.
The European perch lives in slow-flowing rivers, deep lakes and ponds. It tends to avoid cold or fast-flowing waters, although some specimens penetrate waters of this type; they do not breed in this habitat. They are most abundant in relatively shallow lakes and in lakes with deep light penetration, and less abundant in deep lakes and those with low light penetration.
Introduction outside Europe
European perch has been widely introduced, with reported adverse ecological impact after introduction. In Australia, the species is implicated in the decline of the now-endangered native fish, the Macquarie perch.
Behaviour and reproduction
The European perch is carnivorous, with juveniles feeding on zooplankton, bottom invertebrate fauna and other perch fry, while adults feed on both invertebrates and fish, mainly sticklebacks, perch, roach and minnows. Perch start eating other fish when they become fingerlings at a size of around .
Male perch become sexually mature at between one and two years of age, females between two and four. In the Northern Hemisphere they spawn between February and July. Males reach spawning areas ahead of females, and court mates by chasing through underwater vegetation. During reproduction, the female lays a white ribbon of eggs up to one meter long, which is deposited on water plants or on the branches of trees or shrubs immersed in the water. There has been speculation, but only anecdotal evidence, that eggs stick to the legs of wading birds and are then transferred to other waters.
The eggs hatch after a period of 8 to 16 days. The larvae are long on hatching, and live in open water where they feed on plankton. Juveniles migrate to areas nearer the shore and bottom during their first summer.
Diseases and parasites
Cucullanus elegans is a species of parasitic nematode. It is an endoparasite of the European perch. Juvenile perch are commonly infected by Camallanus lacustris (Nematoda), Proteocephalus percae, Bothriocephalus claviceps, Glanitaenia osculata, Triaenophorus nodulosus (all Cestoda) and Acanthocephalus lucii (Acanthocephala).
Predators
The European perch is a frequent prey of many fish-eating predators such as the Western osprey (Pandion haliaetus), great cormorant (Phalacrocorax carbo) and common kingfisher (Alcedo atthis), and it is an important item in the diet of the globally threatened Dalmatian pelican (Pelecanus crispus). Other non-avian predators include the northern pike (Esox lucius) and the Eurasian otter (Lutra lutra).
Relationship with humans
Fishing
European perch is fished for food and as game. Its flesh is described as good eating, with a white, firm, flaky texture and well flavoured. According to FAO statistics, 28,920 tonnes were caught in 2013. The largest perch-fishing countries were Russia (15,242 tonnes), Finland (7,666 tonnes), Estonia (2,144 tonnes), Poland (1,121 tonnes) and Kazakhstan (1,103 tonnes).
Baits for perch include baitfishes (e.g. minnows, goldfish), weather loaches, pieces of raw squid or pieces of raw fish (mackerel, bluey, jack mackerel, sardine), or brandling, red, marsh, and lob worms, maggots, shrimp (Caridina, Neocaridina, Palaemon, Macrobrachium) and peeled crayfish tails. The tackles needed are fine but strong.
Artificial lures are also effective, particularly for medium-sized perch. It is possible to fly fish for perch using artificial flies tied for the purpose. Often, the flies required are "streamers" or bait-fish imitations and use flash, colour and movement to entice a take from the perch.
Perch in culture
The European perch is Finland's national fish.
It is also pictured in emblems of several European towns and municipalities, such as Bad Buchau, Gröningen and Schönberg, Plön.
The raw fish item in the game Factorio is a plush toy of the European perch.
| Biology and health sciences | Acanthomorpha | Animals |
530013 | https://en.wikipedia.org/wiki/Personal%20grooming | Personal grooming | Grooming (also called preening) is the art and practice of cleaning and maintaining parts of the body. It is a species-typical behavior.
In animals
Individual animals regularly clean themselves and put their fur, feathers or other skin coverings in good order. This activity is known as personal grooming, a form of hygiene. Extracting foreign objects such as insects, leaves, dirt, twigs and parasites is a form of grooming. Among animals, birds spend considerable time preening their feathers. This is done to remove ectoparasites, keep the feathers in good aerodynamic condition, and waterproof them. To do that, they use the preen oil secreted by the uropygial gland, the dust of down feathers, or other means such as dust-bathing or anting. During oil spills, animal conservationists that rescue penguins sometimes dress them in knitted sweaters to stop them from preening and thereby ingesting the mineral oil, which is poisonous. Monkeys may also pick out nits from their fur or scratch their rears to keep themselves clean. Cats are well known for their extensive grooming. Cats groom so often that they often produce hairballs from the fur they ingest. Many mammal species also groom their genitals after copulation.
Grooming as a social activity
Many social animals adapt preening and grooming behaviors for other social purposes such as bonding and the strengthening of social structures. Grooming plays a particularly important role in forming social bonds in many primate species, such as chacma baboons and wedge-capped capuchins.
Mutual grooming in human relationships
In humankind, mutual grooming relates closely to social grooming, which is defined as the process by which human beings fulfill one of their basic instincts, such as socializing, cooperating and learning from each other.
In research conducted by Holly Nelson (from the University of New Hampshire), individuals who chose to report on a romantic partner reported more mutual grooming than those who focused on other types of relationships. Hence, this study hypothesized that mutual grooming is related to relationship satisfaction, trust and previous experience of affection within the family. The researchers claim that even though humans do not groom each other with the same fervor that other species do, they are groomers par excellence. Therefore, human mutual grooming plays an important role in pair bonding.
In the same investigation, researchers found that individuals with more promiscuous attitudes and those who scored high on the anxiety sub-scale on an adult attachment style measure tend to groom their partners more frequently. These findings were also consistent with some of the functions of grooming: potential parental indicator, developing trust and courtship or flirtation.
A recent empirical study by Seinenu Thein-Lemelson (University of California, Berkeley) utilized an ethological approach to examine cross-cultural differences in human grooming as it pertains to caregiving behaviors. Naturalistic data was collected through video focal follows with children during routine activities and then coded for grooming behaviors. This cross-cultural comparison of urban families in Burma and the United States indicates that there are significant cross-cultural differences in rates of caregiver-to-child grooming. Burmese caregivers in the sample groomed children more often than caregivers in the United States. Additionally, children in the United States have short instances of concentrated grooming predominantly during daily activities that are structured explicitly around hygiene goals (bath time), in contrast to the Burmese child, whose grooming is distributed more evenly within and across daily activities. The Burmese parents maintained a constant vigilance with regard to risk of infection. The study is significant because it is the only study of human grooming to utilize naturalistic data.
Gallery
| Biology and health sciences | Hygiene and grooming: General | Health |
530115 | https://en.wikipedia.org/wiki/Husky | Husky | Husky is a general term for a dog used in the polar regions, primarily and specifically for work as sled dogs. It refers to a traditional northern type, notable for its cold-weather tolerance and overall hardiness. Modern racing huskies that maintain arctic breed traits (also known as Alaskan huskies) represent an ever-changing crossbreed of the fastest dogs.
Huskies have continued to be used in sled-dog racing, as well as expedition and trek style tour businesses, and as a means of essential transportation in rural communities. Huskies are also kept as pets, and groups work to find new pet homes for retired racing and adventure-trekking dogs.
Etymology
The term "husky" first came into usage in the mid to late 1700s. At this time, "Esquimaux" or "Eskimo" was a common term for pre-Columbian Arctic inhabitants of North America. Several dialectal permutations were in use including Uskee, Uskimay and Huskemaw. Thus, dogs used by Arctic people were the dogs of the Huskies, the Huskie's dogs, and eventually simply the husky dogs. Canadian and American settlers, not well versed on Russian geography, would later extend the word to Chukotka sled dogs imported from Russia, thus giving rise to the term Siberian husky.
History
Nearly all dogs' genetic closeness to the gray wolf is due to admixture. However, several Arctic breeds also show a genetic closeness with the now-extinct Taimyr wolf of North Asia due to admixture: the Siberian Husky and Greenland Dog (which are also historically associated with Arctic human populations) and to a lesser extent, the Shar Pei and Finnish Spitz. An admixture graph of the Greenland Dog indicates a best-fit of 3.5% shared material; however, an ancestry proportion ranging between 1.4% and 27.3% is consistent with the data and indicates admixture between the Taimyr wolf and the ancestors of these four high-latitude breeds.
This introgression could have provided early dogs living in high latitudes with phenotypic variation beneficial for adaption to a new and challenging environment, contributing significantly to the development of the husky. It also indicates that the ancestry of present-day dog breeds descends from more than one region.
Characteristics
Huskies are energetic and athletic. They are distinguished by their hardiness and cold-weather tolerance, in contrast to many modern sprint sled dogs derived from hound and pointer crossbreeds and purebred sprinting dogs which do not have or retain these qualities. Likewise, they are distinguished from laika, as they were not developed for the primary purpose of hunting game and prey animals.
Huskies typically have a thick double coat that may come in a variety of colors. The double coat generally protects huskies against harsh winters and, contrary to what most believe, they can survive in hotter climates. In hotter climates, huskies tend to shed their undercoat regularly to cool their bodies. In addition to shedding, huskies control their eating habits based on the season; in cooler climates, they tend to eat generous amounts, causing their digestion to generate heat, whilst in warmer climates, they eat less. Their eyes are typically pale blue, although they may also be brown, green, blue, yellow, or heterochromic. Huskies are more prone to some degree of uveitis than most other breeds.
Breeds
This is a list of dog breeds which contain "husky" in their name. To see a complete list of sled breeds, see Sled dog.
Alaskan husky
The most commonly used dog in dog sled racing, the Alaskan husky is a mongrel bred specifically for its performance as a sled dog. The modern Alaskan husky reflects 100 years or more of crossbreeding with English Pointers, German Shepherd Dogs, Salukis and other breeds to improve its performance. They typically weigh between and may have dense or sleek fur. Alaskan huskies bear little resemblance to the typical husky breeds they originated from, or to each other.
Labrador Husky
The Labrador Husky originated in the Canadian region of Labrador. The breed probably arrived in the area with the Inuit who came to Canada around 1300 AD. Despite the name, Labrador huskies are not related to the Labrador retriever, but in fact most closely related to the Canadian Eskimo Dog. There are estimated to be 50–60 Labrador huskies in the world.
Mackenzie River Husky
The term Mackenzie River husky describes several overlapping historical populations of Arctic and sub-Arctic sled dog-type dogs, none of which constituted a breed. Dogs from the Yukon Territory were crossed with large European breeds such as St. Bernards and Newfoundlands to create a powerful freighting dog capable of surviving harsh arctic conditions during the Klondike Gold Rush.
Sakhalin Husky
The Sakhalin Husky is a critically endangered landrace and sled laika associated with Sakhalin Island and adjacent areas. They are also known as the Karafuto Ken, Sakhalin Laika, or Gilyak Laika. While bred primarily as a sled dog, Sakhalin Huskies are also used for hunting bear and fishing. There are approximately 20 Sakhalin Huskies remaining on Sakhalin Island.
Siberian Husky
The Siberian Husky is smaller than the similar-appearing Alaskan Malamute. They are descendants of the Chukotka sled dogs bred and used by the native Chukchi people of Siberia, a people of Paleosiberian origin, around the year 2000 BC. Imported to Alaska in the early 1900s, they were used as working dogs and racing sled dogs in Nome, Alaska throughout the 1910s, often dominating the All-Alaska Sweepstakes. They later became widely bred by recreational mushers and show-dog fanciers in the U.S. and Canada as the Siberian Husky, after the popularity garnered from the 1925 serum run to Nome. Siberians stand 20–23.5 inches, weigh between 35 and 60 lb (35–50 lb for females, 45–60 lb for males), and have been selectively bred for both appearance and pulling ability. They are still used regularly today as sled dogs by competitive, recreational, and tour-guide mushers.
| Biology and health sciences | Dogs | Animals |
530525 | https://en.wikipedia.org/wiki/Hydrocortisone | Hydrocortisone | Hydrocortisone is the name for the hormone cortisol when supplied as a medication. It is a corticosteroid and works as an anti-inflammatory and by immune suppression. Uses include conditions such as adrenocortical insufficiency, adrenogenital syndrome, high blood calcium, thyroiditis, rheumatoid arthritis, dermatitis, asthma, and COPD. It is the treatment of choice for adrenocortical insufficiency. It can be given by mouth, topically, or by injection. Stopping treatment after long-term use should be done slowly.
Side effects may include mood changes, increased risk of infection, and edema (swelling). With long-term use, common side effects include osteoporosis, upset stomach, physical weakness, easy bruising, and candidiasis (yeast infections). It is unclear if it is safe for use during pregnancy.
Hydrocortisone was patented in 1936 and approved for medical use in 1941. It is on the World Health Organization's List of Essential Medicines. It is available as a generic medication. In 2022, it was the 202nd most commonly prescribed medication in the United States, with more than 2 million prescriptions.
Medical uses
Hydrocortisone is the pharmaceutical term for cortisol used in oral administration, intravenous injection, or topical application. It is used as an immunosuppressive drug, given by injection in the treatment of severe allergic reactions such as anaphylaxis and angioedema, in place of prednisolone in patients needing steroid treatment but unable to take oral medication, and perioperatively in patients on long-term steroid treatment to prevent an adrenal crisis. It may also be injected into inflamed joints resulting from diseases such as gout.
It may be used topically for allergic rashes, eczema, psoriasis, itching, and other inflammatory skin conditions. Topical hydrocortisone creams and ointments are available in most countries without prescription in strengths ranging from 0.05% to 2.5% (depending on local regulations) with stronger forms available by prescription only.
It may also be used rectally in suppositories to relieve the swelling, itch, and irritation in hemorrhoids.
It may be used as an acetate form (hydrocortisone acetate), which has slightly different pharmacokinetics and pharmacodynamics.
Pharmacology
Pharmacodynamics
Hydrocortisone is a corticosteroid, acting specifically as both a glucocorticoid and as a mineralocorticoid. That is, it is an agonist of the glucocorticoid and mineralocorticoid receptors.
Hydrocortisone has low potency relative to synthetic corticosteroids. Compared to hydrocortisone, prednisolone is about 4 times as potent and dexamethasone about 40 times as potent in terms of anti-inflammatory effect. Prednisolone can also be used as cortisol replacement, and at replacement dose levels (rather than anti-inflammatory levels), prednisolone is about 8 times more potent than cortisol. The equivalent doses and relative potencies of hydrocortisone compared to various other synthetic corticosteroids have also been reviewed and summarized.
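As an illustrative back-of-the-envelope conversion using only the potency ratios quoted above (the 20 mg starting dose is an arbitrary round number chosen for the example, not a dosing recommendation):

\[
20~\text{mg hydrocortisone} \;\approx\; \frac{20}{4} = 5~\text{mg prednisolone} \;\approx\; \frac{20}{40} = 0.5~\text{mg dexamethasone (anti-inflammatory equivalence)}
\]

At replacement rather than anti-inflammatory levels, the prednisolone equivalent of 20 mg cortisol would instead be about 20/8 = 2.5 mg.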
The endogenous production rate of cortisol is approximately 5.7 to 9.9 mg/m2 per day, which corresponds to an oral hydrocortisone dose of approximately 15 to 20 mg/day (for a 70-kg person). One review described daily cortisol production of 10 mg in healthy volunteers and reported that daily cortisol production could increase up to 400 mg in conditions of severe stress (e.g., surgery).
The total and/or free concentrations of cortisol/hydrocortisone required for various glucocorticoid effects have been determined.
Pharmacokinetics
Absorption
The bioavailability of oral hydrocortisone is about 96% ± 20% (SD). The pharmacokinetics of hydrocortisone are non-linear. The peak level of oral hydrocortisone is 15.3 ± 2.9 (SD) μg/L per 1 mg dose. The time to peak concentrations of oral hydrocortisone is 1.2 ± 0.4 (SD) hours.
The topical percutaneous absorption of hydrocortisone varies widely depending on experimental circumstances and has been reported to range from 0.5 to 14.9% in different studies. Some skin application sites, like the scrotum and vulva, absorb hydrocortisone much more efficiently than other application sites, like the forearm. In one study, the amount of hydrocortisone absorbed ranged from 0.2% to 36.2% depending on the application site, with the ball of the foot having the lowest absorption and the scrotum having the highest absorption. The absorption of hydrocortisone by the vulva has ranged from 4.4 to 8.1%, relative to 1.3 to 2.8% for the arm, in different studies and subjects.
Distribution
Most cortisol in the blood (all but about 4%) is bound to proteins, including corticosteroid binding globulin (CBG) and serum albumin. A pharmacokinetic review stated that 92% ± 2% (SD) (92–93%) of hydrocortisone is plasma protein-bound. Free cortisol passes easily through cellular membranes. Inside cells it interacts with corticosteroid receptors.
Metabolism
Hydrocortisone is metabolized by 11β-hydroxysteroid dehydrogenases (11β-HSDs) into cortisone, an inactive metabolite. It is additionally 5α-, 5β-, and 3α-reduced into dihydrocortisols, dihydrocortisones, tetrahydrocortisols, and tetrahydrocortisones.
Elimination
The elimination half-life of hydrocortisone ranges from about 1.2 to 2.0 (SD) hours, with an average of around 1.5 hours, regardless of oral versus parenteral administration. The duration of action of systemic hydrocortisone has been listed as 8 to 12 hours.
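For orientation, a single-exponential decay model (a simplification; real plasma profiles also depend on absorption and protein binding) relates the half-life above to the fraction of drug remaining after time t:

\[
C(t) = C_0 \cdot 2^{-t/t_{1/2}}, \qquad \text{so with } t_{1/2} \approx 1.5~\text{h}: \quad C(6~\text{h}) \approx C_0 \cdot 2^{-4} \approx 6\%~\text{of the peak level}
\]

One common explanation for the mismatch between this rapid plasma decline and the listed 8 to 12 hour duration of action is that the genomic effects set in motion while the drug is present persist after most of it has been cleared.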
Chemistry
Hydrocortisone, also known as 11β,17α,21-trihydroxypregn-4-ene-3,20-dione, is a naturally occurring pregnane steroid. A variety of hydrocortisone esters exist and have been marketed for medical use.
Society and culture
Legal status
In March 2021, the Committee for Medicinal Products for Human Use (CHMP) of the European Medicines Agency (EMA) adopted a positive opinion, recommending the granting of a marketing authorization for the medicinal product Efmody, intended for the treatment of congenital adrenal hyperplasia (CAH) in people aged twelve years and older. The applicant for this medicinal product is Diurnal Europe BV. Hydrocortisone (Efmody) was approved for medical use in the European Union, in May 2021, for the treatment of congenital adrenal hyperplasia (CAH) in people aged twelve years and older.
Anti-competitive practices
In the UK, the Competition and Markets Authority (CMA) concluded an investigation into the supply of hydrocortisone tablets, finding that from October 2008 onwards, drug suppliers Auden McKenzie and Actavis plc had charged "excessive and unfair prices" for 10 mg and 20 mg tablets and entered into agreements with potential competitors, paying companies who agreed not to enter the hydrocortisone market and enabling Auden McKenzie and Actavis to supply the drugs as "generic" rather than branded products and thereby escape price controls until eventually other companies entered the market. Auden and Actavis overcharged the UK's National Health Service for over ten years. Fines totalling over £255m were levied against the companies involved in this breach of competition law.
Research
COVID-19
Hydrocortisone was found to be effective in reducing mortality rate of critically ill COVID-19 patients when compared to other usual care or a placebo.
| Biology and health sciences | Specific drugs | Health |
530559 | https://en.wikipedia.org/wiki/Ideal%20%28order%20theory%29 | Ideal (order theory) | In mathematical order theory, an ideal is a special subset of a partially ordered set (poset). Although this term historically was derived from the notion of a ring ideal of abstract algebra, it has subsequently been generalized to a different notion. Ideals are of great importance for many constructions in order and lattice theory.
Definitions
A subset I of a partially ordered set (P, ≤) is an ideal, if the following conditions hold:
I is non-empty,
for every x in I and y in P, y ≤ x implies that y is in I (I is a lower set),
for every x, y in I, there is some element z in I, such that x ≤ z and y ≤ z (I is a directed set).
While this is the most general way to define an ideal for arbitrary posets, it was originally defined for lattices only. In this case, the following equivalent definition can be given:
a subset I of a lattice (P, ≤) is an ideal if and only if it is a lower set that is closed under finite joins (suprema); that is, it is nonempty and for all x, y in I, the element x ∨ y of P is also in I.
A weaker notion of order ideal is defined to be a subset of a poset that satisfies the above conditions 1 and 2. In other words, an order ideal is simply a lower set. Similarly, an ideal can also be defined as a "directed lower set".
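For concreteness, the following minimal Python sketch checks the three ideal conditions on a finite poset; the divisors-of-12 example, the function names, and the representation of ≤ as a two-argument predicate are invented here for illustration and are not part of the article.

```python
from itertools import product

def is_ideal(subset, elements, leq):
    """Check the three ideal conditions on a finite poset (P, <=).

    subset   -- candidate ideal I (a set)
    elements -- all elements of the poset P
    leq      -- leq(x, y) is True exactly when x <= y in the partial order
    """
    if not subset:                                # 1. I is non-empty
        return False
    for x in subset:                              # 2. lower set: y <= x and x in I  =>  y in I
        for y in elements:
            if leq(y, x) and y not in subset:
                return False
    for x, y in product(subset, subset):          # 3. directed: x, y in I have an upper bound z in I
        if not any(leq(x, z) and leq(y, z) for z in subset):
            return False
    return True

def principal_ideal(p, elements, leq):
    """The smallest ideal containing p: every x with x <= p."""
    return {x for x in elements if leq(x, p)}

# Toy poset: divisors of 12 ordered by divisibility.
P = {1, 2, 3, 4, 6, 12}
divides = lambda x, y: y % x == 0

print(is_ideal({1, 2, 3, 6}, P, divides))              # True  -- the principal ideal of 6
print(is_ideal({1, 2, 3}, P, divides))                 # False -- a lower set, but not directed
print(principal_ideal(6, P, divides) == {1, 2, 3, 6})  # True
```

Running it confirms, for instance, that in the divisors-of-12 poset the lower set {1, 2, 3} fails to be an ideal only because it is not directed: 2 and 3 have no common upper bound inside the set.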
The dual notion of an ideal, i.e., the concept obtained by reversing all ≤ and exchanging ∨ with ∧, is a filter.
Frink ideals, pseudoideals and Doyle pseudoideals are different generalizations of the notion of a lattice ideal.
An ideal or filter is said to be proper if it is not equal to the whole set P.
The smallest ideal that contains a given element p is a principal ideal, and p is said to be a principal element of the ideal in this situation. The principal ideal ↓p for a principal element p is thus given by ↓p = {x ∈ P : x ≤ p}.
Terminology confusion
The above definitions of "ideal" and "order ideal" are the standard ones, but there is some confusion in terminology. Sometimes the terms "ideal", "order ideal", "Frink ideal", and "partial order ideal" are used interchangeably, with the intended meaning depending on the author.
Prime ideals
An important special case of an ideal is constituted by those ideals whose set-theoretic complements are filters, i.e. ideals in the inverse order. Such ideals are called prime ideals. Also note that, since we require ideals and filters to be non-empty, every prime ideal is necessarily proper. For lattices, prime ideals can be characterized as follows:
A subset I of a lattice (P, ≤) is a prime ideal, if and only if
I is a proper ideal of P, and
for all elements x and y of P, x ∧ y in I implies that x is in I or y is in I.
It is easily checked that this is indeed equivalent to stating that P \ I is a filter (which is then also prime, in the dual sense).
For a complete lattice the further notion of a completely prime ideal is meaningful.
It is defined to be a proper ideal I with the additional property that, whenever the meet (infimum) of some arbitrary set A is in I, some element of A is also in I.
So this is just a specific prime ideal that extends the above conditions to infinite meets.
The existence of prime ideals is in general not obvious, and often a satisfactory amount of prime ideals cannot be derived within ZF (Zermelo–Fraenkel set theory without the axiom of choice).
This issue is discussed in various prime ideal theorems, which are necessary for many applications that require prime ideals.
Maximal ideals
An ideal I is a maximal ideal if it is proper and there is no proper ideal J that is a strict superset of I. Likewise, a filter F is maximal if it is proper and there is no proper filter that is a strict superset of F.
When a poset is a distributive lattice, maximal ideals and filters are necessarily prime, while the converse of this statement is false in general.
Maximal filters are sometimes called ultrafilters, but this terminology is often reserved for Boolean algebras, where a maximal filter (ideal) is a filter (ideal) that contains exactly one of the elements {a, ¬a}, for each element a of the Boolean algebra. In Boolean algebras, the terms prime ideal and maximal ideal coincide, as do the terms prime filter and maximal filter.
There is another interesting notion of maximality of ideals: Consider an ideal I and a filter F such that I is disjoint from F. We are interested in an ideal M that is maximal among all ideals that contain I and are disjoint from F. In the case of distributive lattices such an M is always a prime ideal.
However, in general it is not clear whether there exists any ideal M that is maximal in this sense. Yet, if we assume the axiom of choice in our set theory, then the existence of M for every disjoint filter–ideal-pair can be shown. In the special case that the considered order is a Boolean algebra, this theorem is called the Boolean prime ideal theorem. It is strictly weaker than the axiom of choice and it turns out that nothing more is needed for many order-theoretic applications of ideals.
Applications
The construction of ideals and filters is an important tool in many applications of order theory.
In Stone's representation theorem for Boolean algebras, the maximal ideals (or, equivalently via the negation map, ultrafilters) are used to obtain the set of points of a topological space, whose clopen sets are isomorphic to the original Boolean algebra.
Order theory knows many completion procedures to turn posets into posets with additional completeness properties. For example, the ideal completion of a given partial order P is the set of all ideals of P ordered by subset inclusion. This construction yields the free dcpo generated by P. An ideal is principal if and only if it is compact in the ideal completion, so the original poset can be recovered as the sub-poset consisting of compact elements. Furthermore, every algebraic dcpo can be reconstructed as the ideal completion of its set of compact elements.
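As a toy illustration of the ideal completion (again using the invented divisors-of-12 example; the brute-force enumeration below is only feasible for small finite posets):

```python
from itertools import chain, combinations

# Tiny poset: divisors of 12 ordered by divisibility (an invented example).
P = [1, 2, 3, 4, 6, 12]
leq = lambda x, y: y % x == 0

def is_ideal(s):
    """Non-empty, downward-closed, directed subset of P."""
    return (bool(s)
            and all(y in s for x in s for y in P if leq(y, x))
            and all(any(leq(x, z) and leq(y, z) for z in s) for x in s for y in s))

# Brute-force the ideal completion: test every non-empty subset of P,
# keep the ideals, and regard them as ordered by subset inclusion.
subsets = chain.from_iterable(combinations(P, r) for r in range(1, len(P) + 1))
ideals = [frozenset(s) for s in subsets if is_ideal(set(s))]

for I in sorted(ideals, key=len):
    # An ideal is principal when it equals {x in P : x <= p} for some p.
    principal = any(I == {x for x in P if leq(x, p)} for p in P)
    print(sorted(I), "principal" if principal else "non-principal")
```

In this finite example every ideal turns out to be principal (each has a greatest element), so the ideal completion is order-isomorphic to the original poset; non-principal ideals, and hence a genuinely larger completion, appear only for infinite posets.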
History
Ideals were introduced by Marshall H. Stone first for Boolean algebras, where the name was derived from the ring ideals of abstract algebra. He adopted this terminology because, using the isomorphism of the categories of Boolean algebras and of Boolean rings, the two notions do indeed coincide.
Generalization to any posets was done by Frink.
| Mathematics | Order theory | null |
530563 | https://en.wikipedia.org/wiki/Argentavis | Argentavis | Argentavis is an extinct genus of teratornithid known from three sites in the Epecuén and Andalhualá Formations in central and northwestern Argentina dating to the Late Miocene (Huayquerian). The type species, A. magnificens, is sometimes called the giant teratorn. Argentavis was among the largest flying birds to ever exist, holding the record for heaviest flying bird, although it was surpassed in wingspan after the 2014 description of Pelagornis sandersi, which is estimated to have possessed wings some 20% longer than those of Argentavis.
Discovery and naming
In the 1970s while on an expedition by the Museo de La Plata, paleontologists Rosendo Pascual and Eduardo Tonni unearthed a fragmentary skeleton consisting of a partial skull, right coracoid, left humerus, portions of the left ulna, left radius, and left metacarpals, and shafts of the right tibiotarsus and tarsometatarsus. Later restudy of the specimen also found an incomplete ungual phalanx with the skeleton. These fossils had been exposed in brown-red silt and clay sediments from the Epecuén Formation in Salinas Grandes de Hidalgo in Atreucó, Argentina. These outcrops derive from the Huayquerian stage of the upper Miocene (9.0-6.8 mya). This specimen was deposited at the Museo de La Plata under catalogue number MLP 65-VII-29-49 and cast at the Los Angeles County Museum.
These fossils were described by paleontologists Kenneth Campbell Jr. and Eduardo Tonni in 1980, who named the new genus and species Argentavis magnificens with MLP 65-VII-29-49 as the holotype specimen. The generic name Argentavis is derived from the Latin root argentum, “silver”, after the country of origin, and avis, “bird”, while the specific name magnificens, “magnificent”, refers to its size. In the description, Argentavis was classified as a member of Teratornithidae and was the first described from South America. Since Argentavis' description, Taubatornis was named and a multitude of specimens described from the continent. Later, in 1995, Campbell described three additional Argentavis specimens that had been discovered in other sites in Argentina. One, an ungual phalanx, was unearthed in Epecuén Formation outcrops around 60 km northeast of the holotype locality. Campbell assigned it to A. magnificens based on the development of grooves and tubercles on the bone; however, due to the lack of overlap with the holotype and its robust morphology, a 2011 article classified it as a phorusrhacid. Additionally, a fragmentary coracoid and the distal end of a tibiotarsus were collected from sediments of the Huayquerian-aged Andalhualá Formation in Valle de Santa María in Catamarca Province, northwest Argentina.
Description
The single known humerus (upper arm bone) of Argentavis is somewhat damaged. Even so, it allows a fairly accurate estimate of its length in life. The humerus of Argentavis was only slightly shorter than an entire human arm. The species had stout, strong legs and large feet which indicate decent terrestrial capabilities. The bill was large, rather slender, and had a hooked tip with a wide gape.
Size
Estimates for Argentavis' wingspan vary widely depending on the method used for scaling, i.e. regression analyses or comparisons with the California condor. At one time, published wingspans for the species measured up to in width, but more recent estimates put the wingspan within the range of . Recent studies present doubts on the wingspan of the species reaching or exceeding . At the time of description, Argentavis was the largest flying bird known to have existed but it has since been exceeded by another extinct species, Pelagornis sandersi, in wingspan, which the 2014 description estimated at . For comparison, the living bird with the largest wingspan is the wandering albatross, averaging and spanning up to . When grounded, Argentavis' height has been estimated at , roughly equivalent to that of an adult human. Furthermore, its total length (from bill tip to tail tip) was approximately .
Prior publications estimated the body mass of Argentavis at , but more refined techniques show a more typical mass would likely have been somewhere between , although weights could have varied depending on conditions. Argentavis still retains the title of the heaviest known flying bird by a considerable margin, with the aforementioned P. sandersi being estimated to have weighed no more than . Since A. magnificens is known to have lived in terrestrial environments, another good point of comparison is the Andean condor, the largest extant flighted land bird both in average wingspan and weight, with the former spanning up to with an average of around , and the latter reaching a maximum of up to . New World vultures such as the condor are thought to be the closest living relatives to Argentavis and other teratorns. Average weights are much lower in both the wandering albatross and Andean condor than in Argentavis, at approximately and , respectively.
As a rule of thumb, a wing loading of 25 kg/m2 is considered the limit for avian flight. A number of estimates related to wing loading have been produced for Argentavis, most notably the wing area, estimated at , and the wing loading, estimated at 84.6 N/m2 (1.77 lb/ft2), or about 8.64 kg/m2. The heaviest extant flying birds are known to weigh up to a maximum of (there are several contenders, among which are the European great bustard and the African kori bustard). An individual mute swan, which may have lost the power of flight due to extreme weight, was found to have weighed . Meanwhile, the sarus crane is the tallest flying bird alive, at up to tall, standing about as high as Argentavis due to its long legs and neck.
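As a rough consistency check of the figures above (a sketch; the value g ≈ 9.81 m/s² for gravitational acceleration is supplied here and is not stated in the text), the quoted wing loading in force per unit area converts to mass per unit area by dividing by g:

\[
\frac{84.6~\text{N/m}^2}{9.81~\text{m/s}^2} \approx 8.6~\text{kg/m}^2
\]

This matches the approximately 8.64 kg/m² stated and sits well below the 25 kg/m² rule-of-thumb ceiling for avian flight mentioned at the start of the paragraph.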
Paleobiology
Life history
Comparison with extant birds suggests Argentavis laid one or two eggs with a mass of around every two years. Climate considerations make it likely that the birds incubated during the winter, with members of a mated pair alternating between incubating and procuring food every few days. The young are thought to have been independent after some 16 months, but to not reach full maturity until they reached roughly twelve years of age. To maintain a viable population, no more than 2% of birds could have died each year. Because of its large size and ability to fly, Argentavis suffered hardly any predation, and mortality was mainly related to old age and disease in adults.
Flight
From the size and structure of its wings, it is inferred that A. magnificens flew mainly by soaring, using flapping flight only during short periods. This is further supported by skeletal evidence, which suggests that its breast muscles were not powerful enough to enable flapping of the wings for extended periods. Studies on condor flight indicate that Argentavis was fully capable of flight in normal conditions, as modern large soaring birds spend very little time flapping their wings regardless of environment.
Although its legs were strong enough to provide it with a running or jumping start, the wings were simply too long to flap effectively until the bird had gained some vertical distance, meaning that, especially for takeoff, Argentavis would have depended on the wind. Argentavis may have used mountain slopes and headwinds to take off, and probably could manage to do so even from gently sloped terrain with little effort. It may have flown and lived much like the modern Andean condor, scanning large areas of land for carrion. It is probable that it utilised thermal currents to stay aloft, and it has been estimated that the minimal velocity for A. magnificens is about or . The climate of the Andean foothills in Argentina during the late Miocene was warmer and drier than today, which would have further aided the bird in staying aloft atop thermal updrafts.
Food
Argentavis territories probably measured more than , which the birds screened for food, possibly utilizing a north–south flying pattern to avoid being slowed by adverse winds. This species seems less aerodynamically suited for predation than its relatives and probably preferred to scavenge for carrion. Argentavis may have used its wings and size to intimidate metatherian mammals and small phorusrhacids to take over their kills. Phorusrhacids were the largest land predators in Miocene South America, and probably the biggest threats that Argentavis faced, with the largest species that coexisted with Argentavis, Devincenzia, weighing up to .
| Biology and health sciences | Prehistoric birds | Animals |
530691 | https://en.wikipedia.org/wiki/Glucocorticoid | Glucocorticoid | Glucocorticoids (or, less commonly, glucocorticosteroids) are a class of corticosteroids, which are a class of steroid hormones. Glucocorticoids are corticosteroids that bind to the glucocorticoid receptor that is present in almost every vertebrate animal cell. The name "glucocorticoid" is a portmanteau (glucose + cortex + steroid) and is composed from its role in regulation of glucose metabolism, synthesis in the adrenal cortex, and its steroidal structure (see structure below).
Glucocorticoids are part of the feedback mechanism in the immune system, which reduces certain aspects of immune function, such as inflammation. They are therefore used in medicine to treat diseases caused by an overactive immune system, such as allergies, asthma, autoimmune diseases, and sepsis. Glucocorticoids have many diverse effects such as pleiotropy, including potentially harmful side effects. They also interfere with some of the abnormal mechanisms in cancer cells, so they are used in high doses to treat cancer. This includes inhibitory effects on lymphocyte proliferation, as in the treatment of lymphomas and leukemias, and the mitigation of side effects of anticancer drugs.
Glucocorticoids affect cells by binding to the glucocorticoid receptor. The activated glucocorticoid receptor-glucocorticoid complex up-regulates the expression of anti-inflammatory proteins in the nucleus (a process known as transactivation) and represses the expression of pro-inflammatory proteins in the cytosol by preventing the translocation of other transcription factors from the cytosol into the nucleus (transrepression).
Glucocorticoids are distinguished from mineralocorticoids and sex steroids by their specific receptors, target cells, and effects. In technical terms, "corticosteroid" refers to both glucocorticoids and mineralocorticoids (as both are mimics of hormones produced by the adrenal cortex), but is often used as a synonym for "glucocorticoid". Glucocorticoids are chiefly produced in the zona fasciculata of the adrenal cortex, whereas mineralocorticoids are synthesized in the zona glomerulosa.
Cortisol (or hydrocortisone) is the most important human glucocorticoid. It is essential for life, and it regulates or supports a variety of important cardiovascular, metabolic, immunologic, and homeostatic functions. Increases in glucocorticoid concentrations are an integral part of stress response and are the most commonly used biomarkers to measure stress. Glucocorticoids have numerous non-stress-related functions as well, and glucocorticoid concentrations can increase in response to pleasure or excitement. Various synthetic glucocorticoids are available; these are widely utilized in general medical practice and numerous specialties, either as replacement therapy in glucocorticoid deficiency or to suppress the body's immune system.
Effects
Glucocorticoid effects may be broadly classified into two major categories: immunological and metabolic. In addition, glucocorticoids play important roles in fetal development and body fluid homeostasis.
Immune
Glucocorticoids function via interaction with the glucocorticoid receptor:
Upregulate the expression of anti-inflammatory proteins.
Downregulate the expression of proinflammatory proteins.
Glucocorticoids are also shown to play a role in the development and homeostasis of T lymphocytes. This has been shown in transgenic mice with either increased or decreased sensitivity of T cell lineage to glucocorticoids.
Metabolic
The name "glucocorticoid" derives from early observations that these hormones were involved in glucose metabolism. In the fasted state, cortisol stimulates several processes that collectively serve to increase and maintain normal concentrations of glucose in the blood.
Metabolic effects:
Stimulation of gluconeogenesis, in particular, in the liver: This pathway results in the synthesis of glucose from non-hexose substrates, such as amino acids and glycerol from triglyceride breakdown, and is particularly important in carnivores and certain herbivores. Enhancing the expression of enzymes involved in gluconeogenesis is probably the best-known metabolic function of glucocorticoids.
Mobilization of amino acids from extrahepatic tissues: These serve as substrates for gluconeogenesis.
Inhibition of glucose uptake in muscle and adipose tissue: A mechanism to conserve glucose
Stimulation of fat breakdown in adipose tissue: The fatty acids released by lipolysis are used for production of energy in tissues like muscle, and the released glycerol provide another substrate for gluconeogenesis.
Increase in sodium retention and potassium excretion leads to hypernatremia and hypokalemia
Increase in hemoglobin concentration, likely due to hindrance of the ingestion of red blood cells by macrophages or other phagocytes.
Increased urinary uric acid
Increased urinary calcium and hypocalcemia
Alkalosis
Leukocytosis
Excessive glucocorticoid levels resulting from administration as a drug or hyperadrenocorticism have effects on many systems. Some examples include inhibition of bone formation, suppression of calcium absorption (both of which can lead to osteoporosis), delayed wound healing, muscle weakness, and increased risk of infection. These observations suggest a multitude of less-dramatic physiologic roles for glucocorticoids.
Developmental
Glucocorticoids have multiple effects on fetal development. An important example is their role in promoting maturation of the lung and production of the surfactant necessary for extrauterine lung function. Mice with homozygous disruptions in the corticotropin-releasing hormone gene (see below) die at birth due to pulmonary immaturity. In addition, glucocorticoids are necessary for normal brain development, by initiating terminal maturation, remodeling axons and dendrites, and affecting cell survival and may also play a role in hippocampal development. Glucocorticoids stimulate the maturation of the Na+/K+/ATPase, nutrient transporters, and digestion enzymes, promoting the development of a functioning gastro-intestinal system. Glucocorticoids also support the development of the neonate's renal system by increasing glomerular filtration.
Arousal and cognition
Glucocorticoids act on the hippocampus, amygdala, and frontal lobes. Along with adrenaline, these enhance the formation of flashbulb memories of events associated with strong emotions, both positive and negative. This has been confirmed in studies, whereby blockade of either glucocorticoids or noradrenaline activity impaired the recall of emotionally relevant information. Additional sources have shown subjects whose fear learning was accompanied by high cortisol levels had better consolidation of this memory (this effect was more important in men). The effect that glucocorticoids have on memory may be due to damage specifically to the CA1 area of the hippocampal formation.
In multiple animal studies, prolonged stress (causing prolonged increases in glucocorticoid levels) have shown destruction of the neurons in the hippocampus area of the brain, which has been connected to lower memory performance.
Glucocorticoids have also been shown to have a significant impact on vigilance (attention deficit disorder) and cognition (memory). This appears to follow the Yerkes-Dodson curve, as studies have shown circulating levels of glucocorticoids vs. memory performance follow an upside-down U pattern, much like the Yerkes-Dodson curve. For example, long-term potentiation (LTP; the process of forming long-term memories) is optimal when glucocorticoid levels are mildly elevated, whereas significant decreases of LTP are observed after adrenalectomy (low-glucocorticoid state) or after exogenous glucocorticoid administration (high-glucocorticoid state). Elevated levels of glucocorticoids enhance memory for emotionally arousing events, but lead more often than not to poor memory for material unrelated to the source of stress/emotional arousal. In contrast to the dose-dependent enhancing effects of glucocorticoids on memory consolidation, these stress hormones have been shown to inhibit the retrieval of already stored information. Long-term exposure to glucocorticoid medications, such as asthma and anti-inflammatory medication, has been shown to create deficits in memory and attention both during and, to a lesser extent, after treatment, a condition known as "steroid dementia".
Body fluid homeostasis
Glucocorticoids could act centrally, as well as peripherally, to assist in the normalization of extracellular fluid volume by regulating the body's response to atrial natriuretic peptide (ANP). Centrally, glucocorticoids could inhibit dehydration-induced water intake; peripherally, glucocorticoids could induce a potent diuresis.
Mechanism of action
Transactivation
Glucocorticoids bind to the cytosolic glucocorticoid receptor, a type of nuclear receptor that is activated by ligand binding. After a hormone binds to the corresponding receptor, the newly formed complex translocates itself into the cell nucleus, where it binds to glucocorticoid response elements in the promoter region of the target genes resulting in the regulation of gene expression. This process is commonly referred to as transcriptional activation, or transactivation.
The proteins encoded by these up-regulated genes have a wide range of effects, including, for example:
Anti-inflammatory – lipocortin I, p11/calpactin binding protein, secretory leukocyte protease inhibitor 1 (SLPI), and Mitogen-activated protein kinase phosphatase (MAPK phosphatase)
Increased gluconeogenesis – glucose 6-phosphatase and tyrosine aminotransferase
Transrepression
The opposite mechanism is called transcriptional repression, or transrepression. The classical understanding of this mechanism is that activated glucocorticoid receptor binds to DNA in the same site where another transcription factor would bind, which prevents the transcription of genes that are transcribed via the activity of that factor. While this does occur, the results are not consistent for all cell types and conditions; there is no generally accepted, general mechanism for transrepression.
New mechanisms are being discovered where transcription is repressed, but the activated glucocorticoid receptor is not interacting with DNA, but rather with another transcription factor directly, thus interfering with it, or with other proteins that interfere with the function of other transcription factors. This latter mechanism appears to be the most likely way that activated glucocorticoid receptor interferes with NF-κB, namely by recruiting histone deacetylases, which deacetylate histones in the promoter region, leading to closing of the chromatin structure where NF-κB needs to bind.
Nongenomic effects
Activated glucocorticoid receptor has effects that have been experimentally shown to be independent of any effects on transcription and can only be due to direct binding of activated glucocorticoid receptor with other proteins or with mRNA.
For example, Src kinase, which binds to inactive glucocorticoid receptor, is released when a glucocorticoid binds to the glucocorticoid receptor, and phosphorylates a protein that in turn displaces an adaptor protein from a receptor important in inflammation, the epidermal growth factor receptor, reducing its activity, which in turn results in reduced creation of arachidonic acid – a key proinflammatory molecule. This is one mechanism by which glucocorticoids have an anti-inflammatory effect.
Pharmacology
A variety of synthetic glucocorticoids, some far more potent than cortisol, have been created for therapeutic use. They differ in both pharmacokinetics (absorption factor, half-life, volume of distribution, clearance) and pharmacodynamics (for example the capacity of mineralocorticoid activity: retention of sodium (Na) and water; renal physiology). Because they permeate the intestines easily, they are administered primarily per os (by mouth), but also by other methods, such as topically on skin. More than 90% of them bind different plasma proteins, though with a different binding specificity. Endogenous glucocorticoids and some synthetic corticoids have high affinity to the protein transcortin (also called corticosteroid-binding globulin), whereas all of them bind albumin. In the liver, they quickly metabolize by conjugation with a sulfate or glucuronic acid, and are secreted in the urine.
Glucocorticoid potency, duration of effect, and the overlapping mineralocorticoid potency vary. Cortisol is the standard of comparison for glucocorticoid potency. Hydrocortisone is the name used for pharmaceutical preparations of cortisol.
The data below refer to oral administration. Oral potency may be less than parenteral potency because significant amounts (up to 50% in some cases) may not reach the circulation. Fludrocortisone acetate and deoxycorticosterone acetate are, by definition, mineralocorticoids rather than glucocorticoids, but they do have minor glucocorticoid potency and are included in this table to provide perspective on mineralocorticoid potency.
Therapeutic use
Glucocorticoids may be used in low doses in adrenal insufficiency. In much higher doses, oral or inhaled glucocorticoids are used to suppress various allergic, inflammatory, and autoimmune disorders. Inhaled glucocorticoids are the second-line treatment for asthma. They are also administered as post-transplantory immunosuppressants to prevent the acute transplant rejection and the graft-versus-host disease. Nevertheless, they do not prevent an infection and also inhibit later reparative processes. Newly emerging evidence showed that glucocorticoids could be used in the treatment of heart failure to increase the renal responsiveness to diuretics and natriuretic peptides. Glucocorticoids are historically used for pain relief in inflammatory conditions. However, corticosteroids show limited efficacy in pain relief and potential adverse events for their use in tendinopathies.
Replacement
Any glucocorticoid can be given in a dose that provides approximately the same glucocorticoid effects as normal cortisol production; this is referred to as physiologic, replacement, or maintenance dosing. This is approximately 6–12 mg/m2/day of hydrocortisone (m2 refers to body surface area (BSA), and is a measure of body size; an average man's BSA is 1.9 m2).
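As a rough worked example of the figure above (illustrative arithmetic only), for an average adult with a body surface area of 1.9 m2:

$$6\text{–}12\ \tfrac{\mathrm{mg}}{\mathrm{m^2 \cdot day}} \times 1.9\ \mathrm{m^2} \approx 11\text{–}23\ \mathrm{mg\ of\ hydrocortisone\ per\ day}$$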
Therapeutic immunosuppression
Glucocorticoids cause immunosuppression, and the therapeutic component of this effect is mainly the decreases in the function and numbers of lymphocytes, including both B cells and T cells.
The major mechanism for this immunosuppression is through inhibition of nuclear factor kappa-light-chain-enhancer of activated B cells (NF-κB). NF-κB is a critical transcription factor involved in the synthesis of many mediators (e.g., cytokines) and proteins (e.g., adhesion proteins) that promote the immune response. Inhibition of this transcription factor, therefore, blunts the capacity of the immune system to mount a response.
Glucocorticoids suppress cell-mediated immunity by inhibiting genes that code for the cytokines IL-1, IL-2, IL-3, IL-4, IL-5, IL-6, IL-8 and IFN-γ, the most important of which is IL-2. Reduced cytokine production in turn reduces T cell proliferation.
Glucocorticoids, however, not only reduce T cell proliferation but also lead to another well-known effect: glucocorticoid-induced apoptosis. The effect is more prominent in immature T cells still in the thymus, but peripheral T cells are also affected. The mechanism regulating this glucocorticoid sensitivity lies in the Bcl-2 gene.
Glucocorticoids also suppress humoral immunity, thereby causing a humoral immune deficiency. Glucocorticoids cause B cells to express smaller amounts of IL-2 and of IL-2 receptors, which diminishes both B cell clonal expansion and antibody synthesis. The diminished amounts of IL-2 also cause fewer T lymphocytes to be activated.
The effect of glucocorticoids on Fc receptor expression in immune cells is complicated. Dexamethasone decreases IFN-γ-stimulated Fcγ RI expression in neutrophils while conversely increasing it in monocytes. Glucocorticoids may also decrease the expression of Fc receptors in macrophages, but the evidence supporting this regulation in earlier studies has been questioned. Fc receptor expression in macrophages is important because it is necessary for the phagocytosis of opsonised cells: Fc receptors bind antibodies attached to cells targeted for destruction by macrophages.
Anti-inflammatory
Glucocorticoids are potent anti-inflammatories, regardless of the inflammation's cause; their primary anti-inflammatory mechanism is lipocortin-1 (annexin-1) synthesis. Lipocortin-1 both suppresses phospholipase A2, thereby blocking eicosanoid production, and inhibits various leukocyte inflammatory events (epithelial adhesion, emigration, chemotaxis, phagocytosis, respiratory burst, etc.). In other words, glucocorticoids not only suppress immune response, but also inhibit the two main products of inflammation, prostaglandins and leukotrienes. They inhibit prostaglandin synthesis at the level of phospholipase A2 as well as at the level of cyclooxygenase/PGE isomerase (COX-1 and COX-2), the latter effect being much like that of NSAIDs, thus potentiating the anti-inflammatory effect.
In addition, glucocorticoids also suppress cyclooxygenase expression.
Glucocorticoids marketed as anti-inflammatories are often topical formulations, such as nasal sprays for rhinitis or inhalers for asthma. These preparations have the advantage of only affecting the targeted area, thereby reducing side effects and potential interactions. In this case, the main compounds used are beclometasone, budesonide, fluticasone, mometasone and ciclesonide. In rhinitis, sprays are used. For asthma, glucocorticoids are administered as inhalants with a metered-dose or dry powder inhaler. In rare cases, symptoms of radiation-induced thyroiditis have been treated with oral glucocorticoids.
Hyperaldosteronism
Glucocorticoids can be used in the management of familial hyperaldosteronism type 1. They are not effective, however, for use in the type 2 condition.
Heart failure
Glucocorticoids could be used in the treatment of decompensated heart failure to potentiate renal responsiveness to diuretics, especially in heart failure patients with diuretic resistance refractory to large doses of loop diuretics.
Resistance
Resistance to the therapeutic uses of glucocorticoids can present difficulty; for instance, 25% of cases of severe asthma may be unresponsive to steroids. This may be the result of genetic predisposition, ongoing exposure to the cause of the inflammation (such as allergens), immunological phenomena that bypass glucocorticoids, pharmacokinetic disturbances (incomplete absorption or accelerated excretion or metabolism) and viral and/or bacterial respiratory infections.
Side effects
Glucocorticoid drugs currently in use act nonselectively, so in the long run they may impair many healthy anabolic processes. To prevent this, much recent research has focused on developing selectively acting glucocorticoid drugs. Side effects include:
Immunodeficiency (see section below)
Hyperglycemia due to increased gluconeogenesis, insulin resistance, and impaired glucose tolerance ("steroid diabetes"); caution in those with diabetes mellitus
Increased skin fragility, easy bruising
Negative calcium balance due to reduced intestinal calcium absorption
Steroid-induced osteoporosis: reduced bone density (osteoporosis, osteonecrosis, higher fracture risk, slower fracture repair)
Weight gain due to increased visceral and truncal fat deposition (central obesity) and appetite stimulation; see corticosteroid-induced lipodystrophy
Hypercortisolemia with prolonged or excessive use (also known as exogenous Cushing's syndrome)
Impaired memory and attention deficits; see steroid dementia syndrome
Adrenal insufficiency (if used for long time and stopped suddenly without a taper)
Muscle and tendon breakdown (proteolysis), weakness, reduced muscle mass and repair
Expansion of malar fat pads and dilation of small blood vessels in skin
Lipomatosis within the epidural space
Excitatory effect on central nervous system (euphoria, psychosis)
Anovulation, irregularity of menstrual periods
Growth failure, delayed puberty
Increased plasma amino acids, increased urea formation, negative nitrogen balance
Glaucoma due to increased ocular pressure
Cataracts
Topical steroid withdrawal
In high doses, hydrocortisone (cortisol) and those glucocorticoids with appreciable mineralocorticoid potency can exert a mineralocorticoid effect as well, although in physiologic doses this is prevented by rapid degradation of cortisol by 11β-hydroxysteroid dehydrogenase isoenzyme 2 (11β-HSD2) in mineralocorticoid target tissues. Mineralocorticoid effects can include salt and water retention, extracellular fluid volume expansion, hypertension, potassium depletion, and metabolic alkalosis.
Immunodeficiency
Glucocorticoids cause immunosuppression, decreasing the function and/or numbers of neutrophils, lymphocytes (including both B cells and T cells), monocytes, macrophages, and the anatomical barrier function of the skin. This suppression, if large enough, can cause manifestations of immunodeficiency, including T cell deficiency, humoral immune deficiency and neutropenia.
Withdrawal
In addition to the effects listed above, use of high-dose glucocorticoids for only a few days begins to suppress the patient's adrenal glands, because the exogenous glucocorticoid suppresses hypothalamic corticotropin-releasing hormone (CRH), leading in turn to suppressed production of adrenocorticotropic hormone (ACTH) by the anterior pituitary. With prolonged suppression, the adrenal glands atrophy (physically shrink), and can take months to recover full function after discontinuation of the exogenous glucocorticoid.
During this recovery time, the patient is vulnerable to adrenal insufficiency during times of stress, such as illness. While suppressive dose and time for adrenal recovery vary widely, clinical guidelines have been devised to estimate potential adrenal suppression and recovery, to reduce risk to the patient. The following is one example:
If patients have been receiving daily high doses for five days or less, they can be abruptly stopped (or reduced to physiologic replacement if patients are adrenal-deficient). Full adrenal recovery can be assumed to occur by a week afterward.
If high doses were used for six to 10 days, reduce to replacement dose immediately and taper over four more days. Adrenal recovery can be assumed to occur within two to four weeks of completion of steroids.
If high doses were used for 11–30 days, cut immediately to twice replacement, and then by 25% every four days. Stop entirely when dose is less than half of replacement. Full adrenal recovery should occur within one to three months of completion of withdrawal.
If high doses were used more than 30 days, cut dose immediately to twice replacement, and reduce by 25% each week until replacement is reached. Then change to oral hydrocortisone or cortisone as a single morning dose, and gradually decrease by 2.5 mg each week. When the morning dose is less than replacement, the return of normal basal adrenal function may be documented by checking 0800 cortisol levels prior to the morning dose; stop drugs when the 0800 cortisol reaches 10 μg/dl. Predicting the time to full adrenal recovery after prolonged suppressive exogenous steroids is difficult; some people may take nearly a year.
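The repeated 25% reductions described in the steps above are simple arithmetic; the sketch below illustrates that arithmetic only and is not clinical guidance. The function name and the 20 mg/day example replacement dose are hypothetical.

```python
# Illustrative arithmetic only, not clinical guidance: start at twice the
# replacement dose and reduce by 25% each week until replacement is reached.
def taper_to_replacement(replacement_mg_per_day):
    dose = 2 * replacement_mg_per_day
    schedule = []
    while dose > replacement_mg_per_day:
        schedule.append(round(dose, 1))
        dose *= 0.75  # reduce by 25% each week
    schedule.append(replacement_mg_per_day)  # finish at the replacement dose
    return schedule

# Hypothetical 20 mg/day hydrocortisone-equivalent replacement dose:
print(taper_to_replacement(20))  # [40, 30, 22.5, 20]
```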
Flare-up of the underlying condition for which steroids are given may require a more gradual taper than outlined above.
| Biology and health sciences | Animal hormones | Biology |
530715 | https://en.wikipedia.org/wiki/Residual-current%20device | Residual-current device | A residual-current device (RCD), residual-current circuit breaker (RCCB) or ground fault circuit interrupter (GFCI) is an electrical safety device, more specifically a form of Earth-leakage protection device, that interrupts an electrical circuit when the current passing through a conductor is not equal and opposite in both directions, therefore indicating leakage current to ground or current flowing to another powered conductor. The device's purpose is to reduce the severity of injury caused by an electric shock. This type of circuit interrupter cannot protect a person who touches both circuit conductors at the same time, since it then cannot distinguish normal current from that passing through a person.
If the RCD has additional overcurrent protection integrated into the same device, then it is referred to as a residual-current circuit breaker with integrated overcurrent protection (RCBO).
These devices are designed to quickly interrupt the protected circuit when it detects that the electric current is unbalanced between the supply and return conductors of the circuit. Any difference between the currents in these conductors indicates leakage current, which presents a shock hazard. Alternating 60 Hz current above 20 mA (0.020 amperes) through the human body is potentially sufficient to cause cardiac arrest or serious harm if it persists for more than a small fraction of a second. RCDs are designed to disconnect the conducting wires ("trip") quickly enough to potentially prevent serious injury to humans, and to prevent damage to electrical devices.
RCDs are testable and resettable devices: a test button safely creates a small leakage condition, and another button, or switch, resets the conductors after a fault condition has been cleared. Some RCDs disconnect both the line and neutral conductors upon a fault (double pole), while a single-pole RCD only disconnects the line conductor. If the fault has left the neutral wire "floating" or not at its expected ground potential for any reason, then a single-pole RCD will leave this conductor still connected to the circuit when it detects the fault.
Purpose and operation
RCDs are designed to disconnect the circuit if there is a leakage current. In their first implementation in the 1950s, power companies used them to prevent electricity theft where consumers grounded returning circuits rather than connecting them to neutral to inhibit electrical meters from registering their power consumption.
The most common modern application is as a safety device to detect small leakage currents (typically 5–30mA) and disconnect quickly enough (<30 milliseconds) to prevent device damage or electrocution. They are an essential part of the automatic disconnection of supply (ADS), i.e. switching off when a fault develops rather than relying on human intervention, one of the essential tenets of modern electrical practice.
To reduce the risk of electrocution, RCDs should operate within 25–40 milliseconds with any leakage currents (through a person) of greater than 30mA, before electric shock can drive the heart into ventricular fibrillation, the most common cause of death through electric shock. By contrast, conventional circuit breakers or fuses only break the circuit when the total current is excessive (which may be thousands of times the leakage current an RCD responds to). A small leakage current, such as through a person, can be a very serious fault, but would probably not increase the total current enough for a fuse or overload circuit breaker to isolate the circuit, and not fast enough to save a life.
RCDs operate by measuring the current balance between two conductors using a differential current transformer. This measures the difference between the current flowing through the line conductor and that returning through the neutral. If these do not sum to zero, there is a leakage of current to somewhere else (to Earth/ground or to another circuit), and the device will open its contacts. Operation does not require a fault current to return through the earth wire in the installation; the trip will operate just as well if the return path is through plumbing or contact with the ground or anything else. Automatic disconnection and a measure of shock protection is therefore still provided even if the earth wiring of the installation is damaged or incomplete.
For an RCD used with three-phase power, all three conductors and the neutral (if fitted) must pass through the current transformer.
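The balance test described above amounts to summing the instantaneous currents in every conductor that passes through the sense transformer and tripping when the magnitude of that sum exceeds the rated residual current. The sketch below is a minimal illustration of that idea, assuming a 30 mA threshold; the function names and example figures are illustrative, not taken from any real device firmware.

```python
# Minimal sketch of an RCD's balance test; names and figures are illustrative.
def residual_current(conductor_currents_amps):
    """Sum of instantaneous currents in all conductors threading the sense
    transformer (line conductor(s) plus neutral). A non-zero sum means some
    current is returning by another path, e.g. to earth."""
    return sum(conductor_currents_amps)

def should_trip(conductor_currents_amps, i_delta_n_amps=0.030):
    """Trip when the residual current exceeds the rated residual operating
    current (IΔn), here defaulting to 30 mA."""
    return abs(residual_current(conductor_currents_amps)) > i_delta_n_amps

# Single-phase: 10 A out on line, 9.96 A back on neutral -> 40 mA leakage, trips.
print(should_trip([10.00, -9.96]))            # True
# Balanced load: no leakage, no trip.
print(should_trip([10.00, -10.00]))           # False
# Three-phase plus neutral: all four currents must sum to (almost) zero.
print(should_trip([6.0, -2.0, -3.0, -0.95]))  # True (50 mA residual)
```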
Application
Electrical plugs with incorporated RCD are sometimes installed on appliances that might be considered to pose a particular safety hazard, for example long extension leads, which might be used outdoors, or garden equipment or hair dryers, which may be used near a bath or sink. Occasionally an in-line RCD may be used to serve a similar function to one in a plug. By putting the RCD in the extension lead, protection is provided at whatever outlet is used even if the building has old wiring, such as knob and tube, or wiring that does not contain a grounding conductor. The in-line RCD can also have a lower tripping threshold than the building to further improve safety for a specific electrical device.
In North America, GFI receptacles can be used in cases where there is no grounding conductor, but they must be labeled as "no equipment ground". This is referenced in the National Electrical Code section 406 (D) 2; however, codes change, so a licensed professional and the local building and safety department should always be consulted. An ungrounded GFI receptacle will trip using the built-in "test" button, but will not trip using a GFI test plug, because the plug tests by passing a small current from line to the non-existent ground. It is worth noting that despite this, only one GFCI receptacle at the beginning of each circuit is necessary to protect downstream receptacles. There does not appear to be a risk in using multiple GFI receptacles on the same circuit, though it is considered redundant.
In Europe, RCDs can fit on the same DIN rail as the miniature circuit breakers; much like with miniature circuit breakers, the busbar arrangements in consumer units and distribution boards provide protection for anything downstream.
RCBO
A pure RCD will detect imbalance in the currents of the supply and return conductors of a circuit. But it cannot protect against overload or short circuit like a fuse or a miniature circuit breaker (MCB) does (except for the special case of a short circuit from line to ground, not to neutral).
However, an RCD and an MCB often come integrated in the same device, thus being able to detect both supply imbalance and overload current. Such a device is called an RCBO, for residual-current circuit breaker with overcurrent protection, in Europe and Australia, and a GFCI breaker, for ground fault circuit interrupter, in the United States and Canada.
Typical design
The diagram depicts the internal mechanism of a residual-current device (RCD). The device is designed to be wired in-line in an appliance power cord. It is rated to carry a maximum current of 13A and is designed to trip on a leakage current of 30mA. This is an active RCD; that is, it latches electrically and therefore trips on power failure, a useful feature for equipment that could be dangerous on unexpected re-energisation. Some early RCDs were entirely electromechanical and relied on finely balanced sprung over-centre mechanisms driven directly from the current transformer. As these are hard to manufacture to the required accuracy and prone to drift in sensitivity both from pivot wear and lubricant dry-out, the electronically amplified type with a more robust solenoid part, as illustrated, is now dominant.
In the internal mechanism of an RCD, the incoming supply line and neutral conductors are connected to the terminals at (1), and the outgoing load conductors are connected to the terminals at (2). The earth conductor (not shown) is connected through from supply to load uninterrupted.
When the reset button (3) is pressed, the contacts ((4) and another, hidden behind (5)) close, allowing current to pass. The solenoid (5) keeps the contacts closed when the reset button is released.
The sense coil (6) is a differential current transformer which surrounds (but is not electrically connected to) the line and neutral conductors. In normal operation, all the current flows in and out through the line and neutral conductors. The currents in the two conductors are equal and opposite and cancel each other out.
Any fault to earth (for example, caused by a person touching a live component in the attached appliance) causes some of the current to take a different path, diverting part of the return current away from the neutral. This creates an imbalance between the line and neutral currents (single-phase) or, more generally, a nonzero sum of the currents among the various conductors (for example, three phase conductors and one neutral conductor) within the RCD.
This difference causes a magnetic flux in the toroidal sense coil (6), which, if sufficiently large, activates the relay (5), forcing the contacts (4) apart and thus cutting off the electricity supply to the appliance. In some designs a power failure may also cause the switch contacts to open, producing the safe trip-on-power-failure behaviour mentioned above.
The test button (8) allows the correct operation of the device to be verified by passing a small current through the orange test wire (9). This simulates a fault by creating a deliberate imbalance in the sense coil. If the RCD does not trip when this button is pressed, then the device must be replaced.
RCD with integral overcurrent protection (RCBO or GFCI breaker)
Residual-current and over-current protection may be combined in one device for installation into the service panel; this device is known as a GFCI (Ground-Fault Circuit Interrupter) breaker in the US and Canada, and as an RCBO (residual-current circuit breaker with over-current protection) in Europe and Australia. They are effectively a combination of an RCD and an MCB. In the US, GFCI breakers are more expensive than GFCI outlets.
As well as requiring both line and neutral inputs and outputs (or, for three-phase devices, all phases), some RCDs/GFCIs require a functional earth (FE) connection. This serves both to provide EMC immunity and to allow the device to operate reliably if the input-side neutral connection is lost but line and earth remain.
For reasons of space, many devices, especially in DIN rail format, use flying leads rather than screw terminals, especially for the neutral input and FE connections. Additionally, because of the small form factor, the output cables of some models (Eaton/MEM) are used to form the primary winding of the RCD part, and the outgoing circuit cables must be led through a specially dimensioned terminal tunnel with the current transformer part around it. This can lead to incorrect failed trip results when testing with meter probes from the screw heads of the terminals, rather than from the final circuit wiring.
Having one RCD feeding another is generally unnecessary, provided they have been wired properly. One exception is the case of a TT earthing system, where the earth loop impedance may be high, meaning that a ground fault might not cause sufficient current to trip an ordinary circuit breaker or fuse. In this case a special 100mA (or greater) trip current time-delayed RCD is installed, covering the whole installation, and then more sensitive RCDs should be installed downstream of it for sockets and other circuits that are considered high-risk.
RCD with additional arc fault protection circuitry
In addition to ground fault circuit interrupters (GFCIs), arc-fault circuit interrupters (AFCI) are important as they offer added protection from potentially hazardous arc faults resulting from damage in branch circuit wiring as well as extensions to branches such as appliances and cord sets. By detecting arc faults and responding by interrupting power, AFCIs help reduce the likelihood of the home's electrical system being an ignition source of a fire. Dual function AFCI/GFCI devices offer both electrical fire prevention and shock prevention in one device making them a solution for many rooms in the home.
Characteristics
Differences in disconnection actions
Major differences exist regarding the manner in which an RCD unit will act to disconnect the power to a circuit or appliance.
There are four situations in which different types of RCD units are used:
At the consumer power distribution level, usually in conjunction with an RCBO resettable circuit breaker;
Built into a wall socket;
Plugged into a wall socket, which may be part of a power-extension cable; and
Built into the cord of a portable appliance, such as those intended to be used in outdoor or wet areas.
The first three of those situations relate largely to usage as part of a power-distribution system and are almost always of the passive or latched variety, whereas the fourth relates solely to specific appliances and is always of the active or non-latching variety. "Active" means that re-activation of the power supply after any inadvertent power outage is prevented, even once the mains supply becomes re-established; "latch" refers to a switch inside the unit housing the RCD that remains as set following any form of power outage, but has to be reset manually after the detection of an error condition.
In the fourth situation, it would be deemed highly undesirable, and probably very unsafe, for a connected appliance to automatically resume operation after a power disconnection without having the operator in attendance; as such, manual reactivation of the RCD is necessary.
The difference between the two modes of operation is that, for power-distribution purposes, the internal latch must remain set within the RCD after any form of power disconnection, whether caused by the user turning the power off or by a power outage; such arrangements are particularly applicable to connections for refrigerators and freezers.
Situation two is mostly installed just as described above, but some wall socket RCDs are available to fit the fourth situation, often by operating a switch on the fascia panel.
RCDs for the first and third situation are most commonly rated at 30mA and 40ms. For the fourth situation, there is generally a greater choice of ratings available, generally all lower than the other forms, but lower values often result in more nuisance tripping. Sometimes users apply protection in addition to one of the other forms, when they wish to override those with a lower rating. It may be wise to have a selection of type-four RCDs available, because connections made under damp conditions or using lengthy power cables are more prone to trip-out when any of the lower ratings of RCD are used; ratings as low as 10mA are available.
Number of poles and pole terminology
The number of poles represents the number of conductors that are interrupted when a fault condition occurs. RCDs used on single-phase AC supplies (two current paths), such as domestic power, are usually one- or two-pole designs, also known as single- and double-pole. A single-pole RCD interrupts only the energized conductor, while a double-pole RCD interrupts both the energized and return conductors. (In a single-pole RCD, the return conductor is usually anticipated to be at ground potential at all times and therefore safe on its own).
RCDs with three or more poles can be used on three-phase AC supplies (three current paths) or to disconnect the neutral conductor as well, with four-pole RCDs used to interrupt three-phase and neutral supplies. Specially designed RCDs can also be used with both AC and DC power distribution systems.
The following terms are sometimes used to describe the manner in which conductors are connected and disconnected by an RCD:
Single-pole or one-pole – the RCD will disconnect the energized wire only.
Double-pole or two-pole – the RCD will disconnect both the energized and return wires.
1+N and 1P+N – non-standard terms used in the context of RCBOs, at times used differently by different manufacturers. Typically these terms may signify that the return (neutral) conductor is an isolating pole only, without a protective element (an unprotected but switched neutral), that the RCBO provides a conducting path and connectors for the return (neutral) conductor but this path remains uninterrupted when a fault occurs (sometimes known as "solid neutral"), or that both conductors are disconnected for some faults (such as RCD detected leakage) but only one conductor is disconnected for other faults (such as overload).
Sensitivity
RCD sensitivity is expressed as the rated residual operating current, noted IΔn. Preferred values have been defined by the IEC, thus making it possible to divide RCDs into three groups according to their IΔn value:
high sensitivity (HS): 5 – 10 – 30mA (for direct-contact or life injury protection),
medium sensitivity (MS): 100 – 300 – 500 – 1000mA (for fire protection),
low sensitivity (LS): 3 – 10 – 30A (typically for protection of machines).
The 5mA sensitivity is typical for GFCI outlets.
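The grouping above can be expressed as a simple threshold check on IΔn; the sketch below is illustrative only, with the boundary values following the preferred values just listed.

```python
# Illustrative classification of an RCD by rated residual operating current IΔn.
def sensitivity_group(i_delta_n_amps):
    if i_delta_n_amps <= 0.030:   # 5, 10, 30 mA
        return "high sensitivity (HS): direct-contact / life protection"
    if i_delta_n_amps <= 1.0:     # 100, 300, 500, 1000 mA
        return "medium sensitivity (MS): fire protection"
    return "low sensitivity (LS): machine protection"  # 3, 10, 30 A

print(sensitivity_group(0.030))  # high sensitivity
print(sensitivity_group(0.300))  # medium sensitivity
print(sensitivity_group(10.0))   # low sensitivity
```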
Break time (response speed)
There are two groups of devices. 'G' (general use) 'instantaneous' RCDs have no intentional time delay. They must never trip at one-half of the nominal current rating, but must trip within 200 milliseconds for rated current, and within 40 milliseconds at five times rated current. 'S' (selective) or 'T' (time-delayed) RCDs have a short time delay. They are typically used at the origin of an installation for fire protection to discriminate with 'G' devices at the loads, and in circuits containing surge suppressors. They must not trip at one-half of rated current. They provide at least 130 milliseconds delay of tripping at rated current, 60 milliseconds at twice rated, and 50 milliseconds at five times rated. The maximum break time is 500ms at rated current, 200ms at twice rated, and 150ms at five times rated.
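The maximum break times quoted above can be read as a small lookup table; the sketch below encodes only the figures given in this paragraph (the 'G' entries therefore exist only for 1x and 5x rated residual current) and could be used to check a measured trip time against them. The function and dictionary names are illustrative.

```python
# Maximum break times (seconds) at multiples of the rated residual current IΔn,
# taken from the figures quoted above; illustrative compliance check only.
MAX_BREAK_TIME_S = {
    "G": {1: 0.200, 5: 0.040},
    "S": {1: 0.500, 2: 0.200, 5: 0.150},
}

def within_limit(device_type, multiple_of_idn, measured_seconds):
    """True if a measured trip time meets the tabulated maximum break time."""
    return measured_seconds <= MAX_BREAK_TIME_S[device_type][multiple_of_idn]

print(within_limit("G", 5, 0.025))  # True: 25 ms at 5x IΔn, limit 40 ms
print(within_limit("S", 1, 0.600))  # False: 600 ms at IΔn exceeds 500 ms
```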
Programmable earth fault relays are available to allow co-ordinated installations to minimise outage. For example, a power distribution system might have a 300mA, 300ms device at the service entry of a building, feeding several 100mA 'S' type at each sub-board, and 30mA 'G' type for each final circuit. In this way, a failure of a device to detect the fault will eventually be cleared by a higher-level device, at the cost of interrupting more circuits.
Type (types of leakage current detected)
IEC Standard 60755 (General requirements for residual current operated protective devices) defines the following types of RCD depending on the waveforms and frequency of the fault current:
Type AC RCDs trip on alternating sinusoidal residual current, suddenly applied or smoothly increasing.
Type A RCDs trip on alternating sinusoidal residual current and on residual pulsating direct current, suddenly applied or smoothly increasing.
Type F RCDs trip in the same conditions as Type A and in addition:
For composite residual currents, whether suddenly applied or slowly rising, intended for circuits supplied between line and neutral or line and earthed middle conductor;
For residual pulsating direct currents superimposed on smooth direct current.
Type B RCDs trip in the same conditions as Type F and in addition:
For residual sinusoidal alternating currents up to 1kHz;
For residual alternating currents superimposed on a smooth direct current;
For residual pulsating direct currents superimposed on a smooth direct current;
For residual pulsating rectified direct current which results from two or more phases;
For residual smooth direct currents, whether suddenly applied or slowly increased, independent of polarity.
The BEAMA RCD Handbook notes that types F and B have been introduced because some designs of types AC and A can be disabled if a DC current is present that saturates the core of the detector.
Directionality
RCDs may be uni-directional or bi-directional. Bi-directional devices have recently been introduced to address the problem of traditional uni-directional devices being unsuitable for certain configurations of home generation systems, such as photovoltaic (PV) installations.
Surge current resistance
The surge current refers to the peak current an RCD is designed to withstand using a test impulse of specified characteristics. The IEC 61008 and IEC 61009 standards require that RCDs withstand a 200A "ring wave" impulse. The standards also require RCDs classified as "selective" to withstand a 3000A impulse surge current of specified waveform.
Testing of correct operation
RCDs can and should be tested regularly with the built-in test button to confirm basic functionality. If the switch mechanism is not operated for a long period, it can become liable to sticking. This is not generally a problem for an overcurrent circuit breaker, because the force produced by the large current involved when it trips can be sufficient to break it free if stuck; an RCD, however, is designed to trip on a very small current, which can exert far too weak a force to break a stuck switch free, so the safety device may fail to operate. Operating the test button regularly shows whether or not a device is sticking. If so, manually operating the switch a few times may free it up temporarily, and replacement can be considered. More thorough testing performed by a suitably competent person as part of a periodic test of an electrical installation might include checking what current is required to make each device trip, and how quickly it trips, to verify that it is performing within specification.
Limitations
A residual-current circuit breaker cannot remove all risk of electric shock or fire. In particular, an RCD alone will not detect overload conditions, phase-to-neutral short circuits or phase-to-phase short circuits (see three-phase electric power). Over-current protection (fuses or circuit breakers) must be provided. Circuit breakers that combine the functions of an RCD with overcurrent protection respond to both types of fault. These are known as RCBOs and are available in 2-, 3- and 4-pole configurations. RCBOs will typically have separate circuits for detecting current imbalance and for overload current but use a common interrupting mechanism. Some RCBOs have separate levers for residual-current and over-current protection or use a separate indicator for ground faults.
An RCD helps to protect against electric shock when current flows through a person from a phase (line / hot) to earth. It cannot protect against electric shock when current flows through a person from phase to neutral or from phase to phase, for example where a finger touches both line and neutral contacts in a light fitting; a device cannot differentiate between current flow through an intended load and flow through a person, though the RCD may still trip if the person is in contact with the ground (earth), as some current may still pass through the person's finger and body to earth.
Whole installations on a single RCD, common in older installations in the UK, are prone to "nuisance" trips that can cause secondary safety problems with loss of lighting and defrosting of food. Frequently the trips are caused by deteriorating insulation on heater elements, such as water heaters and cooker elements or rings. Although regarded as a nuisance, the fault is with the deteriorated element and not the RCD: replacement of the offending element will resolve the problem, but replacing the RCD will not.
RCDs are not inherently selective: for example, when a ground fault occurs on a circuit protected by a 30 mA IΔn RCD in series with a 300 mA IΔn RCD, either or both may trip. Special time-delayed types are available to provide selectivity in such installations.
In the case of RCDs that need a power supply, a dangerous condition can arise if the neutral wire is broken or switched off on the supply side of the RCD while the corresponding line conductor remains uninterrupted. The tripping circuit needs power to work and does not trip when the power supply fails. Connected equipment will not work without a neutral, but the RCD cannot protect people from contact with the energized wire. For this reason circuit breakers must be installed in a way that ensures that the neutral wire cannot be switched off unless the line conductor is also switched off at the same time. Where there is a requirement for switching off the neutral wire, two-pole breakers (or four-pole for 3-phase) must be used. To provide some protection with an interrupted neutral, some RCDs and RCBOs are equipped with an auxiliary connection wire that must be connected to the earth busbar of the distribution board. This either enables the device to detect the missing neutral of the supply, causing the device to trip, or provides an alternative supply path for the tripping circuitry, enabling it to continue to function normally in the absence of the supply neutral.
Related to this, a single-pole RCD/RCBO interrupts the energized conductor only, while a double-pole device interrupts both the energized and return conductors. Usually this is a standard and safe practice, since the return conductor is held at ground potential anyway. However, because of its design, a single-pole RCD will not isolate or disconnect all relevant wires in certain uncommon situations, for example where the return conductor is not being held, as expected, at ground potential, or where current leakage occurs between the return and earth conductors. In these cases, a double-pole RCD will offer protection, since the return conductor would also be disconnected.
History and nomenclature
The world's first high-sensitivity earth leakage protection system (i.e. a system capable of protecting people from the hazards of direct contact with a live conductor) was a second-harmonic magnetic amplifier core-balance system, known as the magamp, developed in South Africa by Henri Rubin. Electrical hazards were of great concern in South African gold mines, and Rubin, an engineer at the company C.J. Fuchs Electrical Industries of Alberton, Johannesburg, initially developed a cold-cathode system in 1955 which operated at 525V and had a tripping sensitivity of 250mA. Prior to this, core balance earth leakage protection systems operated at sensitivities of about 10A.
The cold cathode system was installed in a number of gold mines and worked reliably. However, Rubin began working on a completely novel system with greatly improved sensitivity, and by early 1956, he had produced a prototype second-harmonic magnetic amplifier-type core balance system (South African Patent No. 2268/56 and Australian Patent No. 218360). The prototype magamp was rated at 220V, 60A and had an internally adjustable tripping sensitivity of 12.5–17.5mA. Very rapid tripping times were achieved through a novel design, and this combined with the high sensitivity was well within the safe current–time envelope for ventricular fibrillation determined by Charles Dalziel of the University of California, Berkeley, United States, who had estimated electrical shock hazards in humans. This system, with its associated circuit breaker, included overcurrent and short-circuit protection. In addition, the original prototype was able to trip at a lower sensitivity in the presence of an interrupted neutral, thus protecting against an important cause of electrical fire.
Following the accidental electrocution of a woman in a domestic accident at the Stilfontein gold mining village near Johannesburg, a few hundred F.W.J. 20mA magamp earth leakage protection units were installed in the homes of the mining village during 1957 and 1958. F.W.J. Electrical Industries, which later changed its name to FW Electrical Industries, continued to manufacture 20mA single phase and three phase magamp units.
At the time that he worked on the magamp, Rubin also considered using transistors in this application, but concluded that the early transistors then available were too unreliable. However, with the advent of improved transistors, the company that he worked for and other companies later produced transistorized versions of earth leakage protection.
In 1961, Dalziel, working with Rucker Manufacturing Co., developed a transistorized device for earth leakage protection which became known as a ground fault circuit interrupter (GFCI), sometimes colloquially shortened to ground fault interrupter (GFI). This name for high-sensitivity earth leakage protection is still in common use in the United States.
In the early 1970s most North American GFCI devices were of the circuit breaker type. GFCIs built into the outlet receptacle became commonplace beginning in the 1980s. The circuit breaker type, installed into a distribution panel, suffered from accidental trips mainly caused by poor or inconsistent insulation on the wiring. False trips were frequent when insulation problems were compounded by long circuit lengths. So much current leaked along the length of the conductors' insulation that the breaker might trip with the slightest increase of current imbalance. The migration to outlet-receptacle–based protection in North American installations reduced the accidental trips and provided obvious verification that wet areas were under electrical-code–required protection. European installations continue to use primarily RCDs installed at the distribution board, which provides protection in case of damage to fixed wiring. In Europe socket-based RCDs are primarily used for retrofitting.
Regulation and adoption
Regulations differ widely from country to country. A single RCD installed for an entire electrical installation provides protection against shock hazards on all circuits; however, any fault may cut all power to the premises. A solution is to create groups of circuits, each with its own RCD, or to use an RCBO for each individual circuit.
Australia
In Australia, residual current devices have been mandatory on power circuits since 1991 and on light circuits since 2000. In Queensland specifically, residual current devices have been compulsory for all new homes since 1992.
A minimum of two RCDs is required per domestic installation. All socket outlets and lighting circuits are to be distributed over the RCDs. A maximum of three subcircuits may be connected to a single RCD. In Australia, the RCD testing procedure must meet a set standard, namely AS/NZS 3760:2010 for in-service safety inspection and testing of electrical equipment.
Austria
Austria regulates residual current devices under the ÖVE E8001-1/A1:2013-11-01 norm (most recent revision). They have been required in private housing since 1980. The maximum activation time must not exceed 0.4 seconds. An RCD with a rated residual current of no more than 30mA must be installed on all circuits with power plugs rated up to 16A.
Additional requirements are placed on circuits in wet areas, construction sites and commercial buildings.
Belgium
Belgian domestic installations are required to be equipped with a 300mA residual current device that protects all circuits. Furthermore, at least one 30mA residual current device is required that protects all circuits in "wet rooms" (e.g. bathroom, kitchen) as well as circuits that power certain "wet" appliances (washing machine, tumble dryer, dishwasher). Electrical underfloor heating is required to be protected by a 100mA RCD. These RCDs must be of type A.
Brazil
Since NBR 5410 (1997), residual current devices and grounding have been required for new construction or repair in wet areas, outdoor areas, interior outlets used for external appliances, and areas where water is more likely to be present, such as bathrooms and kitchens.
Denmark
Denmark requires 30mA RCDs on all circuits that are rated for less than 20 A (circuits at greater rating are mostly used for distribution). RCDs became mandatory in 1975 for new buildings, and then for all buildings in 2008.
France
According to the NF C 15-100 regulation (first issued in 1911, current edition 2002), a general RCD not exceeding 100 to 300mA at the origin of the installation is mandatory. Moreover, all circuits must also include 30mA protection in the user's distribution board, with each RCD protecting up to 8 circuit breakers, usually on the same DIN rail (electric panels of 1 to 4 DIN rails are the norm for residential installations). Before 1991, this 30mA protection was mandatory only in rooms where there is water, high power or sensitive equipment (bathrooms, kitchens, IT...). The type of RCD required (A, AC, F) depends upon the type of equipment that will be connected and the maximum power of the socket outlet. Minimum distances between electrical devices and water or the floor are specified and mandatory.
Germany
Since 1 May 1984, RCDs have been mandatory for all rooms with a bath tub or a shower. Since June 2007, Germany has required the use of RCDs with a trip current of no more than 30mA on sockets rated up to 32A which are for general use (DIN Verband der Elektrotechnik, Elektronik und Informationstechnik (VDE) 0100-410 Nr. 411.3.3). Since 1987, type "AC" RCDs have not been permitted for protecting people against electric shock; type "A" or type "B" devices must be used.
India
According to Regulation 36 of the Electricity Regulations 1990
a) For a place of public entertainment, protection against earth leakage current must be provided by a residual current device of sensitivity not exceeding 10mA.
b) For a place where the floor is likely to be wet or where the wall or enclosure is of low electrical resistance, protection against earth leakage current must be provided by a residual current device of sensitivity not exceeding 10mA.
c) For an installation where hand-held equipment, apparatus or appliance is likely to be used, protection against earth leakage current must be provided by a residual current device of sensitivity not exceeding 30mA.
d) For an installation other than the installation in (a), (b) and (c), protection against earth leakage current must be provided by a residual current device of sensitivity not exceeding 100mA.
Italy
The Italian law (n. 46 March 1990) prescribes RCDs with no more than 30mA residual current (informally called "salvavita"—life saver, after early BTicino models, or differential circuit breaker for the mode of operation) for all domestic installations to protect all the lines. The law was recently updated to mandate at least two separate RCDs for separate domestic circuits. Short-circuit and overload protection has been compulsory since 1968.
Malaysia
According to the latest guidelines for electrical wiring in residential buildings (2008 handbook), the overall residential wiring needs to be protected by a residual current device of sensitivity not exceeding 100mA. Additionally, all power sockets need to be protected by a residual current device of sensitivity not exceeding 30mA, and all equipment in wet places (water heater, water pump) needs to be protected by a residual current device of sensitivity not exceeding 10mA.
New Zealand
From January 2003, all new circuits originating at the switchboard supplying lighting or socket outlets (power points) in domestic buildings must have RCD protection. Residential facilities (such as boarding houses, hospitals, hotels and motels) will also require RCD protection for all new circuits originating at the switchboard supplying socket outlets. These RCDs will normally be located at the switchboard. They will provide protection for all electrical wiring and appliances plugged into the new circuits.
North America
In North America socket-outlets located in places where an easy path to ground exists—such as wet areas and rooms with uncovered concrete floors—must be protected by a GFCI. The US National Electrical Code has required devices in certain locations to be protected by GFCIs since the 1960s. Beginning with underwater swimming pool lights (1968) successive editions of the code have expanded the areas where GFCIs are required to include: construction sites (1974), bathrooms and outdoor areas (1975), garages (1978), areas near hot tubs or spas (1981), hotel bathrooms (1984), kitchen counter sockets (1987), crawl spaces and unfinished basements (1990), near wet bar sinks (1993), near laundry sinks (2005), in laundry rooms (2014) and in kitchens (2023).
GFCIs are commonly available as an integral part of a socket or a circuit breaker installed in the distribution panelboard. GFCI sockets invariably have rectangular faces and accept so-called Decora face plates, and can be mixed with regular outlets or switches in a multi-gang box with standard cover plates. In both Canada and the US older two-wire, ungrounded NEMA 1 sockets may be replaced with NEMA 5 sockets protected by a GFCI (integral with the socket or with the corresponding circuit breaker) in lieu of rewiring the entire circuit with a grounding conductor. In such cases the sockets must be labeled "no equipment ground" and "GFCI protected"; GFCI manufacturers typically provide tags for the appropriate installation description.
GFCIs approved for protection against electric shock trip at 5mA within 25ms. A GFCI device which protects equipment (not people) is allowed to trip as high as 30mA of current; this is known as an Equipment Protective Device (EPD). RCDs with trip currents as high as 500mA are sometimes deployed in environments (such as computing centers) where a lower threshold would carry an unacceptable risk of accidental trips. These high-current RCDs serve for equipment and fire protection instead of protection against the risks of electrical shocks.
In the United States the American Boat and Yacht Council requires both GFCIs for outlets and Equipment Leakage Circuit Interrupters (ELCI) for the entire boat. The difference is GFCIs trip on 5mA of current whereas ELCIs trip on 30mA after up to 100ms. The greater values are intended to provide protection while minimizing nuisance trips.
Norway
In Norway, it has been required in all new homes since 2002, and on all new sockets since 2006. This applies to 32A sockets and below. The RCD must trigger after a maximum 0.4 seconds for 230V circuits, or 0.2 seconds for 400V circuits.
South Africa
South Africa mandated the use of Earth Leakage Protection devices in residential environments (e.g. houses, flats, hotels, etc.) from October 1974, with regulations being refined in 1975 and 1976.
Devices need to be installed in new premises and when repairs are carried out. Protection is required for power outlets and lighting, with the exception of emergency lighting, which should not be interrupted. The standard device used in South Africa is a hybrid of an earth leakage protection device and an RCCB.
Switzerland
According to the NIBT regulation, the use of RCD type AC is forbidden (since 2010).
Taiwan
Taiwan requires earth leakage circuit breakers on circuits supplying receptacles in washrooms and balconies, and on kitchen receptacles no more than 1.8 metres from the sink. This requirement also applies to circuits for water heaters in washrooms and circuits involving devices in water, lights on metal frames, public drinking fountains and so on. In principle, ELCBs should be installed on branch circuits, with a trip current of no more than 30mA within 0.1 seconds, according to Taiwanese law.
Turkey
Turkey has required the use of RCDs rated at no more than 30mA and 300mA in all new homes since 2004. This rule was introduced in RG-16/06/2004-25494.
United Kingdom
The current (18th) edition of the IET Electrical Wiring Regulations requires that all socket outlets in most installations have RCD protection, though there are exemptions. Non armoured cables buried in walls must also be RCD protected (again with some specific exemptions). Provision of RCD protection for circuits present in bathrooms and shower rooms reduces the requirement for supplementary bonding in those locations. Two RCDs may be used to cover the installation, with upstairs and downstairs lighting and power circuits spread across both RCDs. When one RCD trips, power is maintained to at least one lighting and power circuit. Other arrangements, such as the use of RCBOs, may be employed to meet the regulations. The new requirements for RCDs do not affect most existing installations unless they are rewired, the distribution board is changed, a new circuit is installed, or alterations are made such as additional socket outlets or new cables buried in walls.
RCDs used for shock protection must be of the 'immediate' operation type (not time-delayed) and must have a residual current sensitivity of no greater than 30mA.
If spurious tripping would cause a greater problem than the risk of the electrical accident the RCD is supposed to prevent (examples might be a supply to a critical factory process, or to life support equipment), RCDs may be omitted, providing affected circuits are clearly labelled and the balance of risks considered; this may include the provision of alternative safety measures.
The previous edition of the regulations required use of RCDs for socket outlets that were liable to be used by outdoor appliances. Normal practice in domestic installations was to use a single RCD to cover all the circuits requiring RCD protection (typically sockets and showers) but to have some circuits (typically lighting) not RCD protected. This was to avoid a potentially dangerous loss of lighting should the RCD trip. Protection arrangements for other circuits varied. To implement this arrangement it was common to install a consumer unit incorporating an RCD in what is known as a split load configuration, where one group of circuit breakers is supplied direct from the main switch (or time delay RCD in the case of a TT earth) and a second group of circuits is supplied via the RCD. This arrangement had the recognised problems that cumulative earth leakage currents from the normal operation of many items of equipment could cause spurious tripping of the RCD, and that tripping of the RCD would disconnect power from all the protected circuits.
| Technology | Electrical protective devices | null |
530862 | https://en.wikipedia.org/wiki/Benzocaine | Benzocaine | Benzocaine, sold under the brand name Orajel amongst others, is a local anesthetic, belonging to the amino ester drug class, commonly used as a topical painkiller or in cough drops. It is the active ingredient in many over-the-counter anesthetic ointments such as products for oral ulcers. It is combined with antipyrine to form A/B ear drops. In the US, products containing benzocaine for oral application are contraindicated in children younger than two years old. In the European Union, the contraindication applies to children under 12 years of age.
It was first synthesised in 1890 in Germany and approved for medical use in 1902.
Medical uses
Benzocaine is indicated to treat a variety of pain-related conditions. It may be used for:
Local anesthesia of oral and pharyngeal mucous membranes (sore throat, cold sores, mouth ulcers, toothache, sore gums, denture irritation)
Otic pain (earache)
Surgical or procedural local anesthesia
Relief of skin pain caused by sunburn, ingrown toenails, or hemorrhoids
Examples of combination medications of benzocaine include:
Antipyrine-benzocaine otic consists of antipyrine and benzocaine, and is used to relieve ear pain and remove earwax.
Cepacol consists of menthol and benzocaine, and is used to treat sore throat.
A solution of benzocaine and menthol is marketed for the treatment of bee stings, mosquito bites, jellyfish stings, and other insect bites.
Other uses
Benzocaine is used as a key ingredient in numerous pharmaceuticals:
Some glycerol-based ear medications for use in removing excess wax as well as relieving ear conditions such as otitis media and swimmer's ear.
Some previous diet products such as Ayds.
Some condoms designed to prevent premature ejaculation. Benzocaine largely inhibits sensitivity on the penis, and can allow for an erection to be maintained longer (in a continuous act) by delaying ejaculation. Conversely, an erection will also fade faster if stimulus is interrupted.
Benzocaine mucoadhesive patches have been used in reducing orthodontic pain.
In Poland it is included, together with menthol and zinc oxide, in a liquid powder (not to be confused with liquid face powder) used mainly after mosquito bites. Today it is sold there ready-made as Pudroderm, whereas it was once prepared as a pharmaceutical compound.
Available forms
Benzocaine can come in a variety of preparations including:
Oral preparations:
Lozenges (ex. Cepacol, Mycinettes)
Throat Spray (ex. Ultra Chloraseptic)
Topical preparations:
Aerosol (ex. Topex)
Gel (ex. Orajel, Kank-A)
Paste (ex. Orabase)
Cream (ex. Lanacane - active ingredient 3% Benzocaine)
Otic preparations:
Solution (ex. Allergen)
Side effects
Benzocaine is generally well tolerated and non-toxic when applied topically as recommended.
However, there have been reports of serious, life-threatening adverse effects (e.g., seizures, coma, irregular heart beat, respiratory depression) with over-application of topical products or when applying topical products that contain high concentrations of benzocaine to the skin.
Overapplication of oral anesthetics such as benzocaine can increase the risk of pulmonary aspiration by relaxing the gag-reflex and allowing regurgitated stomach contents or oral secretions to enter the airway. Applying an oral anesthetic and consuming beverages before going to bed can be particularly hazardous.
The topical use of higher concentration (10–20%) benzocaine products applied to the mouth or mucous membranes has been found to be a cause of methemoglobinemia, a disorder in which the amount of oxygen carried by the blood is greatly reduced. This side effect is most common in children under two years of age. As a result, the FDA has stated that benzocaine products should not be used in children under two years of age, unless directed by and supervised by a healthcare professional. In European countries, the contraindication applies to children under 12 years of age. Symptoms of methemoglobinemia usually occur within minutes to hours of applying benzocaine, and can occur upon the first-time use or after additional use.
Benzocaine may cause allergic reactions. These include:
Contact dermatitis (redness and itchiness)
Anaphylaxis (rare)
Pharmacology
Pharmacodynamics
Pain is caused by the stimulation of free nerve endings. When the nerve endings are stimulated, sodium enters the neuron, causing depolarization of the nerve and subsequent initiation of an action potential. The action potential is propagated down the nerve toward the central nervous system, which interprets this as pain. Benzocaine acts to inhibit the voltage-gated sodium channels (VGSCs) on the neuron membrane, stopping the propagation of the action potential.
Chemistry
Benzocaine is the ethyl ester of p-aminobenzoic acid (PABA). It can be prepared from PABA and ethanol by Fischer esterification or via the reduction of ethyl p-nitrobenzoate. Benzocaine is sparingly soluble in water; it is more soluble in dilute acids and very soluble in ethanol, chloroform, and ethyl ether. The melting point of benzocaine is 88–92 °C, and the boiling point is about 310 °C. The density of benzocaine is 1.17 g/cm3.
Benzocaine is commonly found, particularly in Britain, as an additive in street cocaine and also as a bulking agent in "legal highs". Benzocaine gives a numbing effect similar to cocaine, and as a bulking and binding agent it cannot be detected once mixed. It is the most popular cutting agent worldwide.
Treatment of benzocaine with hydrazine leads to aminostimil, a compound related to isoniazid.
Synthesis
Benzocaine can be prepared by esterification using 4-aminobenzoic acid and ethanol. It can also be prepared by reduction of ethyl 4-nitrobenzoate to the amine. In industrial practice, the reducing agent is usually iron and water in the presence of a little acid.
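In schematic form, and omitting catalysts, stoichiometry and workup, the two routes can be sketched as follows (a simplified outline rather than a full procedure):
4-H2N-C6H4-COOH + C2H5OH -> 4-H2N-C6H4-COOC2H5 + H2O (acid-catalysed Fischer esterification)
4-O2N-C6H4-COOC2H5 + Fe/H2O (trace acid) -> 4-H2N-C6H4-COOC2H5 (reduction of the nitro ester; shown unbalanced)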
History
Benzocaine was first synthesized in 1890 by the German chemist Eduard Ritsert (1859–1946), in the town of Eberbach and introduced to the market in 1902 under the name "Anästhesin".
Veterinary medicine
Bath solutions of benzocaine and its derivatives are commonly used to anesthetize amphibians for surgery. Benzocaine-based anesthetics are potent and highly effective for both anesthesia and euthanasia in amphibians.
| Biology and health sciences | Anesthetics | Health |
530955 | https://en.wikipedia.org/wiki/Barcelona%20Metro | Barcelona Metro | The Barcelona Metro (Catalan and Spanish: ) is a rapid transit network that runs mostly underground in central Barcelona and into the city's suburbs. It is part of the larger public transport system of Barcelona, the capital of Catalonia, Spain, with unified fares under the (ATM) scheme. As of 2024, the network is operated by two separate companies: (TMB) and (FGC). It is made up of 12 lines, combining the lines owned by the two companies. Two lines, L9 and L10, are still under construction, with different sections of each having opened between 2009 and 2018; they are due to be fully completed in 2030. Three lines on the network have operated with automatic, driverless trains since 2009: Line 11 was converted to driverless operation first, and Lines 9 and 10 then opened as driverless lines.
It is one of only two metros worldwide to operate on three different track gauges, being on line 8, older Iberian gauge on line 1, and on the remaining lines; the other metro with three gauges being the Toei Subway in Tokyo, which uses two narrow gauges and standard gauge. It is the only metro worldwide to operate on both narrow and broad gauge tracks.
The network length is , with 183 stations, as of November 2021. It uses spare power from its regenerative braking to power charging stations in the vicinity of its infrastructure.
History
The first rapid transit railway service in Barcelona was founded in 1863 by the private company Ferrocarril de Barcelona a Sarrià ("Railway from Barcelona to Sarrià"; Sarrià joined the municipality of Barcelona in 1916). This line later evolved into what is now essentially the current L6 metro service. The system, now part of the Ferrocarrils de la Generalitat de Catalunya company, later adopted a naming style inspired by the London Underground, using long names for its lines ("Sarrià line", "Balmes line"...).
Much later, in the 1920s, second and third rapid transit railway systems were founded with the construction of the Gran Metro between Lesseps and the Plaça de Catalunya (part of the modern L3) and, two years later, the Metro Transversal (now part of L1). The latter was built between the Plaça de Catalunya and la Bordeta to link the city centre with the Plaça d'Espanya and Montjuïc, the site of the 1929 Barcelona International Exposition. These two later rapid transit companies contrasted with the first in being inspired by the Métropolitain de Paris (named after the Metropolitan Railway, from which the word "metro" comes).
As of 2022, the network consists of 12 lines managed by 2 different operators: Transports Metropolitans de Barcelona (TMB) and Ferrocarrils de la Generalitat de Catalunya (FGC, or Catalan Government Railways). Fares and nomenclature are controlled by the Autoritat del Transport Metropolità, a citywide system that also includes local and regional buses, tramways and some commuter and regional train services.
Network
Since early 2020, the total length of the network is , with 189 stations, counting the TMB and FGC lines and the Montjuïc funicular.
The major network, operated by TMB, consists of eight lines, numbered L1 to L5 and L9 to L11 (which are distinguished on network maps by different colours), covering of route and 141 stations.
FGC lines are numbered L6, L7, L8 and L12. These lines, except all of L12 and part of L7, share tracks with commuter rail lines.
The Barcelona Metro lines do not have a name of their own but are generally referred to by their colour or by the number and the names of their termini.
Lines
The lines run as follows:
In addition to those, Renfe and FGC trains and the increasingly important tram routes and stations are displayed on most recent maps, including the info maps in the metro stations, all in a single variety of dark green.
L9 and L10
Construction work is currently taking place on L9/L10, which when finished will run from Badalona and Santa Coloma de Gramenet to the Zona Franca district and El Prat International Airport. The lines, which share a central section between Bon Pastor and Can Tries | Gornal, will form the longest automated metro line in Europe, at , and combined will have 52 stations. The project was approved in 2000 but has been delayed by technical difficulties, and some sections are pending further geological analysis. The first section of Line 9, running between La Sagrera and Can Zam, opened in 2009, and by June 2010 eleven new stations on Lines L9 and L10 had opened. In February 2016, the 15-station southern section of Line L9 between Zona Universitària and the airport (Aeroport T1 station) opened.
Rolling stock
Tickets and pricing
In addition to the one-way ticket there are a number of other tickets and cards. All of the Autoritat del Transport Metropolità (ATM) transport cards are valid and can be used in the Barcelona Metro. These are:
Airport Ticket, a one-way ticket for a journey between Aeroport T1 and Aeroport T2 stations on metro line L9 Sud and the rest of the metro network. Standard metro tickets such as single tickets are not valid for a trip to the airport.
T casual, which includes ten rides at a discounted price
T usual, unlimited journeys made in 30 consecutive days from the first use
T-16, unlimited journeys for children below 16
All of the metro stations are within fare zone 1.
Stations
At the end of 2018, there were 187 operational stations in the Barcelona Metro, served by the 12 lines in current use. The average distance between two stations is 807.50 metres.
An overwhelming majority of stations in the network lack related buildings or structures aboveground, mostly consisting of an access with stairs, escalators or elevators. The official TMB metro indicator, a red rhombus with an M inside, remains unused by FGC lines, which use their company logo and a different rhombus-shaped logo (rather similar to the one used inside the Madrid Metro) inside stations. Below ground, their decoration is remarkably sober, with the exception of the newer stations.
Disused stations
A number of stations in the network have been closed, were never inaugurated, or have been moved to a nearby location. See the main article for more details.
Accessibility
Accessibility for passengers with reduced mobility is nearing completion; 8 out of 192 stations are not yet fully accessible.
The non-accessible stations are:
Ciutadella | Vila Olímpica (L4)
Clot (L1)
Espanya (L1/L3) - The FGC Plaça Espanya station (L8 and suburban lines) is accessible.
Maragall (L4/L5)
Plaça de Sants (L1/L5)
Urquinaona (L1/L4)
Verdaguer (L4/L5)
Virrei Amat (L5)
Lines L2, L6, L7, L8, L9 Nord, L9 Sud, L10 Nord, L10 Sud, L11 and L12 are fully accessible.
Non-accessible connections (in both directions):
Catalunya L1/Rodalies (commuter/regional) to/from L3/FGC (metro L6/L7 and commuter)
Passeig de Gràcia L2/L4 to/from L3/Rodalies (commuter/regional)
Clot L1 to L2 in both directions (the Clot L2 station is accessible).
Ciutadella | Vila Olímpica L4 to/from Trambesòs
For up-to-date information, check the official sites of TMB and FGC.
Transportation in the Metropolitan Area of Barcelona
The Barcelona Metro is part of a larger transportation network, regulated and fare-integrated by Autoritat del Transport Metropolità.
Among these services, there are two large systems which operate both inside and outside the city limits of Barcelona: the commuter train lines operated by Renfe, grouped under Rodalies Barcelona, and the Ferrocarrils de la Generalitat de Catalunya lines, which begin as the metro lines the company operates (L6, L7 and L8) and continue as a fully-fledged railway system serving most of the metropolitan area (see the list of FGC lines). FGC is also developing metro-style services for Sabadell and Terrassa; see Barcelona–Vallès Line.
Network map
In popular culture
The Spanish psychological horror film "Estación Rocafort" prominently features the Barcelona subway, with the Rocafort station serving as a key setting for much of the plot. The movie draws inspiration from the dark legend surrounding the Rocafort Station. Directed by Luis Prieto, the film stars Natalia Azahara alongside Javier Gutiérrez, Valèria Sorolla and Albert Baró.
| Technology | Spain | null |
531239 | https://en.wikipedia.org/wiki/Rotational%20spectroscopy | Rotational spectroscopy | Rotational spectroscopy is concerned with the measurement of the energies of transitions between quantized rotational states of molecules in the gas phase. The rotational spectrum (power spectral density vs. rotational frequency) of polar molecules can be measured in absorption or emission by microwave spectroscopy or by far infrared spectroscopy. The rotational spectra of non-polar molecules cannot be observed by those methods, but can be observed and measured by Raman spectroscopy. Rotational spectroscopy is sometimes referred to as pure rotational spectroscopy to distinguish it from rotational-vibrational spectroscopy where changes in rotational energy occur together with changes in vibrational energy, and also from ro-vibronic spectroscopy (or just vibronic spectroscopy) where rotational, vibrational and electronic energy changes occur simultaneously.
For rotational spectroscopy, molecules are classified according to symmetry into spherical tops, linear molecules, and symmetric tops; analytical expressions can be derived for the rotational energy terms of these molecules. Analytical expressions can be derived for the fourth category, asymmetric top, for rotational levels up to J=3, but higher energy levels need to be determined using numerical methods. The rotational energies are derived theoretically by considering the molecules to be rigid rotors and then applying extra terms to account for centrifugal distortion, fine structure, hyperfine structure and Coriolis coupling. Fitting the spectra to the theoretical expressions gives numerical values of the angular moments of inertia from which very precise values of molecular bond lengths and angles can be derived in favorable cases. In the presence of an electrostatic field there is Stark splitting which allows molecular electric dipole moments to be determined.
An important application of rotational spectroscopy is in exploration of the chemical composition of the interstellar medium using radio telescopes.
Applications
Rotational spectroscopy has primarily been used to investigate fundamental aspects of molecular physics. It is a uniquely precise tool for the determination of molecular structure in gas-phase molecules. It can be used to establish barriers to internal rotation such as that associated with the rotation of the group relative to the group in chlorotoluene (). When fine or hyperfine structure can be observed, the technique also provides information on the electronic structures of molecules. Much of current understanding of the nature of weak molecular interactions such as van der Waals, hydrogen and halogen bonds has been established through rotational spectroscopy. In connection with radio astronomy, the technique has a key role in exploration of the chemical composition of the interstellar medium. Microwave transitions are measured in the laboratory and matched
to emissions from the interstellar medium using a radio telescope. was the first stable polyatomic molecule to be identified in the interstellar medium. The measurement of chlorine monoxide is important for atmospheric chemistry. Current projects in astrochemistry involve both laboratory microwave spectroscopy and observations made using modern radiotelescopes such as the Atacama Large Millimeter/submillimeter Array (ALMA).
Overview
A molecule in the gas phase is free to rotate relative to a set of mutually orthogonal axes of fixed orientation in space, centered on the center of mass of the molecule. Free rotation is not possible for molecules in liquid or solid phases due to the presence of intermolecular forces. Rotation about each unique axis is associated with a set of quantized energy levels dependent on the moment of inertia about that axis and a quantum number. Thus, for linear molecules the energy levels are described by a single moment of inertia and a single quantum number, J, which defines the magnitude of the rotational angular momentum.
For nonlinear molecules which are symmetric rotors (or symmetric tops - see next section), there are two moments of inertia and the energy also depends on a second rotational quantum number, K, which defines the vector component of rotational angular momentum along the principal symmetry axis. Analysis of spectroscopic data with the expressions detailed below results in quantitative determination of the value(s) of the moment(s) of inertia. From these, precise values of the molecular structure and dimensions may be obtained.
For a linear molecule, analysis of the rotational spectrum provides values for the rotational constant and the moment of inertia of the molecule, and, knowing the atomic masses, can be used to determine the bond length directly. For diatomic molecules this process is straightforward. For linear molecules with more than two atoms it is necessary to measure the spectra of two or more isotopologues, such as 16O12C32S and 16O12C34S. This allows a set of simultaneous equations to be set up and solved for the bond lengths. A bond length obtained in this way is slightly different from the equilibrium bond length. This is because there is zero-point energy in the vibrational ground state, to which the rotational states refer, whereas the equilibrium bond length is at the minimum in the potential energy curve. The relation between the rotational constants is given by Bv = Be − α(v + 1/2), where v is a vibrational quantum number and α is a vibration-rotation interaction constant which can be calculated if the B values for two different vibrational states can be found.
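As a concrete numerical sketch of how a bond length follows from a rotational constant, the snippet below applies the rigid-rotor relations I = h/(8π²cB) and d = √(I/μ); the rotational constant and atomic masses (roughly those of 12C16O) are illustrative values chosen here, not figures quoted in this article.
import math

h = 6.62607015e-34        # Planck constant, J s
c = 2.99792458e10         # speed of light in cm/s, so B is taken in cm−1
amu = 1.66053906660e-27   # atomic mass unit, kg

def bond_length_pm(B_cm, m1_amu, m2_amu):
    # rigid-rotor bond length (pm) from rotational constant B (cm−1)
    mu = (m1_amu * m2_amu) / (m1_amu + m2_amu) * amu   # reduced mass, kg
    I = h / (8 * math.pi ** 2 * c * B_cm)              # moment of inertia, kg m^2
    return math.sqrt(I / mu) * 1e12                    # bond length in pm

print(round(bond_length_pm(1.9225, 12.000, 15.995), 1))   # ~113 pm for a CO-like case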
For other molecules, if the spectra can be resolved and individual transitions assigned both bond lengths and bond angles can be deduced. When this is not possible, as with most asymmetric tops, all that can be done is to fit the spectra to three moments of inertia calculated from an assumed molecular structure. By varying the molecular structure the fit can be improved, giving a qualitative estimate of the structure. Isotopic substitution is invaluable when using this approach to the determination of molecular structure.
Classification of molecular rotors
In quantum mechanics the free rotation of a molecule is quantized, so that the rotational energy and the angular momentum can take only certain fixed values, which are related simply to the moment of inertia, I, of the molecule. For any molecule, there are three moments of inertia: IA, IB and IC about three mutually orthogonal axes A, B, and C with the origin at the center of mass of the system. The general convention, used in this article, is to define the axes such that IA ≤ IB ≤ IC, with axis A corresponding to the smallest moment of inertia. Some authors, however, define the A axis as the molecular rotation axis of highest order.
The particular pattern of energy levels (and, hence, of transitions in the rotational spectrum) for a molecule is determined by its symmetry. A convenient way to look at the molecules is to divide them into four different classes, based on the symmetry of their structure. These are
Selection rules
Microwave and far-infrared spectra
Transitions between rotational states can be observed in molecules with a permanent electric dipole moment. A consequence of this rule is that no microwave spectrum can be observed for centrosymmetric linear molecules such as (dinitrogen) or HCCH (ethyne), which are non-polar. Tetrahedral molecules such as (methane), which have both a zero dipole moment and isotropic polarizability, would not have a pure rotation spectrum but for the effect of centrifugal distortion; when the molecule rotates about a 3-fold symmetry axis a small dipole moment is created, allowing a weak rotation spectrum to be observed by microwave spectroscopy.
With symmetric tops, the selection rule for electric-dipole-allowed pure rotation transitions is ΔK = 0, ΔJ = ±1. Since these transitions are due to absorption (or emission) of a single photon with a spin of one, conservation of angular momentum implies that the molecular angular momentum can change by at most one unit. Moreover, the quantum number K is limited to values from +J to −J, inclusive.
Raman spectra
For Raman spectra the molecules undergo transitions in which an incident photon is absorbed and another scattered photon is emitted. The general selection rule for such a transition to be allowed is that the molecular polarizability must be anisotropic, which means that it is not the same in all directions. Polarizability is a 3-dimensional tensor that can be represented as an ellipsoid. The polarizability ellipsoid of spherical top molecules is in fact spherical so those molecules show no rotational Raman spectrum. For all other molecules both Stokes and anti-Stokes lines can be observed and they have similar intensities due to the fact that many rotational states are thermally populated. The selection rule for linear molecules is ΔJ = 0, ±2. The reason for the values ±2 is that the polarizability returns to the same value twice during a rotation. The value ΔJ = 0 does not correspond to a molecular transition but rather to Rayleigh scattering in which the incident photon merely changes direction.
The selection rule for symmetric top molecules is
ΔK = 0
If K = 0, then ΔJ = ±2
If K ≠ 0, then ΔJ = 0, ±1, ±2
Transitions with ΔJ = +1 are said to belong to the R series, whereas transitions with ΔJ = +2 belong to an S series. Since Raman transitions involve two photons, it is possible for the molecular angular momentum to change by two units.
Units
The units used for rotational constants depend on the type of measurement. With infrared spectra in the wavenumber scale (), the unit is usually the inverse centimeter, written as cm−1, which is literally the number of waves in one centimeter, or the reciprocal of the wavelength in centimeters (). On the other hand, for microwave spectra in the frequency scale (), the unit is usually the gigahertz. The relationship between these two units is derived from the expression
ν = c/λ, where ν is a frequency, λ is a wavelength and c is the velocity of light. It follows that the wavenumber is ν̃ = 1/λ = ν/c.
As 1 GHz = 10⁹ Hz, the numerical conversion can be expressed as B [cm−1] = B [GHz] / 29.9792458.
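A minimal helper illustrating this conversion (the example value is an arbitrary illustration):
def ghz_to_wavenumber(B_GHz):
    # rotational constant in GHz -> cm−1, using 1 cm−1 = 29.9792458 GHz
    return B_GHz / 29.9792458

print(round(ghz_to_wavenumber(57.636), 4))   # an arbitrary example value, ~1.9225 cm−1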
Effect of vibration on rotation
The population of vibrationally excited states follows a Boltzmann distribution, so low-frequency vibrational states are appreciably populated even at room temperatures. As the moment of inertia is higher when a vibration is excited, the rotational constants (B) decrease. Consequently, the rotation frequencies in each vibration state are different from each other. This can give rise to "satellite" lines in the rotational spectrum. An example is provided by cyanodiacetylene, H−C≡C−C≡C−C≡N.
Further, there is a fictitious force, Coriolis coupling, between the vibrational motion of the nuclei in the rotating (non-inertial) frame. However, as long as the vibrational quantum number does not change (i.e., the molecule is in only one state of vibration), the effect of vibration on rotation is not important, because the time for vibration is much shorter than the time required for rotation. The Coriolis coupling is often negligible, too, if one is interested in low vibrational and rotational quantum numbers only.
Effect of rotation on vibrational spectra
Historically, the theory of rotational energy levels was developed to account for observations of vibration-rotation spectra of gases in infrared spectroscopy, which was used before microwave spectroscopy had become practical. To a first approximation, the rotation and vibration can be treated as separable, so the energy of rotation is added to the energy of vibration. For example, the rotational energy levels for linear molecules (in the rigid-rotor approximation) are F(J) = B J(J + 1).
In this approximation, the vibration-rotation wavenumbers of transitions are ν̃ = ν̃0 + B′J′(J′ + 1) − B″J″(J″ + 1), where B′ and B″ are rotational constants for the upper and lower vibrational state respectively, while J′ and J″ are the rotational quantum numbers of the upper and lower levels. In reality, this expression has to be modified for the effects of anharmonicity of the vibrations, for centrifugal distortion and for Coriolis coupling.
For the so-called R branch of the spectrum, J′ = J″ + 1, so that there is simultaneous excitation of both vibration and rotation. For the P branch, J′ = J″ − 1, so that a quantum of rotational energy is lost while a quantum of vibrational energy is gained. The purely vibrational transition, ΔJ = 0, gives rise to the Q branch of the spectrum. Because of the thermal population of the rotational states the P branch is slightly less intense than the R branch.
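A short sketch of how these branch positions follow from the expression above; the band origin and the two rotational constants are invented round numbers, not data from the text:
nu0 = 2000.0    # hypothetical band origin, cm−1
B_upper = 1.90  # rotational constant of the upper vibrational state, cm−1
B_lower = 1.92  # rotational constant of the lower vibrational state, cm−1

def branch_line(J_lower, branch):
    # wavenumber of the R(J) or P(J) line starting from lower-state level J_lower
    J_upper = J_lower + 1 if branch == "R" else J_lower - 1
    return (nu0 + B_upper * J_upper * (J_upper + 1)
            - B_lower * J_lower * (J_lower + 1))

for J in range(3):
    print("R(%d) = %.2f" % (J, branch_line(J, "R")))
for J in range(1, 4):
    print("P(%d) = %.2f" % (J, branch_line(J, "P")))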
Rotational constants obtained from infrared measurements are in good accord with those obtained by microwave spectroscopy, while the latter usually offers greater precision.
Structure of rotational spectra
Spherical top
Spherical top molecules have no net dipole moment. A pure rotational spectrum cannot be observed by absorption or emission spectroscopy because there is no permanent dipole moment whose rotation can be accelerated by the electric field of an incident photon. Also the polarizability is isotropic, so that pure rotational transitions cannot be observed by Raman spectroscopy either. Nevertheless, rotational constants can be obtained by ro–vibrational spectroscopy. This occurs when a molecule is polar in the vibrationally excited state. For example, the molecule methane is a spherical top but the asymmetric C-H stretching band shows rotational fine structure in the infrared spectrum, illustrated in rovibrational coupling. This spectrum is also interesting because it shows clear evidence of Coriolis coupling in the asymmetric structure of the band.
Linear molecules
The rigid rotor is a good starting point from which to construct a model of a rotating molecule. It is assumed that component atoms are point masses connected by rigid bonds. A linear molecule lies on a single axis and each atom moves on the surface of a sphere around the centre of mass. The two degrees of rotational freedom correspond to the spherical coordinates θ and φ which describe the direction of the molecular axis, and the quantum state is determined by two quantum numbers J and M. J defines the magnitude of the rotational angular momentum, and M its component about an axis fixed in space, such as an external electric or magnetic field. In the absence of external fields, the energy depends only on J. Under the rigid rotor model, the rotational energy levels, F(J), of the molecule can be expressed as,
F(J) = B J(J + 1), where B is the rotational constant of the molecule and is related to its moment of inertia. In a linear molecule the moment of inertia about an axis perpendicular to the molecular axis is unique, that is, IB = IC, so B = h/(8π²cIB).
For a diatomic molecule
I = m1m2d²/(m1 + m2), where m1 and m2 are the masses of the atoms and d is the distance between them.
Selection rules dictate that during emission or absorption the rotational quantum number has to change by unity; i.e., ΔJ = J′ − J″ = ±1. Thus, the locations of the lines in a rotational spectrum will be given by ν̃(J″ → J′) = F(J′) − F(J″) = 2B(J″ + 1), where J″ denotes the lower level and J′ denotes the upper level involved in the transition.
The diagram illustrates rotational transitions that obey the ΔJ = 1 selection rule. The dashed lines show how these transitions map onto features that can be observed experimentally. Adjacent transitions are separated by 2B in the observed spectrum. Frequency or wavenumber units can also be used for the x axis of this plot.
Rotational line intensities
The probability of a transition taking place is the most important factor influencing the intensity of an observed rotational line. This probability is proportional to the population of the initial state involved in the transition. The population of a rotational state depends on two factors. The number of molecules in an excited state with quantum number J, relative to the number of molecules in the ground state, NJ/N0 is given by the Boltzmann distribution as
NJ/N0 = exp(−BhcJ(J + 1)/kT), where k is the Boltzmann constant and T the absolute temperature. This factor decreases as J increases. The second factor is the degeneracy of the rotational state, which is equal to 2J + 1. This factor increases as J increases. Combining the two factors, the relative population of level J is proportional to (2J + 1)exp(−BhcJ(J + 1)/kT).
The maximum relative intensity occurs at Jmax = √(kT/(2hcB)) − 1/2.
The diagram at the right shows an intensity pattern roughly corresponding to the spectrum above it.
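The sketch below works through the same population argument numerically; the rotational constant and temperature are arbitrary illustrative choices:
import math

h = 6.62607015e-34   # J s
c = 2.99792458e10    # cm/s
k = 1.380649e-23     # J/K
B = 2.0              # hypothetical rotational constant, cm−1
T = 300.0            # temperature, K

def relative_population(J):
    # degeneracy factor times Boltzmann factor
    return (2 * J + 1) * math.exp(-B * h * c * J * (J + 1) / (k * T))

J_peak = max(range(30), key=relative_population)
J_analytic = math.sqrt(k * T / (2 * h * c * B)) - 0.5
print(J_peak, round(J_analytic, 2))   # the discrete and analytic estimates roughly agree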
Centrifugal distortion
When a molecule rotates, the centrifugal force pulls the atoms apart. As a result, the moment of inertia of the molecule increases, thus decreasing the value of , when it is calculated using the expression for the rigid rotor. To account for this a centrifugal distortion correction term is added to the rotational energy levels of the diatomic molecule.
F(J) = B J(J + 1) − DJ J²(J + 1)², where DJ is the centrifugal distortion constant.
Therefore, the line positions for the rotational mode change to ν̃(J → J + 1) = 2B(J + 1) − 4DJ(J + 1)³.
In consequence, the spacing between lines is not constant, as in the rigid rotor approximation, but decreases with increasing rotational quantum number.
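A small numerical illustration of the shrinking spacing; the B and D values are exaggerated placeholders chosen only to make the trend visible:
B = 10.0    # hypothetical rotational constant, cm−1
D = 0.005   # hypothetical centrifugal distortion constant, deliberately exaggerated

def line_position(J):
    # wavenumber of the J -> J+1 transition with the distortion correction
    return 2 * B * (J + 1) - 4 * D * (J + 1) ** 3

positions = [line_position(J) for J in range(6)]
spacings = [round(b - a, 2) for a, b in zip(positions, positions[1:])]
print(spacings)   # spacings fall below the constant 2B expected for a rigid rotor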
An assumption underlying these expressions is that the molecular vibration follows simple harmonic motion. In the harmonic approximation the centrifugal constant can be derived as
where k is the vibrational force constant. From this, the relationship between DJ and B, DJ ≈ 4B³/ω̃², where ω̃ is the harmonic vibration frequency expressed as a wavenumber, follows. If anharmonicity is to be taken into account, terms in higher powers of J should be added to the expressions for the energy levels and line positions. A striking example concerns the rotational spectrum of hydrogen fluoride which was fitted to terms up to [J(J+1)]⁵.
Oxygen
The electric dipole moment of the dioxygen molecule, is zero, but the molecule is paramagnetic with two unpaired electrons so that there are magnetic-dipole allowed transitions which can be observed by microwave spectroscopy. The unit electron spin has three spatial orientations with respect to the given molecular rotational angular momentum vector, K, so that each rotational level is split into three states, J = K + 1, K, and K - 1, each J state of this so-called p-type triplet arising from a different orientation of the spin with respect to the rotational motion of the molecule. The energy difference between successive J terms in any of these triplets is about 2 cm−1 (60 GHz), with the single exception of J = 1←0 difference which is about 4 cm−1. Selection rules for magnetic dipole transitions allow transitions between successive members of the triplet (ΔJ = ±1) so that for each value of the rotational angular momentum quantum number K there are two allowed transitions. The 16O nucleus has zero nuclear spin angular momentum, so that symmetry considerations demand that K have only odd values.
Symmetric top
For symmetric rotors a quantum number J is associated with the total angular momentum of the molecule. For a given value of J, there is a 2J+1- fold degeneracy with the quantum number, M taking the values +J ...0 ... -J. The third quantum number, K is associated with rotation about the principal rotation axis of the molecule. In the absence of an external electrical field, the rotational energy of a symmetric top is a function of only J and K and, in the rigid rotor approximation, the energy of each rotational state is given by
F(J, K) = B J(J + 1) + (A − B)K² for a prolate symmetric top molecule, or F(J, K) = B J(J + 1) + (C − B)K² for an oblate molecule, where A, B and C are the rotational constants associated with the corresponding moments of inertia.
This gives the transition wavenumbers as ν̃ = 2B(J + 1), which is the same as in the case of a linear molecule. With a first order correction for centrifugal distortion the transition wavenumbers become ν̃ = 2B(J + 1) − 4DJ(J + 1)³ − 2DJK(J + 1)K². The term in DJK has the effect of removing the K-degeneracy present in the rigid rotor approximation, so that transitions with different K values are no longer coincident.
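A brief sketch of the K-splitting introduced by the DJK term, using invented constants:
B = 5.0       # cm−1
D_J = 1e-4    # cm−1
D_JK = 5e-3   # cm−1; all values invented for illustration

def nu(J, K):
    # J -> J+1 transition wavenumber including the D_J and D_JK corrections
    return 2 * B * (J + 1) - 4 * D_J * (J + 1) ** 3 - 2 * D_JK * (J + 1) * K ** 2

J = 3
for K in range(J + 1):
    print("K = %d: %.4f" % (K, nu(J, K)))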
Asymmetric top
The quantum number J refers to the total angular momentum, as before. Since there are three independent moments of inertia, there are two other independent quantum numbers to consider, but the term values for an asymmetric rotor cannot be derived in closed form. They are obtained by individual matrix diagonalization for each J value. Formulae are available for molecules whose shape approximates to that of a symmetric top.
The water molecule is an important example of an asymmetric top. It has an intense pure rotation spectrum in the far infrared region, below about 200 cm−1. For this reason far infrared spectrometers have to be freed of atmospheric water vapour either by purging with a dry gas or by evacuation. The spectrum has been analyzed in detail.
Quadrupole splitting
When a nucleus has a spin quantum number, I, greater than 1/2 it has a quadrupole moment. In that case, coupling of nuclear spin angular momentum with rotational angular momentum causes splitting of the rotational energy levels. If the quantum number J of a rotational level is greater than I, 2I + 1 levels are produced; but if J is less than I, 2J + 1 levels result. The effect is one type of hyperfine splitting. For example, with 14N (I = 1) in HCN, all levels with J > 0 are split into 3. The energies of the sub-levels are proportional to the nuclear quadrupole moment and are a function of F and J, where F = J + I, J + I − 1, ..., |J − I|. Thus, observation of nuclear quadrupole splitting permits the magnitude of the nuclear quadrupole moment to be determined.
This is an alternative method to the use of nuclear quadrupole resonance spectroscopy. The selection rule for rotational transitions becomes ΔJ = ±1, ΔF = 0, ±1.
Stark and Zeeman effects
In the presence of a static external electric field the degeneracy of each rotational state is partly removed, an instance of a Stark effect. For example, in linear molecules each energy level is split into J + 1 components. The extent of splitting depends on the square of the electric field strength and the square of the dipole moment of the molecule. In principle this provides a means to determine the value of the molecular dipole moment with high precision. An example is carbonyl sulfide, OCS. However, because the splitting depends on μ², the orientation of the dipole must be deduced from quantum mechanical considerations.
A similar removal of degeneracy will occur when a paramagnetic molecule is placed in a magnetic field, an instance of the Zeeman effect. Most species which can be observed in the gaseous state are diamagnetic. Exceptions are odd-electron molecules such as nitric oxide, NO, nitrogen dioxide, some chlorine oxides and the hydroxyl radical. The Zeeman effect has been observed with dioxygen.
Rotational Raman spectroscopy
Molecular rotational transitions can also be observed by Raman spectroscopy. Rotational transitions are Raman-allowed for any molecule with an anisotropic polarizability, which includes all molecules except spherical tops. This means that rotational transitions of molecules with no permanent dipole moment, which cannot be observed in absorption or emission, can be observed, by scattering, in Raman spectroscopy. Very high resolution Raman spectra can be obtained by adapting a Fourier Transform Infrared Spectrometer. An example is the spectrum of . It shows the effect of nuclear spin, resulting in an intensity variation of 3:1 between adjacent lines. A bond length of 109.9985 ± 0.0010 pm was deduced from the data.
Instruments and methods
The great majority of contemporary spectrometers use a mixture of commercially available and bespoke components which users integrate according to their particular needs. Instruments can be broadly categorised according to their general operating principles. Although rotational transitions can be found across a very broad region of the electromagnetic spectrum, fundamental physical constraints exist on the operational bandwidth of instrument components. It is often impractical and costly to switch to measurements within an entirely different frequency region. The instruments and operating principles described below are generally appropriate to microwave spectroscopy experiments conducted at frequencies between 6 and 24 GHz.
Absorption cells and Stark modulation
A microwave spectrometer can be most simply constructed using a source of microwave radiation, an absorption cell into which sample gas can be introduced and a detector such as a superheterodyne receiver. A spectrum can be obtained by sweeping the frequency of the source while detecting the intensity of transmitted radiation. A simple section of waveguide can serve as an absorption cell. An important variation of the technique in which an alternating current is applied across electrodes within the absorption cell results in a modulation of the frequencies of rotational transitions. This is referred to as Stark modulation and allows the use of phase-sensitive detection methods offering improved sensitivity. Absorption spectroscopy allows the study of samples that are thermodynamically stable at room temperature. The first study of the microwave spectrum of a molecule () was performed by Cleeton & Williams in 1934. Subsequent experiments exploited powerful sources of microwaves such as the klystron, many of which were developed for radar during the Second World War. The number of experiments in microwave spectroscopy surged immediately after the war. By 1948, Walter Gordy was able to prepare a review of the results contained in approximately 100 research papers. Commercial versions of microwave absorption spectrometer were developed by Hewlett-Packard in the 1970s and were once widely used for fundamental research. Most research laboratories now exploit either Balle-Flygare or chirped-pulse Fourier transform microwave (FTMW) spectrometers.
Fourier transform microwave (FTMW) spectroscopy
The theoretical framework underpinning FTMW spectroscopy is analogous to that used to describe FT-NMR spectroscopy. The behaviour of the evolving system is described by optical Bloch equations. First, a short (typically 0-3 microsecond duration) microwave pulse is introduced on resonance with a rotational transition. Those molecules that absorb the energy from this pulse are induced to rotate coherently in phase with the incident radiation. De-activation of the polarisation pulse is followed by microwave emission that accompanies decoherence of the molecular ensemble. This free induction decay occurs on a timescale of 1-100 microseconds depending on instrument settings. Following pioneering work by Dicke and co-workers in the 1950s, the first FTMW spectrometer was constructed by Ekkers and Flygare in 1975.
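As a rough illustration of the signal processing involved (a synthetic sketch, not a description of any particular instrument), the snippet below Fourier-transforms an exponentially damped cosine standing in for a free induction decay and reads off the peak frequency; the sampling rate, frequency and decay time are arbitrary:
import numpy as np

sample_rate = 100e6                        # samples per second
t = np.arange(0, 50e-6, 1 / sample_rate)   # 50 microseconds of record

f0 = 2.5e6    # hypothetical down-converted transition frequency, Hz
tau = 10e-6   # decay constant of the emission, s
fid = np.cos(2 * np.pi * f0 * t) * np.exp(-t / tau)   # synthetic free induction decay

spectrum = np.abs(np.fft.rfft(fid))
freqs = np.fft.rfftfreq(len(fid), 1 / sample_rate)
print("peak near %.2f MHz" % (freqs[np.argmax(spectrum)] / 1e6))   # ~2.50 MHz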
Balle–Flygare FTMW spectrometer
Balle, Campbell, Keenan and Flygare demonstrated that the FTMW technique can be applied within a "free space cell" comprising an evacuated chamber containing a Fabry-Perot cavity. This technique allows a sample to be probed only milliseconds after it undergoes rapid cooling to only a few kelvins in the throat of an expanding gas jet. This was a revolutionary development because (i) cooling molecules to low temperatures concentrates the available population in the lowest rotational energy levels. Coupled with benefits conferred by the use of a Fabry-Perot cavity, this brought a great enhancement in the sensitivity and resolution of spectrometers along with a reduction in the complexity of observed spectra; (ii) it became possible to isolate and study molecules that are very weakly bound because there is insufficient energy available for them to undergo fragmentation or chemical reaction at such low temperatures. William Klemperer was a pioneer in using this instrument for the exploration of weakly bound interactions. While the Fabry-Perot cavity of a Balle-Flygare FTMW spectrometer can typically be tuned into resonance at any frequency between 6 and 18 GHz, the bandwidth of individual measurements is restricted to about 1 MHz. An animation illustrates the operation of this instrument which is currently the most widely used tool for microwave spectroscopy.
Chirped-Pulse FTMW spectrometer
Noting that digitisers and related electronics technology had significantly progressed since the inception of FTMW spectroscopy, B.H. Pate at the University of Virginia designed a spectrometer which retains many advantages of the Balle-Flygare FT-MW spectrometer while innovating in (i) the use of a high speed (>4 GS/s) arbitrary waveform generator to generate a "chirped" microwave polarisation pulse that sweeps up to 12 GHz in frequency in less than a microsecond and (ii) the use of a high speed (>40 GS/s) oscilloscope to digitise and Fourier transform the molecular free induction decay. The result is an instrument that allows the study of weakly bound molecules but which is able to exploit a measurement bandwidth (12 GHz) that is greatly enhanced compared with the Balle-Flygare FTMW spectrometer. Modified versions of the original CP-FTMW spectrometer have been constructed by a number of groups in the United States, Canada and Europe. The instrument offers a broadband capability that is highly complementary to the high sensitivity and resolution offered by the Balle-Flygare design.
| Physical sciences | Molecular physics | Physics |
531505 | https://en.wikipedia.org/wiki/Horseshoe%20bat | Horseshoe bat | Horseshoe bats are bats in the family Rhinolophidae. In addition to the single living genus, Rhinolophus, which has about 106 species, the extinct genus Palaeonycteris has been recognized. Horseshoe bats are closely related to the Old World leaf-nosed bats, family Hipposideridae, which have sometimes been included in Rhinolophidae. The horseshoe bats are divided into six subgenera and many species groups. The most recent common ancestor of all horseshoe bats lived 34–40 million years ago, though it is unclear where the geographic roots of the family are, and attempts to determine its biogeography have been indecisive. Their taxonomy is complex, as genetic evidence shows the likely existence of many cryptic species, as well as species recognized as distinct that may have little genetic divergence from previously recognized taxa. They are found in the Old World, mostly in tropical or subtropical areas, including Africa, Asia, Europe, and Oceania.
Horseshoe bats are considered small or medium-sized microbats, weighing , with forearm lengths of and combined lengths of head and body of . The fur, long and smooth in most species, can be reddish-brown, blackish, or bright orange-red. They get their common name from their large nose-leafs, which are shaped like horseshoes. The nose-leafs aid in echolocation; horseshoe bats have highly sophisticated echolocation, using constant frequency calls at high-duty cycles to detect prey in areas of high environmental clutters. They hunt insects and spiders, swooping down on prey from a perch, or gleaning from foliage. Little is known about their mating systems, but at least one species is monogamous, while another is polygynous. Gestation is approximately seven weeks and one offspring is produced at a time. A typical lifespan is six or seven years, but one greater horseshoe bat lived more than thirty years.
Horseshoe bats are relevant to humans in some regions as a source of disease, as food, and for traditional medicine. Several species are the natural reservoirs of various SARS-related coronaviruses, and data strongly suggests they are a reservoir of SARS-CoV, though humans may face more exposure risk from intermediate hosts such as masked palm civets.
They are hunted for food in several regions, particularly sub-Saharan Africa, but also Southeast Asia. Some species or their guano are used in traditional medicine in Nepal, India, Vietnam, and Senegal.
Taxonomy and evolution
Taxonomic history
Rhinolophus was first described as a genus in 1799 by French naturalist Bernard Germain de Lacépède. Initially, all extant horseshoe bats were in Rhinolophus, as well as the species now in Hipposideros (roundleaf bats). At first, Rhinolophus was within the family Vespertilionidae. In 1825, British zoologist John Edward Gray subdivided Vespertilionidae into subfamilies, including what he called Rhinolophina. English zoologist Thomas Bell is credited as the first to recognize horseshoe bats as a separate family, using Rhinolophidae in 1836. While Bell is sometimes recognized as the authority for Rhinolophidae, the authority is more often given as Gray, 1825. Horseshoe bats are in the superfamily Rhinolophoidea, along with Craseonycteridae, Hipposideridae, Megadermatidae, Rhinonycteridae, and Rhinopomatidae.
Attempts were made to divide Rhinolophus into other genera. In 1816, English zoologist William Elford Leach proposed the genus name Phyllorhina; Gray proposed Aquias in 1847 and Phyllotis in 1866; and German naturalist Wilhelm Peters proposed Coelophyllus in 1867. In 1876, Irish zoologist George Edward Dobson returned all Asiatic horseshoe bats to Rhinolophus, additionally proposing the subfamilies Phyllorhininae (for the hipposiderids) and Rhinolophinae. American zoologist Gerrit Smith Miller Jr. further divided the hipposiderids from the horseshoe bats in 1907, recognizing Hipposideridae as a distinct family. Some authors have considered Hipposideros and associated genera as part of Rhinolophidae as recently as the early 2000s, though they are now most often recognized as a separate family. After the split into Rhinolophidae and Hipposideridae, further divisions were proposed for Rhinolophus, with Rhinolphyllotis in 1934 and Rhinomegalophus in 1951, though both additional genera were returned to Rhinolophus.
Danish mammalogist Knud Andersen was the first to propose species groups for Rhinolophus, doing so in 1905. Species groups are a way of clustering species to reflect evolutionary relationships. He recognized six species groups: R. simplex (now R. megaphyllus), R. lepidus, R. midas (now R. hipposideros), R. philippinensis, R. macrotis, and R. arcuatus. The species have been frequently rearranged among the groups as new groups are added, new species are described, and relationships among species are revised. Fifteen species groups were given by Csorba and colleagues in 2003. Various subgenera have been proposed as well, with six listed by Csorba et al. in 2003: Aquias, Phyllorhina, Rhinolophus, Indorhinolophus, Coelophyllus, and Rhinophyllotis. Informally, the rhinolophids can be split into two major clades: the mostly African clade, and the mostly Oriental clade.
Evolutionary history
The most recent common ancestor of Rhinolophus lived an estimated 34–40 million years ago, splitting from the hipposiderid lineage during the Eocene. Fossilized horseshoe bats are known from Europe (early to mid-Miocene, early Oligocene), Australia (Miocene), and Africa (Miocene and late Pliocene). The biogeography of horseshoe bats is poorly understood. Various studies have proposed that the family originated in Europe, Asia, or Africa. A 2010 study supported an Asian or Oriental origin of the family, with rapid evolutionary radiations of the African and Oriental clades during the Oligocene. A 2019 study found that R. xinanzhongguoensis and R. nippon, both Eurasian species, are more closely related to African species than to other Eurasian species, suggesting that rhinolophids may have a complex biogeographical relationship with Asia and the Afrotropics.
A 2016 study using mitochondrial and nuclear DNA placed the horseshoe bats within the Yinpterochiroptera as sister to Hipposideridae.
Rhinolophidae is represented by one extant genus, Rhinolophus. Both the family and the genus are confirmed as monophyletic (containing all descendants of a common ancestor). As of 2019, there were 106 described species in Rhinolophus, making it the second-most speciose genus of bat after Myotis. Rhinolophus may be undersampled in the Afrotropical realm, with one genetic study estimating that there could be up to twelve cryptic species in the region. Additionally, some taxa recognized as full species have been found to have little genetic divergence. Rhinolophus kahuzi may be a synonym for the Ruwenzori horseshoe bat (R. ruwenzorii), and R. gorongosae or R. rhodesiae may be synonyms of the Bushveld horseshoe bat (R. simulator). Additionally, Smithers's horseshoe bat (R. smithersi), Cohen's horseshoe bat (R. cohenae), and the Mount Mabu horseshoe bat (R. mabuensis) all have little genetic divergence from Hildebrandt's horseshoe bat (R. hildebrandtii). Recognizing the former three as full species leaves Hildebrandt's horseshoe bat paraphyletic.
The second genus in Rhinolophidae is the extinct Palaeonycteris, with the type species Palaeonycteris robustus. Palaeonycteris robustus lived during the Lower Miocene and its fossilized remains were found in Saint-Gérand-le-Puy, France.
Description
Appearance
Horseshoe bats are considered small or medium microbats. Individuals have a head and body length ranging and have forearm lengths of . One of the smaller species, the lesser horseshoe bat (R. hipposideros), weighs , while one of the larger species, the greater horseshoe bat (R. ferrumequinum), weighs . Fur color is highly variable among species, ranging from blackish to reddish brown to bright orange-red. The underparts are paler than the back fur. The majority of species have long, soft fur, but the woolly and lesser woolly horseshoe bats (R. luctus and R. beddomei) are unusual in their very long, woolly fur.
Like most bats, horseshoe bats have two mammary glands on their chests. Adult females additionally have two teat-like projections on their abdomens, called pubic nipples or false nipples, which are not connected to mammary glands. Only a few other bat families have pubic nipples, including Hipposideridae, Craseonycteridae, Megadermatidae, and Rhinopomatidae; they serve as attachment points for their offspring. In a few horseshoe bat species, males have a false nipple in each armpit.
Head and teeth
All horseshoe bats have large, leaf-like protuberances on their noses, which are called nose-leafs. The nose-leafs are important in species identification, and are composed of several parts. The front of the nose-leaf resembles and is called a horseshoe, earning them the common name of "horseshoe bats". The horseshoe is above the upper lip and is thin and flat. The lancet is triangular, pointed, and pocketed, and points up between the bats' eyes. The sella is a flat, ridge-like structure at the center of the nose. It rises from behind the nostrils and points out perpendicular from the head. Their ears are large and leaf-shaped, nearly as broad as they are long, and lack tragi. The antitragi of the ears are conspicuous. Their eyes are very small. The skull always has a rostral inflation, or bony protrusion on the snout. The typical dental formula of a horseshoe bat is , but the middle lower premolars are often missing, as well as the anterior upper premolars (premolars towards the front of the mouth). The young lose their milk teeth while still in utero, with the teeth resorbed into the body. They are born with the four permanent canine teeth erupted, which enables them to cling to their mothers. This is atypical among bat families, as most newborns have at least some milk teeth at birth, which are quickly replaced by the permanent set.
Postcrania
Several bones in the thorax are fused—the presternum, first rib, partial second rib, seventh cervical vertebra, first thoracic vertebra—making a solid ring. This fusion is associated with the ability to echolocate while stationary. Except for the first digit, which has two phalanges, all of their toes have three phalanges. This distinguishes them from hipposiderids, which have two phalanges in all toes. The tail is completely enclosed in the uropatagium (tail membrane), and the trailing edge of the uropatagium has calcars (cartilaginous spurs).
Biology and ecology
Echolocation and hearing
Horseshoe bats have very small eyes and their field of vision is limited by their large nose-leafs; thus, vision is unlikely to be a very important sense. Instead, they use echolocation to navigate, employing some of the most sophisticated echolocation of any bat group. To echolocate, they produce sound through their nostrils. While some bats use frequency-modulated echolocation, horseshoe bats use constant-frequency echolocation (also known as single-frequency echolocation). They have high duty cycles, meaning that when individuals are calling, they are producing sound more than 30% of the time. The use of high-duty-cycle, constant-frequency echolocation aids in distinguishing prey items based on size. These echolocation characteristics are typical of bats that search for moving prey items in cluttered environments full of foliage. They echolocate at particularly high frequencies for bats, though not as high as hipposiderids relative to their body sizes, and the majority concentrate most of the echolocation energy into the second harmonic. The king horseshoe bat (R. rex) and the large-eared horseshoe bat (R. philippinensis) are examples of outlier species that concentrate energy into the first harmonic rather than the second. Their highly furrowed nose-leafs likely assist in focusing the emission of sound, reducing the effect of environmental clutter. The nose-leaf in general acts like a parabolic reflector, aiming the produced sound while simultaneously shielding the ear from some of it.
Horseshoe bats have sophisticated senses of hearing due to their well-developed cochlea, and are able to detect Doppler-shifted echoes. This allows them to produce and receive sounds simultaneously. Within horseshoe bats, there is a negative relationship between ear length and echolocation frequency: Species with higher echolocation frequencies tend to have shorter ear lengths. During echolocation, the ears can move independently of each other in a "flickering" motion characteristic of the family, while the head simultaneously moves up and down or side to side.
Diet and foraging
Horseshoe bats are insectivorous, though they also consume other arthropods such as spiders, and employ two main foraging strategies. The first strategy is flying slowly and low over the ground, hunting among trees and bushes. Some species that use this strategy are able to hover over prey and glean them from the substrate. The other strategy is known as perch feeding: Individuals roost on feeding perches and wait for prey to fly past, then fly out to capture it. Foraging usually occurs above the ground. While vesper bats may catch prey in their uropatagia and transfer it to their mouths, horseshoe bats do not use their uropatagia to catch prey. At least one species, the greater horseshoe bat, has been documented catching prey in the tip of its wing by bending the phalanges around it, then transferring it to its mouth. While a majority of horseshoe bats are nocturnal and hunt at night, Blyth's horseshoe bat (R. lepidus) is known to forage during the daytime on Tioman Island. This is hypothesized as a response to a lack of diurnal avian (day-active bird) predators on the island.
They have especially small and rounded wingtips, low wing loading (meaning they have large wings relative to body mass), and high camber. These factors give them increased agility, and they are capable of making quick, tight turns at slow speeds. Relative to all bats, horseshoe bat wingspans are typical for their body sizes, and their aspect ratios, which relate wingspan to wing area, are average or lower than average. Some species, like Rüppell's horseshoe bat (R. fumigatus), Hildebrandt's horseshoe bat, Lander's horseshoe bat (R. landeri), and Swinny's horseshoe bat (R. swinnyi), have particularly large total wing area, though most horseshoe bat species have average wing area.
Reproduction and life cycle
The mating systems of horseshoe bats are poorly understood. A review in 2000 noted that only about 4% of species had published information about their mating systems; along with the free-tailed bats (Molossidae), they had received the least attention of any bat family relative to their species diversity. At least one species, the greater horseshoe bat, appears to have a polygynous mating system where males attempt to establish and defend territories, attracting multiple females. Rhinolophus sedulus, however, is among the few species of bat that are believed to be monogamous (only 17 bat species are recognized as such as of 2000). Some species, particularly temperate species, have an annual breeding season in the fall, while other species mate in the spring. Many horseshoe bat species have the adaptation of delayed fertilization through female sperm storage. This is especially common in temperate species. In hibernating species, the sperm storage timing coincides with hibernation. Other species like Lander's horseshoe bat have embryonic diapause, meaning that while fertilization occurs directly following copulation, the zygote does not implant into the uterine wall for an extended period of time. The greater horseshoe bat has the adaptation of delayed embryonic development, meaning that growth of the embryo is conditionally delayed if the female enters torpor. This causes the interval between fertilization and birth to vary between two and three months. Gestation takes approximately seven weeks before a single offspring is born, called a pup. Individuals reach sexual maturity by age two. While lifespans typically do not exceed six or seven years, some individuals may have extraordinarily long lives. A greater horseshoe bat individual was once banded and then rediscovered thirty years later.
Behavior and social systems
Various levels of sociality are seen in horseshoe bats. Some species are solitary, with individuals roosting alone, while others are highly colonial, forming aggregations of thousands of individuals. The majority of species are moderately social. In some species, the sexes segregate annually when females form maternity colonies, though the sexes remain together all year in others. Individuals hunt solitarily. Because their hind limbs are poorly developed, they cannot scuttle on flat surfaces nor climb adeptly like other bats.
Horseshoe bats enter torpor to conserve energy. During torpor, their body temperature drops to as low as and their metabolic rates slow. Torpor is employed by horseshoe bats in temperate, sub-tropical, and tropical regions. Torpor has a short duration; when torpor is employed consistently for days, weeks, or months, it is known as hibernation. Hibernation is used by horseshoe bats in temperate regions during the winter months.
Predators and parasites
Overall, bats have few natural predators. Horseshoe bat predators include birds in the order Accipitriformes (hawks, eagles, and kites), as well as falcons and owls. Snakes may also prey on some species while they roost in caves, and domestic cats may hunt them as well. A 2019 study near a colony of bats in central Italy found that 30% of examined cat feces contained the remains of greater horseshoe bats.
Horseshoe bats have a variety of internal and external parasites. External parasites (ectoparasites) include mites in the genus Eyndhovenia, "bat flies" of the families Streblidae and Nycteribiidae, ticks of the genus Ixodes, and fleas of the genus Rhinolophopsylla. They are also affected by a variety of internal parasites (endoparasites), including trematodes of the genera Lecithodendrium, Plagiorchis, Prosthodendrium, and cestodes of the genus Potorolepsis.
Range and habitat
Horseshoe bats have a mostly Paleotropical distribution, though some species are in the southern Palearctic realm. They are found in the Old World, including Africa, Australia, Asia, Europe, and Oceania. The greater horseshoe bat has the greatest geographic range of any horseshoe bat, occurring across Europe, North Africa, Japan, China, and southern Asia. Other species are much more restricted, like the Andaman horseshoe bat (R. cognatus), which is only found on the Andaman Islands. They roost in a variety of places, including buildings, caves, tree hollows, and foliage. They occur in both forested and unforested habitat, with the majority of species occurring in tropical or subtropical areas. For the species that hibernate, they select caves with an ambient temperature of approximately .
Relationship to humans
As disease reservoirs
Coronaviruses
Horseshoe bats are of particular interest to public health and zoonosis as a source of coronaviruses.
Following the 2002–2004 SARS outbreak, several animal species were examined as possible natural reservoirs of the causative coronavirus, SARS-CoV. From 2003 to 2018, forty-seven SARS-related coronaviruses were detected in bats, most of them in horseshoe bats. In 2019, a wet market in Wuhan, China, was linked to the outbreak of SARS-CoV-2. Genetic analyses of SARS-CoV-2 showed that it was highly similar to viruses found in horseshoe bats.
After the SARS outbreak, the least horseshoe bat (R. pusillus) was seropositive, the greater horseshoe bat tested positive for the virus only, and the big-eared horseshoe bat (R. macrotis), Chinese rufous horseshoe bat (R. sinicus), and Pearson's horseshoe bat (R. pearsoni) were both seropositive and tested positive for the virus. The bats' viruses were highly similar to SARS-CoV, with 88–92% similarity. Intraspecies diversity of SARS-like coronaviruses appears to have arisen in Rhinolophus sinicus by homologous recombination. R. sinicus likely harbored the direct ancestor of SARS-CoV in humans. Though horseshoe bats appeared to be the natural reservoir of SARS-related coronaviruses, humans likely became sick through contact with infected masked palm civets, which were identified as intermediate hosts of the virus.
During the period from 2003 to 2018, forty-seven SARS-related coronaviruses were detected in bats, forty-five in horseshoe bats. Thirty SARS-related coronaviruses were from Chinese rufous horseshoe bats, nine from greater horseshoe bats, two from big-eared horseshoe bats, two from the least horseshoe bat, and one each from the intermediate horseshoe bat (R. affinis), Blasius's horseshoe bat (R. blasii), Stoliczka's trident bat (Aselliscus stoliczkanus), and the wrinkle-lipped free-tailed bat (Chaerephon plicata).
SARS-CoV-2, first detected in connection with the market in Wuhan, is about 96% similar to a virus isolated from the intermediate horseshoe bat. Research on the evolutionary origins of SARS-CoV-2 indicates that bats were the natural reservoirs of the virus. It is yet unclear how the virus was transmitted to humans, though an intermediate host may have been involved. The intermediate host was once believed to be the Sunda pangolin, but a July 2020 publication found no evidence of transmission from pangolins to humans.
Other viruses
Horseshoe bats are also associated with viruses such as orthoreoviruses, flaviviruses, and hantaviruses. They have tested positive for Mammalian orthoreovirus (MRV), including a type 1 MRV isolated from the lesser horseshoe bat and a type 2 MRV isolated from the least horseshoe bat. The specific MRVs found in horseshoe bats have not been linked to human infection, though humans can become ill through exposure to other MRVs. The rufous horseshoe bat (R. rouxii) has tested seropositive for Kyasanur Forest disease, which is a tick-borne viral hemorrhagic fever known from southern India. Kyasanur Forest disease is transmitted to humans through the bite of infected ticks, and has a mortality rate of 2–10%. Longquan virus, a kind of hantavirus, has been detected in the intermediate horseshoe bat, Chinese rufous horseshoe bat, and the little Japanese horseshoe bat (R. cornutus).
As food and medicine
Microbats are not hunted nearly as intensely as megabats: only 8% of insectivorous species are hunted for food, compared to half of all megabat species in the Old World tropics. Horseshoe bats are hunted for food, particularly in sub-Saharan Africa. Species hunted in Africa include the halcyon horseshoe bat (R. alcyone), Guinean horseshoe bat (R. guineensis), Hill's horseshoe bat (R. hilli), Hills' horseshoe bat (R. hillorum), Maclaud's horseshoe bat (R. maclaudi), the Ruwenzori horseshoe bat, the forest horseshoe bat (R. silvestris), and the Ziama horseshoe bat (R. ziama). In Southeast Asia, Marshall's horseshoe bat (R. marshalli) is consumed in Myanmar and the large rufous horseshoe bat (R. rufus) is consumed in the Philippines.
The Ao Naga people of Northeast India are reported to use the flesh of horseshoe bats to treat asthma. Ecological anthropologist Will Tuladhar-Douglas stated that the Newar people of Nepal "almost certainly" use horseshoe bats, among other species, to prepare Cikā Lāpa Wasa ("bat oil"). Dead bats are rolled up and placed in tightly sealed jars of mustard oil; the oil is ready when it gives off a distinct and unpleasant smell. Traditional medicinal uses of the bat oil include removing "earbugs", reported to be millipedes that crawl into one's ears and gnaw at the brain, possibly a traditional explanation of migraines. It is also used as a purported treatment for baldness and partial paralysis. In Senegal, there are anecdotal reports of horseshoe bats being used in potions to treat mental illness; in Vietnam, a pharmaceutical company reported using horseshoe bat guano each year for medicinal purposes.
Conservation
As of 2023, the IUCN had evaluated 94 species of horseshoe bat. They have the following IUCN statuses:
Critically endangered: 1 species (Hill's horseshoe bat)
Endangered: 13 species
Vulnerable: 5 species
Near threatened: 9 species
Least concern: 51 species
Data deficient: 15 species
Like all cave-roosting bats, cave-roosting horseshoe bats are vulnerable to disturbance of their cave habitats. Disturbance can include mining bat guano, quarrying limestone, and cave tourism.
| Biology and health sciences | Bats | Animals |
531611 | https://en.wikipedia.org/wiki/Foodborne%20illness | Foodborne illness | Foodborne illness (also known as foodborne disease and food poisoning) is any illness resulting from the contamination of food by pathogenic bacteria, viruses, or parasites, as well as prions (the agents of mad cow disease), and toxins such as aflatoxins in peanuts, poisonous mushrooms, and various species of beans that have not been boiled for at least 10 minutes.
Symptoms vary depending on the cause. They often include vomiting, fever, and aches, and may include diarrhea. Bouts of vomiting can be repeated with an extended delay in between. This is because even if infected food was eliminated from the stomach in the first bout, microbes, like bacteria (if applicable), can pass through the stomach into the intestine and begin to multiply. Some types of microbes stay in the intestine.
For contaminants requiring an incubation period, symptoms may not manifest for hours to days, depending on the cause and on the quantity of consumption. Longer incubation periods tend to cause those affected to not associate the symptoms with the item consumed, so they may misattribute the symptoms to gastroenteritis, for example.
Causes
Foodborne disease can be caused by a number of bacteria, such as Campylobacter jejuni, and chemicals, such as pesticides, medicines, and natural toxic substances, such as vomitoxin, poisonous mushrooms, or reef fish.
Foodborne illness usually arises from improper handling, preparation, or food storage. Good hygiene practices before, during, and after food preparation can reduce the chances of contracting an illness. There is a consensus in the public health community that regular hand-washing is one of the most effective defenses against the spread of foodborne illness. The action of monitoring food to ensure that it will not cause foodborne illness is known as food safety.
Bacteria
Bacteria are a common cause of foodborne illness. In 2000, the United Kingdom reported the individual bacteria involved as the following: Campylobacter jejuni 77.3%, Salmonella 20.9%, Escherichia coli O157:H7 1.4%, and all others less than 0.56%.
In the past, bacterial infections were thought to be more prevalent because few places had the capability to test for norovirus and no active surveillance was being done for this particular agent. Toxins from bacterial infections are delayed because the bacteria need time to multiply. As a result, symptoms associated with intoxication are usually not seen until 12–72 hours or more after eating contaminated food. However, in some cases, such as Staphylococcal food poisoning, the onset of illness can be as soon as 30 minutes after ingesting contaminated food.
A 2022 study concluded that washing uncooked chicken could increase the risk of pathogen transfer, and that specific washing conditions can decrease the risk of transfer.
Most common bacterial foodborne pathogens are:
Campylobacter jejuni which can lead to secondary Guillain–Barré syndrome and periodontitis
Clostridium perfringens, the "cafeteria germ"
Salmonella spp. – its S. typhimurium infection is caused by consumption of eggs or poultry that are not adequately cooked or by other interactive human-animal pathogens
Escherichia coli O157:H7 enterohemorrhagic (EHEC) which can cause hemolytic-uremic syndrome
Other common bacterial foodborne pathogens are:
Bacillus cereus
Escherichia coli, other virulence properties, such as enteroinvasive (EIEC), enteropathogenic (EPEC), enterotoxigenic (ETEC), enteroaggregative (EAEC or EAgEC)
Listeria monocytogenes
Shigella spp.
Staphylococcus aureus
Streptococcus
Vibrio cholerae, including O1 and non-O1
Vibrio parahaemolyticus
Vibrio vulnificus
Yersinia enterocolitica and Yersinia pseudotuberculosis
Less common bacterial agents:
Brucella spp.
Corynebacterium ulcerans
Coxiella burnetii or Q fever
Plesiomonas shigelloides
Enterotoxins
In addition to disease caused by direct bacterial infection, some foodborne illnesses are caused by enterotoxins (exotoxins targeting the intestines). Enterotoxins can produce illness even when the microbes that produced them have been killed. Symptom onset varies with the toxin but may be rapid, as in the case of enterotoxins of Staphylococcus aureus, in which symptoms appear within one to six hours. These toxins cause intense vomiting, with or without diarrhea (resulting in staphylococcal enteritis). Staphylococcal enterotoxins (most commonly staphylococcal enterotoxin A, but also staphylococcal enterotoxin B) are the most commonly reported enterotoxins, although cases of poisoning are likely underestimated. Staphylococcal food poisoning occurs mainly in cooked and processed foods, because of competition with other biota in raw foods, and humans are the main source of contamination, as a substantial percentage of people are persistent carriers of S. aureus. The CDC has estimated about 240,000 cases per year in the United States.
Clostridium botulinum
Clostridium perfringens
Bacillus cereus
The rare but potentially deadly disease botulism occurs when the anaerobic bacterium Clostridium botulinum grows in improperly canned low-acid foods and produces botulin, a powerful paralytic toxin.
Pseudoalteromonas tetraodonis, certain species of Pseudomonas and Vibrio, and some other bacteria, produce the lethal tetrodotoxin, which is present in the tissues of some living animal species rather than being a product of decomposition.
Emerging foodborne pathogens
Aeromonas hydrophila, Aeromonas caviae, Aeromonas sobria
Scandinavian outbreaks of Yersinia enterocolitica have recently become an annual occurrence, linked to an unusual route of contamination via pre-washed salad.
Preventing bacterial food poisoning
Governments have the primary mandate of ensuring safe food for all; however, all actors in the food chain are responsible for ensuring that only safe food reaches the consumer, thus preventing foodborne illnesses. This is achieved through the implementation of strict hygiene rules and a public veterinary and phytosanitary service that monitors animal products throughout the food chain, from farming to delivery in shops and restaurants. This regulation includes:
traceability: the origin of the ingredients (farm of origin, identification of the crop or animal) and where and when it has been processed must be known in the final product; in this way, the origin of the disease can be traced and resolved (and possibly penalized), and the final products can be removed from sale if a problem is detected;
enforcement of hygiene procedures such as HACCP and the "cold chain";
power of control and of law enforcement of veterinarians.
In August 2006, the United States Food and Drug Administration approved the spraying of meat with bacteriophages, viruses that infect and kill bacteria, to prevent infection. This has raised concerns because, without mandatory labeling, consumers would not know that meat and poultry products have been treated with the spray.
At home, prevention mainly consists of good food safety practices. Many forms of bacterial poisoning can be prevented by cooking food sufficiently, and either eating it quickly or refrigerating it effectively. Many toxins, however, are not destroyed by heat treatment.
Techniques that help prevent foodborne illness in the kitchen are hand washing, rinsing produce, preventing cross-contamination, proper storage, and maintaining cooking temperatures. In general, freezing or refrigerating prevents virtually all bacteria from growing, and heating food sufficiently kills parasites, viruses, and most bacteria. Bacteria grow most rapidly within a range of temperatures known as the "danger zone"; storing food below or above this range can effectively limit the production of toxins. Leftovers must be put in shallow containers for quick cooling and must be refrigerated within two hours. When food is reheated, it must reach a high enough internal temperature, or be hot and steaming throughout, to kill bacteria.
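The exact temperature thresholds are not stated above, so the figures in the following sketch (a 4–60 °C "danger zone" and a 74 °C reheating target) are assumed values taken from common food-safety guidance, used only to make the storage and reheating rules concrete; the two-hour refrigeration window does come from the text.

```python
# Illustrative food-storage checks. The danger-zone bounds (4-60 C) and the
# reheating target (74 C) are ASSUMED values from common food-safety guidance,
# not figures quoted in the text; the two-hour window is from the text.

DANGER_ZONE_C = (4.0, 60.0)   # assumed bacterial growth range ("danger zone")
REHEAT_TARGET_C = 74.0        # assumed safe internal reheating temperature
MAX_COOLING_HOURS = 2.0       # leftovers should be refrigerated within two hours

def in_danger_zone(temp_c: float) -> bool:
    """Return True if a storage temperature falls inside the danger zone."""
    low, high = DANGER_ZONE_C
    return low <= temp_c <= high

def leftovers_ok(hours_at_room_temp: float, fridge_temp_c: float) -> bool:
    """Leftovers are acceptable if cooled promptly and stored below the zone."""
    return hours_at_room_temp <= MAX_COOLING_HOURS and fridge_temp_c < DANGER_ZONE_C[0]

def reheated_safely(internal_temp_c: float) -> bool:
    """Reheated food should reach the assumed target internal temperature."""
    return internal_temp_c >= REHEAT_TARGET_C

if __name__ == "__main__":
    print(in_danger_zone(25.0))     # True: room temperature favours bacterial growth
    print(leftovers_ok(1.5, 3.0))   # True: cooled within two hours, stored cold
    print(reheated_safely(76.0))    # True: above the assumed reheat target
```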
Mycotoxins and alimentary mycotoxicoses
The term alimentary mycotoxicosis refers to the effect of poisoning by mycotoxins through food consumption. The term mycotoxin is usually reserved for the toxic chemical compounds naturally produced by fungi that readily colonize crops under given temperature and moisture conditions. Mycotoxins can have important effects on human and animal health. For example, an outbreak which occurred in the UK in 1960 caused the death of 100,000 turkeys which had consumed aflatoxin-contaminated peanut meal. In the USSR during World War II, 5,000 people died of alimentary toxic aleukia (ATA). In Kenya, mycotoxins led to the death of 125 people in 2004, after consumption of contaminated grain. In animals, mycotoxicosis targets organ systems such as the liver and the digestive system. Other effects can include reduced productivity and suppression of the immune system, predisposing the animals to secondary infections.
The common foodborne Mycotoxins include:
Aflatoxins – originating from Aspergillus parasiticus and Aspergillus flavus. They are frequently found in tree nuts, peanuts, maize, sorghum and other oilseeds, including corn and cottonseeds. The pronounced forms of aflatoxins are B1, B2, G1, and G2, among which aflatoxin B1 predominantly targets the liver, resulting in necrosis, cirrhosis, and carcinoma. Other forms of aflatoxins exist as metabolites, such as aflatoxin M1. In the US, the acceptable level of total aflatoxins in foods is less than 20 μg/kg, except for aflatoxin M1 in milk, which should be less than 0.5 μg/kg; the official document can be found on the FDA's website. The European Union has more stringent standards, set at 10 μg/kg in cereals and cereal products. These references are also adopted in other countries. A worked check against these and the other limits quoted below appears in the sketch after this list.
Altertoxins – are those of alternariol (AOH), alternariol methyl ether (AME), altenuene (ALT), altertoxin-1 (ATX-1), tenuazonic acid (TeA), and radicinin (RAD), originating from Alternaria spp. Some of the toxins can be present in sorghum, ragi, wheat and tomatoes. Some research has shown that the toxins can easily cross-contaminate grain commodities, suggesting that the handling and storage of grain commodities is a critical control point.
Citrinin
Citreoviridin
Cyclopiazonic acid
Cytochalasins
Ergot alkaloids / ergopeptine alkaloids – ergotamine
Fumonisins – Crop corn can be easily contaminated by the fungus Fusarium moniliforme, and its fumonisin B1 will cause leukoencephalomalacia (LEM) in horses, pulmonary edema syndrome (PES) in pigs, liver cancer in rats and esophageal cancer in humans. For human and animal health, both the FDA and the EC have regulated the content levels of toxins in food and animal feed.
Fusaric acid
Fusarochromanone
Kojic acid
Lolitrem alkaloids
Moniliformin
3-Nitropropionic acid
Nivalenol
Ochratoxins – In Australia, the Limit of Reporting (LOR) level for ochratoxin A (OTA) analyses in the 20th Australian Total Diet Survey was 1 μg/kg, whereas the EC restricts the content of OTA to 5 μg/kg in cereal commodities, 3 μg/kg in processed products and 10 μg/kg in dried vine fruits.
Oosporeine
Patulin – This toxin is currently regulated in fruit products. The EC and the FDA have limited it to under 50 μg/kg for fruit juice and fruit nectar, while limits of 25 μg/kg for solid fruit products and 10 μg/kg for baby foods were specified by the EC.
Phomopsins
Sporidesmin A
Sterigmatocystin
Tremorgenic mycotoxins – Five of them have been reported to be associated with molds found in fermented meats. These are fumitremorgen B, paxilline, penitrem A, verrucosidin, and verruculogen.
Trichothecenes – sourced from Cephalosporium, Fusarium, Myrothecium, Stachybotrys, and Trichoderma. The toxins are usually found in molded maize, wheat, corn, peanuts and rice, or in animal feed of hay and straw. Four trichothecenes, T-2 toxin, HT-2 toxin, diacetoxyscirpenol (DAS), and deoxynivalenol (DON), have been most commonly encountered by humans and animals. Oral intake of, or dermal exposure to, the toxins can result in alimentary toxic aleukia, neutropenia, aplastic anemia, thrombocytopenia and/or skin irritation. In 1993, the FDA issued an advisory document setting content limits for DON in food and animal feed. In 2003, a US patent was published describing an approach intended to let farmers produce trichothecene-resistant crops.
Zearalenone
Zearalenols
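As a rough illustration of how the limits quoted in the list above could be applied, the following sketch collects them into a lookup table and checks a measured concentration against the relevant limit. The grouping labels are simplified for illustration and are not official regulatory categories.

```python
# Minimal sketch of a limit check for the regulatory figures quoted in the list
# above (all in micrograms per kilogram). The (toxin, commodity) keys are
# simplified labels chosen for illustration, not official categories.

LIMITS_UG_PER_KG = {
    ("aflatoxin_total", "US food"): 20.0,
    ("aflatoxin_M1", "US milk"): 0.5,
    ("aflatoxin_total", "EU cereals"): 10.0,
    ("ochratoxin_A", "EC cereals"): 5.0,
    ("ochratoxin_A", "EC processed products"): 3.0,
    ("ochratoxin_A", "EC dried vine fruit"): 10.0,
    ("patulin", "fruit juice"): 50.0,
    ("patulin", "solid fruit products"): 25.0,
    ("patulin", "EC baby foods"): 10.0,
}

def within_limit(toxin: str, commodity: str, measured_ug_per_kg: float) -> bool:
    """Return True if a measured concentration is at or below the quoted limit."""
    limit = LIMITS_UG_PER_KG[(toxin, commodity)]
    return measured_ug_per_kg <= limit

if __name__ == "__main__":
    print(within_limit("aflatoxin_M1", "US milk", 0.3))   # True: below 0.5 ug/kg
    print(within_limit("patulin", "fruit juice", 62.0))   # False: above 50 ug/kg
```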
Viruses
Viral infections make up perhaps one third of cases of food poisoning in developed countries. In the US, more than 50% of cases are viral, and noroviruses are the most common cause of foodborne illness, causing 57% of outbreaks in 2004. Foodborne viral infections usually have an intermediate (1–3 day) incubation period and cause illnesses that are self-limited in otherwise healthy individuals; they are similar to the bacterial forms described above.
Enterovirus
Hepatitis A is distinguished from other viral causes by its prolonged (2–6 week) incubation period and its ability to spread beyond the stomach and intestines into the liver. It often results in jaundice, or yellowing of the skin, but rarely leads to chronic liver dysfunction. The virus has been found to cause infection due to the consumption of fresh-cut produce which has fecal contamination.
Hepatitis E
Norovirus
Rotavirus
Parasites
Most foodborne parasites are zoonoses.
Platyhelminthes:
Diphyllobothrium sp.
Nanophyetus sp.
Taenia saginata
Taenia solium
Fasciola hepatica
| Biology and health sciences | Infectious disease | null |
531911 | https://en.wikipedia.org/wiki/Laser%20cutting | Laser cutting | Laser cutting is a technology that uses a laser to vaporize materials, resulting in a cut edge. While typically used for industrial manufacturing applications, it is now used by schools, small businesses, architecture, and hobbyists. Laser cutting works by directing the output of a high-power laser most commonly through optics. The laser optics and CNC (computer numerical control) are used to direct the laser beam to the material. A commercial laser for cutting materials uses a motion control system to follow a CNC or G-code of the pattern to be cut onto the material. The focused laser beam is directed at the material, which then either melts, burns, vaporizes away, or is blown away by a jet of gas, leaving an edge with a high-quality surface finish.
History
In 1965, the first production laser cutting machine was used to drill holes in diamond dies. This machine was made by the Western Electric Engineering Research Center. In 1967, the British pioneered laser-assisted oxygen jet cutting for metals. In the early 1970s, this technology was put into production to cut titanium for aerospace applications. At the same time, CO2 lasers were adapted to cut non-metals, such as textiles, because, at the time, CO2 lasers were not powerful enough to overcome the thermal conductivity of metals.
Process
The laser beam is generally focused using a high-quality lens on the work zone. The quality of the beam has a direct impact on the focused spot size. The narrowest part of the focused beam is generally a fraction of a millimetre in diameter, and depending upon the material thickness, very narrow kerf widths are possible. In order to be able to start cutting from somewhere other than the edge, a pierce is done before every cut. Piercing usually involves a high-power pulsed laser beam which slowly makes a hole in the material, taking around 5–15 seconds for stainless steel, for example.
The parallel rays of coherent light from the laser source are much wider than the final focused spot. This beam is normally focused and intensified by a lens or a mirror to a very small spot, creating a very intense laser beam. In order to achieve the smoothest possible finish during contour cutting, the direction of the beam polarization must be rotated as it goes around the periphery of a contoured workpiece. For sheet metal cutting, a relatively short focal length is usually used.
Advantages of laser cutting over mechanical cutting include easier work holding and reduced contamination of workpiece (since there is no cutting edge which can become contaminated by the material or contaminate the material). Precision may be better since the laser beam does not wear during the process. There is also a reduced chance of warping the material that is being cut, as laser systems have a small heat-affected zone. Some materials are also very difficult or impossible to cut by more traditional means.
Laser cutting for metals has the advantage over plasma cutting of being more precise and using less energy when cutting sheet metal; however, most industrial lasers cannot cut through the greater metal thickness that plasma can. Newer laser machines operating at higher power (6000 watts, as contrasted with early laser cutting machines' 1500-watt ratings) are approaching plasma machines in their ability to cut through thick materials, but the capital cost of such machines is much higher than that of plasma cutting machines capable of cutting thick materials like steel plate.
Types
There are three main types of lasers used in laser cutting. The CO2 laser is suited for cutting, boring, and engraving. The neodymium (Nd) and neodymium yttrium-aluminium-garnet (Nd:YAG) lasers are identical in style and differ only in application: Nd is used for boring and where high energy but low repetition rates are required, while Nd:YAG is used where very high power is needed and for boring and engraving. Both CO2 and Nd/Nd:YAG lasers can be used for welding.
CO2 lasers are commonly "pumped" by passing a current through the gas mix (DC-excited) or using radio frequency energy (RF-excited). The RF method is newer and has become more popular. Since DC designs require electrodes inside the cavity, they can encounter electrode erosion and plating of electrode material on glassware and optics. Since RF resonators have external electrodes, they are not prone to those problems.
CO2 lasers are used for the industrial cutting of many materials including titanium, stainless steel, mild steel, aluminium, plastic, wood, engineered wood, wax, fabrics, and paper. YAG lasers are primarily used for cutting and scribing metals and ceramics.
In addition to the power source, the type of gas flow can affect performance as well. Common variants of CO2 lasers include fast axial flow, slow axial flow, transverse flow, and slab. In a fast axial flow resonator, the mixture of carbon dioxide, helium, and nitrogen is circulated at high velocity by a turbine or blower. Transverse flow lasers circulate the gas mix at a lower velocity, requiring a simpler blower. Slab or diffusion-cooled resonators have a static gas field that requires no pressurization or glassware, leading to savings on replacement turbines and glassware.
The laser generator and external optics (including the focus lens) require cooling. Depending on system size and configuration, waste heat may be transferred by a coolant or directly to air. Water is a commonly used coolant, usually circulated through a chiller or heat transfer system.
A laser microjet is a water-jet-guided laser in which a pulsed laser beam is coupled into a low-pressure water jet. This is used to perform laser cutting functions while using the water jet to guide the laser beam, much like an optical fiber, through total internal reflection. The advantages of this are that the water also removes debris and cools the material. Additional advantages over traditional "dry" laser cutting are high dicing speeds, parallel kerf, and omnidirectional cutting.
Fiber lasers are a type of solid-state laser that is rapidly growing within the metal cutting industry. Unlike CO2 lasers, fiber technology uses a solid gain medium rather than a gas or liquid: a "seed laser" produces the beam, which is then amplified within a glass fiber. With a wavelength of only 1064 nanometers, fiber lasers produce an extremely small spot size (up to 100 times smaller than that of a CO2 laser), making them ideal for cutting reflective metal materials. This is one of the main advantages of fiber compared to CO2.
Fiber laser cutter benefits include:
Rapid processing times.
Reduced energy consumption and bills, due to greater efficiency.
Greater reliability and performance: no optics to adjust or align and no lamps to replace.
Minimal maintenance.
The ability to process highly reflective materials such as copper and brass.
Higher productivity: lower operational costs offer a greater return on investment.
Methods
There are many different methods of cutting using lasers, with different types used to cut different materials. Some of the methods are vaporization, melt and blow, melt blow and burn, thermal stress cracking, scribing, cold cutting, and burning stabilized laser cutting.
Vaporization cutting
In vaporization cutting, the focused beam heats the surface of the material to boiling point and generates a keyhole. The keyhole leads to a sudden increase in absorptivity, quickly deepening the hole. As the hole deepens and the material boils, the vapor generated erodes the molten walls, blowing ejecta out and further enlarging the hole. Non-melting materials such as wood, carbon, and thermoset plastics are usually cut by this method.
Melt and blow
Melt and blow or fusion cutting uses high-pressure gas to blow molten material from the cutting area, greatly decreasing the power requirement. First, the material is heated to melting point then a gas jet blows the molten material out of the kerf avoiding the need to raise the temperature of the material any further. Materials cut with this process are usually metals.
Thermal stress cracking
Brittle materials are particularly sensitive to thermal fracture, a feature exploited in thermal stress cracking. A beam is focused on the surface, causing localized heating and thermal expansion. This results in a crack that can then be guided by moving the beam. The crack can be moved at speeds on the order of m/s. It is usually used in the cutting of glass.
Stealth dicing of silicon wafers
The separation of microelectronic chips as prepared in semiconductor device fabrication from silicon wafers may be performed by the so-called stealth dicing process, which operates with a pulsed Nd:YAG laser, the wavelength of which (1064 nm) is well adapted to the electronic band gap of silicon (1.11 eV or 1117 nm).
Reactive cutting
Reactive cutting is also called "burning stabilized laser gas cutting" and "flame cutting". Reactive cutting is like oxygen torch cutting but with a laser beam as the ignition source. It is mostly used for cutting carbon steel in thicknesses over 1 mm. This process can be used to cut very thick steel plates with relatively little laser power.
Tolerances and surface finish
Laser cutters have a positioning accuracy of 10 micrometers and repeatability of 5 micrometers.
Standard roughness Rz increases with the sheet thickness, but decreases with laser power and cutting speed. When cutting low carbon steel with laser power of 800 W, standard roughness Rz is 10 μm for sheet thickness of 1 mm, 20 μm for 3 mm, and 25 μm for 6 mm.
The roughness depends on the steel sheet thickness (in mm), the laser power (in kW; some new laser cutters have a laser power of 4 kW), and the cutting speed (in meters per minute).
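As the underlying empirical roughness formula is not given here, the following sketch simply interpolates between the three data points quoted above for 800 W cutting of low-carbon steel; it illustrates the trend of Rz with thickness rather than the actual formula.

```python
# Illustrative interpolation of the surface-roughness figures quoted above for
# low-carbon steel cut at 800 W (Rz of 10, 20 and 25 um at 1, 3 and 6 mm).
# This is a piecewise-linear fit to those three points only, NOT the underlying
# empirical roughness formula, which is not reproduced in the text.

RZ_DATA_800W = [(1.0, 10.0), (3.0, 20.0), (6.0, 25.0)]  # (thickness mm, Rz um)

def estimate_rz(thickness_mm: float) -> float:
    """Linearly interpolate Rz between the quoted data points."""
    points = RZ_DATA_800W
    if thickness_mm <= points[0][0]:
        return points[0][1]
    if thickness_mm >= points[-1][0]:
        return points[-1][1]
    for (t0, r0), (t1, r1) in zip(points, points[1:]):
        if t0 <= thickness_mm <= t1:
            frac = (thickness_mm - t0) / (t1 - t0)
            return r0 + frac * (r1 - r0)

if __name__ == "__main__":
    print(estimate_rz(2.0))  # 15.0 um, halfway between the 1 mm and 3 mm figures
```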
This process is capable of holding quite close tolerances, often to within 0.001 inch (0.025 mm). Part geometry and the mechanical soundness of the machine have much to do with tolerance capabilities. The typical surface finish resulting from laser beam cutting may range from 125 to 250 micro-inches (0.003 mm to 0.006 mm).
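The imperial-to-metric figures quoted above can be checked with a single conversion factor (1 inch = 25.4 mm exactly); a small sketch:

```python
# Check of the unit conversions quoted above (1 inch = 25.4 mm exactly).

MM_PER_INCH = 25.4

def inches_to_mm(inches: float) -> float:
    """Convert a length in inches to millimetres."""
    return inches * MM_PER_INCH

if __name__ == "__main__":
    print(inches_to_mm(0.001))   # ~0.0254 mm, the quoted 0.025 mm tolerance
    print(inches_to_mm(125e-6))  # ~0.0032 mm, the quoted lower finish bound
    print(inches_to_mm(250e-6))  # ~0.0064 mm, the quoted upper finish bound
```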
Machine configurations
There are generally three different configurations of industrial laser cutting machines: moving material, hybrid, and flying optics systems. These refer to the way that the laser beam is moved over the material to be cut or processed. For all of these, the axes of motion are typically designated the X and Y axes. If the cutting head can also be moved vertically, that motion is designated the Z-axis.
Moving material lasers have a stationary cutting head and move the material under it. This method provides a constant distance from the laser generator to the workpiece and a single point from which to remove cutting effluent. It requires fewer optics but requires moving the workpiece. This style of machine tends to have the fewest beam delivery optics but also tends to be the slowest.
Hybrid lasers provide a table that moves in one axis (usually the X-axis) and moves the head along the shorter (Y) axis. This results in a more constant beam delivery path length than a flying optic machine and may permit a simpler beam delivery system. This can result in reduced power loss in the delivery system and more capacity per watt than flying optics machines.
Flying optics lasers feature a stationary table and a cutting head (with a laser beam) that moves over the workpiece in both of the horizontal dimensions. Flying optics cutters keep the workpiece stationary during processing and often do not require material clamping. The moving mass is constant, so dynamics are not affected by varying the size of the workpiece. Flying optics machines are the fastest type, which is advantageous when cutting thinner workpieces.
Flying optic machines must use some method to take into account the changing beam length from the near field (close to the resonator) cutting to the far field (far away from the resonator) cutting. Common methods for controlling this include collimation, adaptive optics, or the use of a constant beam length axis.
Five and six-axis machines also permit cutting formed workpieces. In addition, there are various methods of orienting the laser beam to a shaped workpiece, maintaining a proper focus distance and nozzle standoff.
Pulsing
Pulsed lasers, which provide a high-power burst of energy for a short period, are very effective in some laser cutting processes, particularly for piercing, or when very small holes or very low cutting speeds are required; if a constant laser beam were used instead, the heat could reach the point of melting the whole piece being cut.
Most industrial lasers have the ability to pulse or cut CW (continuous wave) under NC (numerical control) program control.
Double pulse lasers use a series of pulse pairs to improve material removal rate and hole quality. Essentially, the first pulse removes material from the surface and the second prevents the ejecta from adhering to the side of the hole or cut.
Power consumption
The main disadvantage of laser cutting is the high power consumption. Industrial laser efficiency may range from 5% to 45%. The power consumption and efficiency of any particular laser will vary depending on output power and operating parameters. This will depend on the type of laser and how well the laser is matched to the work at hand. The amount of laser cutting power required, known as heat input, for a particular job depends on the material type, thickness, process (reactive/inert) used, and desired cutting rate.
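The quoted 5% to 45% efficiency range translates directly into electrical demand, since the electrical input is the optical output divided by the wall-plug efficiency. A minimal sketch, using a 4 kW output figure (mentioned elsewhere in this article) as an assumed example:

```python
# Sketch of the wall-plug arithmetic implied by the 5-45% efficiency range
# quoted above: electrical input = optical output / efficiency.

def electrical_input_kw(output_kw: float, efficiency: float) -> float:
    """Electrical power drawn for a given optical output and wall-plug efficiency."""
    if not 0 < efficiency <= 1:
        raise ValueError("efficiency must be a fraction between 0 and 1")
    return output_kw / efficiency

if __name__ == "__main__":
    # A 4 kW laser at the extremes of the quoted 5-45% efficiency range:
    print(electrical_input_kw(4.0, 0.05))  # 80.0 kW of electrical input
    print(electrical_input_kw(4.0, 0.45))  # ~8.9 kW of electrical input
```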
Production and cutting rates
The maximum cutting rate (production rate) is limited by a number of factors including laser power, material thickness, process type (reactive or inert), and material properties. Common industrial systems (≥1 kW) cut carbon steel over a wide range of thicknesses. For many purposes, a laser can be up to thirty times faster than standard sawing.
| Technology | Metallurgy | null |
532175 | https://en.wikipedia.org/wiki/Transfer%20RNA | Transfer RNA | Transfer RNA (abbreviated tRNA and formerly referred to as sRNA, for soluble RNA) is an adaptor molecule composed of RNA, typically 76 to 90 nucleotides in length (in eukaryotes). In a cell, it provides the physical link between the genetic code in messenger RNA (mRNA) and the amino acid sequence of proteins, carrying the correct sequence of amino acids to be combined by the protein-synthesizing machinery, the ribosome. Each three-nucleotide codon in mRNA is complemented by a three-nucleotide anticodon in tRNA. As such, tRNAs are a necessary component of translation, the biological synthesis of new proteins in accordance with the genetic code.
Overview
The process of translation starts with the information stored in the nucleotide sequence of DNA. This is first transformed into mRNA, then tRNA specifies which three-nucleotide codon from the genetic code corresponds to which amino acid. Each mRNA codon is recognized by a particular type of tRNA, which docks to it along a three-nucleotide anticodon, and together they form three complementary base pairs.
On the other end of the tRNA is a covalent attachment to the amino acid corresponding to the anticodon sequence, with each type of tRNA attaching to a specific amino acid. Because the genetic code contains multiple codons that specify the same amino acid, there are several tRNA molecules bearing different anticodons which carry the same amino acid.
The covalent attachment to the tRNA 3' end is catalysed by enzymes called aminoacyl tRNA synthetases. During protein synthesis, tRNAs with attached amino acids are delivered to the ribosome by proteins called elongation factors, which aid in association of the tRNA with the ribosome, synthesis of the new polypeptide, and translocation (movement) of the ribosome along the mRNA. If the tRNA's anticodon matches the mRNA, another tRNA already bound to the ribosome transfers the growing polypeptide chain from its 3' end to the amino acid attached to the 3' end of the newly delivered tRNA, a reaction catalysed by the ribosome. A large number of the individual nucleotides in a tRNA molecule may be chemically modified, often by methylation or deamidation. These unusual bases sometimes affect the tRNA's interaction with ribosomes and sometimes occur in the anticodon to alter base-pairing properties.
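The codon–anticodon pairing described above can be expressed as a simple reverse-complement operation: read 5′ to 3′, the anticodon of an unmodified tRNA is the reverse complement of the mRNA codon. A minimal illustrative sketch, ignoring modified bases:

```python
# Minimal sketch of the codon-anticodon complementarity described above: read
# 5'->3', the anticodon is the reverse complement of the mRNA codon (with
# unmodified bases; modified bases such as inosine are ignored here).

COMPLEMENT = {"A": "U", "U": "A", "G": "C", "C": "G"}

def anticodon(codon: str) -> str:
    """Return the 5'->3' anticodon that pairs with a 5'->3' mRNA codon."""
    return "".join(COMPLEMENT[base] for base in reversed(codon.upper()))

if __name__ == "__main__":
    print(anticodon("AUG"))  # CAU, the anticodon pairing with the methionine codon
    print(anticodon("GGC"))  # GCC, one anticodon able to read a glycine codon
```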
Structure
The structure of tRNA can be decomposed into its primary structure, its secondary structure (usually visualized as the cloverleaf structure), and its tertiary structure (all tRNAs have a similar L-shaped 3D structure that allows them to fit into the P and A sites of the ribosome). The cloverleaf structure becomes the 3D L-shaped structure through coaxial stacking of the helices, which is a common RNA tertiary structure motif. The lengths of each arm, as well as the loop 'diameter', in a tRNA molecule vary from species to species.
The tRNA structure consists of the following:
The acceptor stem is a 7- to 9-base pair (bp) stem made by the base pairing of the 5′-terminal nucleotide with the 3′-terminal nucleotide (which contains the CCA tail used to attach the amino acid). The acceptor stem may contain non-Watson-Crick base pairs.
The CCA tail is a cytosine-cytosine-adenine sequence at the 3′ end of the tRNA molecule. The amino acid loaded onto the tRNA by aminoacyl tRNA synthetases, to form aminoacyl-tRNA, is covalently bonded to the 3′-hydroxyl group on the CCA tail. This sequence is important for the recognition of tRNA by enzymes and critical in translation. In prokaryotes, the CCA sequence is transcribed in some tRNA sequences. In most prokaryotic tRNAs and eukaryotic tRNAs, the CCA sequence is added during processing and therefore does not appear in the tRNA gene.
The D loop is a 4- to 6-bp stem ending in a loop that often contains dihydrouridine.
The anticodon loop is a 5-bp stem whose loop contains the anticodon.
The TΨC loop is named so because of the characteristic presence of the unusual base Ψ in the loop, where Ψ is pseudouridine, a modified uridine. The modified base is often found within the sequence 5'-TΨCGA-3', with the T (ribothymidine, m5U) and A forming a base pair.
The variable loop or V loop sits between the anticodon loop and the TΨC loop and, as its name implies, varies in size from 3 to 21 bases. In some tRNAs, the "loop" is long enough to form a rigid stem, the variable arm. tRNAs with a V loop more than 10 bases long are classified as "class II"; the rest are called "class I".
Anticodon
An anticodon is a unit of three nucleotides corresponding to the three bases of an mRNA codon. Each tRNA has a distinct anticodon triplet sequence that can form three complementary base pairs to one or more codons for an amino acid. Some anticodons pair with more than one codon due to wobble base pairing. Frequently, the first nucleotide of the anticodon is one not found on mRNA: inosine, which can hydrogen bond to more than one base in the corresponding codon position. In the genetic code, it is common for a single amino acid to be specified by all four third-position possibilities, or at least by both pyrimidines and purines; for example, the amino acid glycine is coded for by the codon sequences GGU, GGC, GGA, and GGG. Other modified nucleotides may also appear at the first anticodon position (sometimes known as the "wobble position"), resulting in subtle changes to the genetic code, as for example in mitochondria. The possibility of wobble bases reduces the number of tRNA types required: instead of 61 types (one for each sense codon of the standard genetic code), only 31 tRNAs are required to translate, unambiguously, all 61 sense codons.
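The wobble rules outlined above can be sketched as a small lookup table: the first (5′) anticodon base pairs loosely with the third codon base, so a single anticodon, for example one with inosine at the wobble position, can read several of the glycine codons listed above. This is an illustrative sketch of the classic wobble scheme, not an exhaustive model of modified-base pairing.

```python
# Sketch of Crick wobble pairing: the first (5') anticodon base pairs with the
# third codon base under relaxed rules, so one tRNA can read several codons.
# The rule table below is the classic wobble scheme (I = inosine).

WOBBLE = {"G": {"C", "U"}, "C": {"G"}, "A": {"U"}, "U": {"A", "G"}, "I": {"A", "C", "U"}}
STRICT = {"A": "U", "U": "A", "G": "C", "C": "G"}

def codons_read(anticodon_5to3: str) -> set[str]:
    """All codons a given 5'->3' anticodon can decode under wobble rules."""
    wobble_base, mid, last = anticodon_5to3
    # Codon positions 1 and 2 pair strictly with anticodon positions 3 and 2;
    # only the third codon position is read under the relaxed wobble rules.
    codon_prefix = STRICT[last] + STRICT[mid]
    return {codon_prefix + third for third in WOBBLE[wobble_base]}

if __name__ == "__main__":
    # An anticodon with inosine at the wobble position reads three of the four
    # glycine codons mentioned above.
    print(sorted(codons_read("ICC")))  # ['GGA', 'GGC', 'GGU']
```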
Nomenclature
A tRNA is commonly named by its intended amino acid, by its anticodon sequence, or by both. These two features describe the main function of the tRNA but do not cover the whole diversity of tRNA variation; as a result, numerical suffixes are added to differentiate. tRNAs intended for the same amino acid are called "isotypes"; those with the same anticodon sequence are called "isoacceptors"; and those with the same anticodon but differing elsewhere are called "isodecoders".
Aminoacylation
Aminoacylation is the process of adding an aminoacyl group to a compound. It covalently links an amino acid to the CCA 3′ end of a tRNA molecule.
Each tRNA is aminoacylated (or charged) with a specific amino acid by an aminoacyl tRNA synthetase. There is normally a single aminoacyl tRNA synthetase for each amino acid, despite the fact that there can be more than one tRNA, and more than one anticodon for an amino acid. Recognition of the appropriate tRNA by the synthetases is not mediated solely by the anticodon, and the acceptor stem often plays a prominent role.
Reaction:
amino acid + ATP → aminoacyl-AMP + PPi
aminoacyl-AMP + tRNA → aminoacyl-tRNA + AMP
Certain organisms can have one or more aminoacyl-tRNA synthetases missing. This leads to charging of the tRNA by a chemically related amino acid; by use of an enzyme or enzymes, the tRNA is then modified to be correctly charged. For example, Helicobacter pylori has glutaminyl-tRNA synthetase missing. Thus, glutamate-tRNA synthetase charges tRNA-glutamine (tRNA-Gln) with glutamate. An amidotransferase then converts the acid side chain of the glutamate to the amide, forming the correctly charged Gln-tRNA-Gln.
Binding to ribosome
The ribosome has three binding sites for tRNA molecules that span the space between the two ribosomal subunits: the A (aminoacyl), P (peptidyl), and E (exit) sites. In addition, the ribosome has two other sites for tRNA binding that are used during mRNA decoding or during the initiation of protein synthesis. These are the T site (named after elongation factor Tu) and the I site (initiation). By convention, the tRNA binding sites are denoted with the site on the small ribosomal subunit listed first and the site on the large ribosomal subunit listed second. For example, the A site is often written A/A, the P site P/P, and the E site E/E. The binding proteins like L27, L2, L14, L15, and L16 at the A and P sites were determined by affinity labeling by A. P. Czernilofsky et al. (Proc. Natl. Acad. Sci. USA, pp. 230–234, 1974).
Once translation initiation is complete, the first aminoacyl tRNA is located in the P/P site, ready for the elongation cycle described below. During translation elongation, tRNA first binds to the ribosome as part of a complex with elongation factor Tu (EF-Tu) or its eukaryotic (eEF-1) or archaeal counterpart. This initial tRNA binding site is called the A/T site. In the A/T site, the A-site half resides in the small ribosomal subunit where the mRNA decoding site is located. The mRNA decoding site is where the mRNA codon is read out during translation. The T-site half resides mainly on the large ribosomal subunit where EF-Tu or eEF-1 interacts with the ribosome. Once mRNA decoding is complete, the aminoacyl-tRNA is bound in the A/A site and is ready for the next peptide bond to be formed to its attached amino acid. The peptidyl-tRNA, which transfers the growing polypeptide to the aminoacyl-tRNA bound in the A/A site, is bound in the P/P site. Once the peptide bond is formed, the tRNA in the P/P site is deacylated, i.e. has a free 3' end, and the tRNA in the A/A site bears the growing polypeptide chain. To allow for the next elongation cycle, the tRNAs then move through hybrid A/P and P/E binding sites, before completing the cycle and residing in the P/P and E/E sites. Once the A/A and P/P tRNAs have moved to the P/P and E/E sites, the mRNA has also moved over by one codon and the A/T site is vacant, ready for the next round of mRNA decoding. The tRNA bound in the E/E site then leaves the ribosome.
The P/I site is actually the first to bind to aminoacyl tRNA, which is delivered by an initiation factor called IF2 in bacteria. However, the existence of the P/I site in eukaryotic or archaeal ribosomes has not yet been confirmed. The P-site protein L27 has been determined by affinity labeling by E. Collatz and A. P. Czernilofsky (FEBS Lett., Vol. 63, pp. 283–286, 1976).
tRNA genes
Organisms vary in the number of tRNA genes in their genome. For example, the nematode worm C. elegans, a commonly used model organism in genetics studies, has 29,647 genes in its nuclear genome, of which 620 code for tRNA. The budding yeast Saccharomyces cerevisiae has 275 tRNA genes in its genome. The number of tRNA genes per genome can vary widely, with bacterial species from groups such as Fusobacteria and Tenericutes having around 30 genes per genome while complex eukaryotic genomes such as the zebrafish (Danio rerio) can bear more than 10 thousand tRNA genes.
In the human genome, which, according to January 2013 estimates, has about 20,848 protein coding genes in total, there are 497 nuclear genes encoding cytoplasmic tRNA molecules, and 324 tRNA-derived pseudogenes—tRNA genes thought to be no longer functional (although pseudo tRNAs have been shown to be involved in antibiotic resistance in bacteria). As with all eukaryotes, there are 22 mitochondrial tRNA genes in humans. Mutations in some of these genes have been associated with severe diseases like the MELAS syndrome. Regions in nuclear chromosomes, very similar in sequence to mitochondrial tRNA genes, have also been identified (tRNA-lookalikes). These tRNA-lookalikes are also considered part of the nuclear mitochondrial DNA (genes transferred from the mitochondria to the nucleus). The phenomenon of multiple nuclear copies of mitochondrial tRNA (tRNA-lookalikes) has been observed in many higher organisms from human to the opossum suggesting the possibility that the lookalikes are functional.
Cytoplasmic tRNA genes can be grouped into 49 families according to their anticodon features. These genes are found on all chromosomes except chromosome 22 and the Y chromosome. High clustering on 6p is observed (140 tRNA genes), as well as on chromosome 1.
The HGNC, in collaboration with the Genomic tRNA Database (GtRNAdb) and experts in the field, has approved unique names for human genes that encode tRNAs.
Typically, tRNA genes from Bacteria are shorter (mean = 77.6 bp) than tRNA genes from Archaea (mean = 83.1 bp) and eukaryotes (mean = 84.7 bp). The mature tRNA follows an opposite pattern, with tRNAs from Bacteria usually being longer (median = 77.6 nt) than tRNAs from Archaea (median = 76.8 nt), and eukaryotes exhibiting the shortest mature tRNAs (median = 74.5 nt).
Evolution
Genomic tRNA content is a differentiating feature of genomes among biological domains of life: Archaea present the simplest situation in terms of genomic tRNA content, with a uniform number of gene copies; Bacteria have an intermediate situation; and Eukarya present the most complex situation. Eukarya present not only more tRNA gene content than the other two domains but also a high variation in gene copy number among different isoacceptors, and this complexity seems to be due to duplications of tRNA genes and changes in anticodon specificity.
Evolution of the tRNA gene copy number across different species has been linked to the appearance of specific tRNA modification enzymes (uridine methyltransferases in Bacteria, and adenosine deaminases in Eukarya), which increase the decoding capacity of a given tRNA. As an example, tRNAAla encodes four different tRNA isoacceptors (AGC, UGC, GGC and CGC). In Eukarya, AGC isoacceptors are extremely enriched in gene copy number in comparison to the rest of isoacceptors, and this has been correlated with its A-to-I modification of its wobble base. This same trend has been shown for most amino acids of eukaryal species. Indeed, the effect of these two tRNA modifications is also seen in codon usage bias. Highly expressed genes seem to be enriched in codons that are exclusively using codons that will be decoded by these modified tRNAs, which suggests a possible role of these codons—and consequently of these tRNA modifications—in translation efficiency.
Many species have lost specific tRNAs during evolution. For instance, both mammals and birds lack the same 14 out of the possible 64 tRNA genes, but other life forms contain these tRNAs. For translating codons for which an exactly pairing tRNA is missing, organisms resort to a strategy called wobbling, in which imperfectly matched tRNA/mRNA pairs still give rise to translation, although this strategy also increases the propensity for translation errors. The reasons why tRNA genes have been lost during evolution remain under debate but may relate to improved resistance to viral infection. Because nucleotide triplets can present more combinations than there are amino acids and associated tRNAs, there is redundancy in the genetic code, and several different 3-nucleotide codons can express the same amino acid. The resulting differences in codon usage among organisms are what necessitate codon optimization.
Hypothetical origin
The top half of tRNA (consisting of the T arm and the acceptor stem with 5′-terminal phosphate group and 3′-terminal CCA group) and the bottom half (consisting of the D arm and the anticodon arm) are independent units in structure as well as in function. The top half may have evolved first including the 3′-terminal genomic tag which originally may have marked tRNA-like molecules for replication in early RNA world. The bottom half may have evolved later as an expansion, e.g. as protein synthesis started in RNA world and turned it into a ribonucleoprotein world (RNP world). This proposed scenario is called genomic tag hypothesis. In fact, tRNA and tRNA-like aggregates have an important catalytic influence (i.e., as ribozymes) on replication still today. These roles may be regarded as 'molecular (or chemical) fossils' of RNA world. In March 2021, researchers reported evidence suggesting that an early form of transfer RNA could have been a replicator ribozyme molecule in the very early development of life, or abiogenesis.
Evolution of type I and type II tRNAs is explained to the last nucleotide by the three 31 nucleotide minihelix tRNA evolution theorem, which also describes the pre-life to life transition on Earth. Three 31 nucleotide minihelices of known sequence were ligated in pre-life to generate a 93 nucleotide tRNA precursor. In pre-life, a 31 nucleotide D loop minihelix (GCGGCGGUAGCCUAGCCUAGCCUACCGCCGC) was ligated to two 31 nucleotide anticodon loop minihelices (GCGGCGGCCGGGCU/???AACCCGGCCGCCGC; / indicates a U-turn conformation in the RNA backbone; ? indicates unknown base identity) to form the 93 nucleotide tRNA precursor. To generate type II tRNAs, a single internal 9 nucleotide deletion occurred within ligated acceptor stems (CCGCCGCGCGGCGG goes to GGCGG). To generate type I tRNAs, an additional, related 9 nucleotide deletion occurred within ligated acceptor stems within the variable loop region (CCGCCGCGCGGCGG goes to CCGCC). These two 9 nucleotide deletions are identical on complementary RNA strands. tRNAomes (all of the tRNAs of an organism) were generated by duplication and mutation.
In this model, life evolved from a polymer world that included RNA repeats and RNA inverted repeats (stem-loop-stems). Of particular importance were the 7 nucleotide U-turn loops (CU/???AA). After LUCA (the last universal common (cellular) ancestor), the T loop evolved to interact with the D loop at the tRNA "elbow" (T loop: UU/CAAAU, after LUCA). The polymer world progressed to a minihelix world and then to a tRNA world, which has endured for ~4 billion years. Analysis of tRNA sequences reveals a major successful pathway in the evolution of life on Earth.
tRNA-derived fragments
tRNA-derived fragments (or tRFs) are short molecules that emerge after cleavage of mature tRNAs or of the precursor transcript. Both cytoplasmic and mitochondrial tRNAs can produce fragments. There are at least four structural types of tRFs believed to originate from mature tRNAs, including the relatively long tRNA halves and the short 5'-tRFs, 3'-tRFs and i-tRFs. The precursor tRNA can be cleaved to produce molecules from the 5' leader or 3' trailer sequences. Cleavage enzymes include angiogenin, Dicer, RNase Z and RNase P. Especially in the case of angiogenin, the tRFs have a characteristically unusual cyclic phosphate at their 3' end and a hydroxyl group at the 5' end. tRFs appear to play a role in RNA interference, specifically in the suppression of retroviruses and retrotransposons that use tRNA as a primer for replication. Half-tRNAs cleaved by angiogenin are also known as tiRNAs. The biogenesis of smaller fragments, including those that function as piRNAs, is less understood.
tRFs have multiple dependencies and roles, such as exhibiting significant changes between sexes, among races, and with disease status. Functionally, they can be loaded onto Ago proteins and act through RNAi pathways, participate in the formation of stress granules, displace mRNAs from RNA-binding proteins, or inhibit translation. At the system or organismal level, the four types of tRFs have a diverse spectrum of activities. Functionally, tRFs are associated with viral infection, cancer, cell proliferation, and also with epigenetic transgenerational regulation of metabolism.
tRFs are not restricted to humans and have been shown to exist in multiple organisms.
Two online tools are available for those wishing to learn more about tRFs: the framework for the interactive exploration of mitochondrial and nuclear tRNA fragments (MINTbase) and the relational database of Transfer RNA related Fragments (tRFdb). MINTbase also provides a naming scheme for the naming of tRFs called tRF-license plates (or MINTcodes) that is genome independent; the scheme compresses an RNA sequence into a shorter string.
Engineered tRNAs
tRNAs with modified anticodons and/or acceptor stems can be used to modify the genetic code. Scientists have successfully repurposed codons (sense and stop) to accept amino acids (natural and novel), for both initiation (see: start codon) and elongation.
In 1990, tRNA (modified from the tRNA gene metY) was inserted into E. coli, causing it to initiate protein synthesis at the UAG stop codon, as long as it is preceded by a strong Shine-Dalgarno sequence. At initiation it not only inserts the traditional formylmethionine, but also formylglutamine, as glutamyl-tRNA synthase also recognizes the new tRNA. The experiment was repeated in 1993, now with an elongator tRNA modified to be recognized by the methionyl-tRNA formyltransferase. A similar result was obtained in Mycobacterium. Later experiments showed that the new tRNA was orthogonal to the regular AUG start codon showing no detectable off-target translation initiation events in a genomically recoded E. coli strain.
tRNA biogenesis
In eukaryotic cells, tRNAs are transcribed by RNA polymerase III as pre-tRNAs in the nucleus.
RNA polymerase III recognizes two highly conserved downstream promoter sequences: the 5′ intragenic control region (5′-ICR, D-control region, or A box), and the 3′-ICR (T-control region or B box) inside tRNA genes.
The first promoter begins at +8 of mature tRNAs and the second promoter is located 30–60 nucleotides downstream of the first promoter. The transcription terminates after a stretch of four or more thymidines.
Pre-tRNAs undergo extensive modifications inside the nucleus. Some pre-tRNAs contain introns that are spliced, or cut, to form the functional tRNA molecule; in bacteria these self-splice, whereas in eukaryotes and archaea they are removed by tRNA-splicing endonucleases. Eukaryotic pre-tRNA contains a bulge-helix-bulge (BHB) structure motif that is important for recognition and precise splicing of the tRNA intron by endonucleases. The position and structure of this motif are evolutionarily conserved. However, some organisms, such as unicellular algae, have a non-canonical position of the BHB motif, as well as of the 5′ and 3′ ends of the spliced intron sequence.
The 5′ sequence is removed by RNase P, whereas the 3′ end is removed by the tRNase Z enzyme.
A notable exception is in the archaeon Nanoarchaeum equitans, which does not possess an RNase P enzyme and has a promoter placed such that transcription starts at the 5′ end of the mature tRNA.
The non-templated 3′ CCA tail is added by a nucleotidyl transferase.
Before tRNAs are exported into the cytoplasm by Los1/Xpo-t, tRNAs are aminoacylated.
The order of the processing events is not conserved.
For example, in yeast, the splicing is not carried out in the nucleus but at the cytoplasmic side of mitochondrial membranes.
History
The existence of tRNA was first hypothesized by Francis Crick as the "adaptor hypothesis" based on the assumption that there must exist an adapter molecule capable of mediating the translation of the RNA alphabet into the protein alphabet. Paul C Zamecnik, Mahlon Hoagland, and Mary Louise Stephenson discovered tRNA. Significant research on structure was conducted in the early 1960s by Alex Rich and Donald Caspar, two researchers in Boston, the Jacques Fresco group in Princeton University and a United Kingdom group at King's College London. In 1965, Robert W. Holley of Cornell University reported the primary structure and suggested three secondary structures. tRNA was first crystallized in Madison, Wisconsin, by Robert M. Bock. The cloverleaf structure was ascertained by several other studies in the following years and was finally confirmed using X-ray crystallography studies in 1974. Two independent groups, Kim Sung-Hou working under Alexander Rich and a British group headed by Aaron Klug, published the same crystallography findings within a year.
Clinical relevance
Interference with aminoacylation may be useful as an approach to treating some diseases: cancerous cells may be relatively vulnerable to disturbed aminoacylation compared to healthy cells. The protein synthesis associated with cancer and viral biology is often very dependent on specific tRNA molecules. For instance, in liver cancer, charging of tRNA-Lys-CUU with lysine sustains liver cancer cell growth and metastasis, whereas healthy cells have a much lower dependence on this tRNA to support cellular physiology. Similarly, hepatitis E virus requires a tRNA landscape that substantially differs from that associated with uninfected cells. Hence, inhibition of aminoacylation of specific tRNA species is considered a promising novel avenue for the rational treatment of a plethora of diseases.
| Biology and health sciences | Nucleic acids | Biology |
532379 | https://en.wikipedia.org/wiki/Type%20species | Type species | In zoological nomenclature, a type species (species typica) is the species name with which the name of a genus or subgenus is considered to be permanently taxonomically associated, i.e., the species that contains the biological type specimen (or specimens). A similar concept is used for suprageneric groups and called a type genus.
In botanical nomenclature, these terms have no formal standing under the code of nomenclature, but are sometimes borrowed from zoological nomenclature. In botany, the type of a genus name is a specimen (or, rarely, an illustration) which is also the type of a species name. The species name with that type can also be referred to as the type of the genus name. Names of genus and family ranks, the various subdivisions of those ranks, and some higher-rank names based on genus names, have such types.
In bacteriology, a type species is assigned for each genus. Whether or not currently recognized as valid, every named genus or subgenus in zoology is theoretically associated with a type species. In practice, however, there is a backlog of untypified names defined in older publications when it was not required to specify a type.
Use in zoology
A type species is both a concept and a practical system that is used in the classification and nomenclature (naming) of animals. The "type species" represents the reference species and thus "definition" for a particular genus name. Whenever a taxon containing multiple species must be divided into more than one genus, the type species automatically assigns the name of the original taxon to one of the resulting new taxa, the one that includes the type species.
The term "type species" is regulated in zoological nomenclature by article 42.3 of the International Code of Zoological Nomenclature, which defines a type species as the name-bearing type of the name of a genus or subgenus (a "genus-group name"). In the Glossary, type species is defined as
The type species permanently attaches a formal name (the generic name) to a genus by providing just one species within that genus to which the genus name is permanently linked (i.e. the genus must include that species if it is to bear the name). The species name in turn is fixed, in theory, to a type specimen.
For example, the type species for the land snail genus Monacha is Helix cartusiana, the name under which the species was first described, known as Monacha cartusiana when placed in the genus Monacha. That genus is currently placed within the family Hygromiidae. The type genus for that family is the genus Hygromia.
The concept of the type species in zoology was introduced by Pierre André Latreille.
Citing
The International Code of Zoological Nomenclature states that the original name (binomen) of the type species should always be cited. It gives an example in Article 67.1: Astacus marinus was later designated as the type species of the genus Homarus, thus giving it the name Homarus marinus. However, the type species of Homarus should always be cited using its original name, i.e. Astacus marinus, even though that name is a junior synonym of Cancer gammarus.
Although the International Code of Nomenclature for algae, fungi, and plants does not contain the same explicit statement, examples make it clear that the original name is used, so that the "type species" of a genus name need not have a name within that genus. Thus in Article 10, Ex. 3, the type of the genus name Elodes is quoted as the type of the species name Hypericum aegypticum, not as the type of the species name Elodes aegyptica. (Elodes is not now considered distinct from Hypericum.)
| Biology and health sciences | Taxonomic rank | Biology |
532405 | https://en.wikipedia.org/wiki/Quantum%20number | Quantum number | In quantum physics and chemistry, quantum numbers are quantities that characterize the possible states of the system.
To fully specify the state of the electron in a hydrogen atom, four quantum numbers are needed. The traditional set of quantum numbers includes the principal, azimuthal, magnetic, and spin quantum numbers. To describe other systems, different quantum numbers are required. For subatomic particles, one needs to introduce new quantum numbers, such as the flavour of quarks, which have no classical correspondence.
Quantum numbers are closely related to eigenvalues of observables. When the corresponding observable commutes with the Hamiltonian of the system, the quantum number is said to be "good", and acts as a constant of motion in the quantum dynamics.
History
Electronic quantum numbers
In the era of the old quantum theory, starting from Max Planck's proposal of quanta in his model of blackbody radiation (1900) and Albert Einstein's adaptation of the concept to explain the photoelectric effect (1905), and until Erwin Schrödinger published his eigenfunction equation in 1926, the concept behind quantum numbers developed based on atomic spectroscopy and theories from classical mechanics with extra ad hoc constraints. Many results from atomic spectroscopy had been summarized in the Rydberg formula involving differences between two series of energies related by integer steps. The model of the atom, first proposed by Niels Bohr in 1913, relied on a single quantum number. Together with Bohr's constraint that radiation absorption is not classical, it was able to explain the Balmer series portion of Rydberg's atomic spectrum formula.
As Bohr notes in his subsequent Nobel lecture, the next step was taken by Arnold Sommerfeld in 1915. Sommerfeld's atomic model added a second quantum number and the concept of quantized phase integrals to justify them. Sommerfeld's model was still essentially two-dimensional, modeling the electron as orbiting in a plane; in 1919 he extended his work to three dimensions using 'space quantization' in place of the quantized phase integrals. Karl Schwarzschild and Sommerfeld's student Paul Epstein independently showed that adding a third quantum number gave a complete account of the Stark effect results.
A consequence of space quantization was that the electron's orbital interaction with an external magnetic field would be quantized. This seemed to be confirmed when the results of the Stern-Gerlach experiment reported quantized results for silver atoms in an inhomogeneous magnetic field. The confirmation would turn out to be premature: more quantum numbers would be needed.
The fourth and fifth quantum numbers of the atomic era arose from attempts to understand the Zeeman effect. Like the Stern-Gerlach experiment, the Zeeman effect reflects the interaction of atoms with a magnetic field; in a weak field the experimental results were called "anomalous" because they diverged from any theory at the time. Wolfgang Pauli's solution to this issue was to introduce another quantum number taking only two possible values, ±1/2. This would ultimately become the quantized values of the projection of spin, an intrinsic angular momentum quantum of the electron. In 1927 Ronald Fraser demonstrated that the quantization in the Stern-Gerlach experiment was due to the magnetic moment associated with the electron spin rather than its orbital angular momentum. Pauli's success in developing the arguments for a spin quantum number without relying on classical models set the stage for the development of quantum numbers for elementary particles in the remainder of the 20th century.
Bohr, with his Aufbau or "building up" principle, and Pauli, with his exclusion principle, connected the atom's electronic quantum numbers into a framework for predicting the properties of atoms. When Schrödinger published his wave equation and calculated the energy levels of hydrogen, these two principles carried over to become the basis of atomic physics.
Nuclear quantum numbers
With successful models of the atom, the attention of physics turned to models of the nucleus. Beginning with Heisenberg's initial model of proton-neutron binding in 1932, Eugene Wigner introduced isospin in 1937, the first 'internal' quantum number unrelated to a symmetry in real space-time.
Connection to symmetry
As quantum mechanics developed, abstraction increased and models based on symmetry and invariance played increasing roles. Two years before his work on the quantum wave equation, Schrödinger applied the symmetry ideas originated by Emmy Noether and Hermann Weyl to the electromagnetic field. As quantum electrodynamics developed in the 1930s and 1940s, group theory became an important tool. By 1953 Chen Ning Yang had become obsessed with the idea that group theory could be applied to connect the conserved quantum numbers of nuclear collisions to symmetries in a field theory of nucleons. With Robert Mills, Yang developed a non-abelian gauge theory based on the conservation of the nuclear isospin quantum numbers.
General properties
Good quantum numbers correspond to eigenvalues of operators that commute with the Hamiltonian, quantities that can be known with precision at the same time as the system's energy. Specifically, observables that commute with the Hamiltonian are simultaneously diagonalizable with it, so their eigenvalues and the energy (the eigenvalues of the Hamiltonian) are not limited by an uncertainty relation arising from non-commutativity. Together, a specification of all of the quantum numbers of a quantum system fully characterizes a basis state of the system, and all of them can in principle be measured together. Many observables have discrete spectra (sets of eigenvalues) in quantum mechanics, so the quantities can only be measured in discrete values. In particular, this leads to quantum numbers that take values in discrete sets of integers or half-integers, although they can approach infinity in some cases.
The tally of quantum numbers varies from system to system and has no universal answer. Hence these parameters must be found for each system to be analyzed. A quantized system requires at least one quantum number. The dynamics (i.e. time evolution) of any quantum system are described by a quantum operator in the form of a Hamiltonian, . There is one quantum number of the system corresponding to the system's energy; i.e., one of the eigenvalues of the Hamiltonian. There is also one quantum number for each linearly independent operator that commutes with the Hamiltonian. A complete set of commuting observables (CSCO) that commute with the Hamiltonian characterizes the system with all its quantum numbers. There is a one-to-one relationship between the quantum numbers and the operators of the CSCO, with each quantum number taking one of the eigenvalues of its corresponding operator. As a result of the different basis that may be arbitrarily chosen to form a complete set of commuting operators, different sets of quantum numbers may be used for the description of the same system in different situations.
Electron in a hydrogen-like atom
Four quantum numbers can describe an electron energy level in a hydrogen-like atom completely:
Principal quantum number (n)
Azimuthal quantum number (ℓ)
Magnetic quantum number (mℓ)
Spin quantum number (ms)
These quantum numbers are also used in the classical description of nuclear particle states (e.g. protons and neutrons). A quantum description of molecular orbitals requires other quantum numbers, because the symmetries of the molecular system are different.
Principal quantum number
The principal quantum number n describes the electron shell of an electron. The value of n ranges from 1 to the shell containing the outermost electron of that atom, that is n = 1, 2, 3, ....
For example, in caesium (Cs), the outermost valence electron is in the shell with energy level 6, so an electron in caesium can have an n value from 1 to 6. The average distance between the electron and the nucleus increases with n.
Azimuthal quantum number
The azimuthal quantum number ℓ, also known as the orbital angular momentum quantum number, describes the subshell, and gives the magnitude of the orbital angular momentum through the relation
L² = ħ² ℓ(ℓ + 1).
In chemistry and spectroscopy, ℓ = 0 is called an s orbital, ℓ = 1 a p orbital, ℓ = 2 a d orbital, and ℓ = 3 an f orbital.
The value of ℓ ranges from 0 to n − 1, so the first p orbital (ℓ = 1) appears in the second electron shell (n = 2), the first d orbital (ℓ = 2) appears in the third shell (n = 3), and so on.
A quantum number beginning in n = 3, ℓ = 0, describes an electron in the s orbital of the third electron shell of an atom. In chemistry, this quantum number is very important, since it specifies the shape of an atomic orbital and strongly influences chemical bonds and bond angles. The azimuthal quantum number can also denote the number of angular nodes present in an orbital. For example, for p orbitals, ℓ = 1 and thus the number of angular nodes in a p orbital is 1.
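As a concrete illustration of the relation above, the following short Python sketch (an addition to this text, with function and variable names of our choosing) tabulates the spectroscopic letter, the magnitude of the orbital angular momentum in units of ħ, and the number of angular nodes for ℓ = 0 to 3.

import math

LETTERS = "spdf"   # spectroscopic letters for l = 0, 1, 2, 3

def orbital_summary(l):
    """Spectroscopic letter, |L| in units of hbar, and number of angular nodes for a given l."""
    magnitude = math.sqrt(l * (l + 1))   # from |L| = hbar * sqrt(l(l + 1))
    return LETTERS[l], magnitude, l      # the number of angular nodes equals l

for l in range(4):
    letter, mag, nodes = orbital_summary(l)
    print(f"l = {l} ({letter} orbital): |L| = {mag:.3f} hbar, angular nodes = {nodes}")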
Magnetic quantum number
The magnetic quantum number mℓ describes the specific orbital within the subshell, and yields the projection of the orbital angular momentum along a specified axis:
Lz = mℓ ħ.
The values of mℓ range from −ℓ to ℓ, in integer steps.
The s subshell (ℓ = 0) contains only one orbital, and therefore the mℓ of an electron in an s orbital will always be 0. The p subshell (ℓ = 1) contains three orbitals, so the mℓ of an electron in a p orbital will be −1, 0, or 1. The d subshell (ℓ = 2) contains five orbitals, with mℓ values of −2, −1, 0, 1, and 2.
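To illustrate this counting of orbitals within a subshell, here is a small Python sketch (added for illustration; the function name is ours). It enumerates the allowed mℓ values for each ℓ and confirms the 1, 3, 5, 7 pattern.

def m_values(l):
    """Allowed magnetic quantum numbers m_l for a subshell with azimuthal quantum number l."""
    return list(range(-l, l + 1))

for l, name in zip(range(4), "spdf"):
    values = m_values(l)
    print(f"{name} subshell (l = {l}): m_l in {values} -> {len(values)} orbital(s)")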
Spin magnetic quantum number
The spin magnetic quantum number ms describes the intrinsic spin angular momentum of the electron within each orbital and gives the projection of the spin angular momentum along the specified axis:
Sz = ms ħ.
In general, the values of ms range from −s to s, where s is the spin quantum number, associated with the magnitude of the particle's intrinsic spin angular momentum:
|S| = ħ√(s(s + 1)).
An electron has spin quantum number s = 1/2, so ms will be +1/2 ("spin up") or −1/2 ("spin down"). Since electrons are fermions, they obey the Pauli exclusion principle: each electron state must have a different set of quantum numbers. Therefore, every orbital can be occupied by at most two electrons, one for each spin state.
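A short Python sketch (added here for illustration, not part of the original text) can enumerate every allowed combination of (n, ℓ, mℓ, ms) for a shell and confirm the 2n² capacity implied by the Pauli exclusion principle.

def shell_states(n):
    """All distinct (n, l, m_l, m_s) combinations allowed by the four quantum numbers."""
    states = []
    for l in range(n):                    # l = 0 .. n - 1
        for m_l in range(-l, l + 1):      # m_l = -l .. +l
            for m_s in (-0.5, +0.5):      # two spin projections
                states.append((n, l, m_l, m_s))
    return states

for n in (1, 2, 3):
    print(f"n = {n}: {len(shell_states(n))} states (2n^2 = {2 * n * n})")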
The Aufbau principle and Hund's Rules
A multi-electron atom can be modeled qualitatively as a hydrogen-like atom with higher nuclear charge and correspondingly more electrons. The occupation of the electron states in such an atom can be predicted by the Aufbau principle and Hund's empirical rules for the quantum numbers. The Aufbau principle fills orbitals in order of increasing n + ℓ, with the lowest n breaking ties; Hund's rule favors unpaired electrons in the outermost orbitals. These rules are empirical, but they can be related to electron physics.
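As a rough illustration of this Aufbau (Madelung) ordering, the following Python sketch generates the filling order by sorting subshells on (n + ℓ, n) and fills electrons greedily. It is a simplified model added here for clarity, not part of the article, and it ignores the well-known exceptions (for example chromium and copper) where the real ground-state configuration deviates from the rule.

LETTERS = "spdfg"

def filling_order(max_n=7):
    """Subshells sorted by the Madelung rule: increasing n + l, ties broken by smaller n."""
    subshells = [(n, l) for n in range(1, max_n + 1) for l in range(n)]
    return sorted(subshells, key=lambda nl: (nl[0] + nl[1], nl[0]))

def configuration(electron_count):
    """Greedy Aufbau filling; returns e.g. '1s2 2s2 2p4' (known exceptions are not handled)."""
    parts = []
    for n, l in filling_order():
        if electron_count <= 0:
            break
        occupancy = min(electron_count, 2 * (2 * l + 1))   # subshell capacity is 2(2l + 1)
        parts.append(f"{n}{LETTERS[l]}{occupancy}")
        electron_count -= occupancy
    return " ".join(parts)

print(configuration(8))    # oxygen: 1s2 2s2 2p4
print(configuration(19))   # potassium: ends in 4s1, i.e. 4s fills before 3d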
Spin-orbit coupled systems
When one takes the spin–orbit interaction into consideration, the L and S operators no longer commute with the Hamiltonian, and the eigenstates of the system no longer have well-defined orbital angular momentum and spin. Thus another set of quantum numbers should be used. This set includes
The total angular momentum quantum number j, which gives the magnitude of the total angular momentum through the relation J² = ħ² j(j + 1)
The projection of the total angular momentum along a specified axis, mj, which is analogous to mℓ above and satisfies mj = mℓ + ms and |mℓ + ms| ≤ j
Parity. This is the eigenvalue under reflection: positive (+1) for states which came from even ℓ and negative (−1) for states which came from odd ℓ. The former is also known as even parity and the latter as odd parity, and it is given by P = (−1)^ℓ.
For example, consider the following 8 states, defined by their quantum numbers:
State   n   ℓ   mℓ    ms      ℓ + s   ℓ − s   mℓ + ms
(1)     2   1   1     +1/2    3/2     1/2     3/2
(2)     2   1   1     −1/2    3/2     1/2     1/2
(3)     2   1   0     +1/2    3/2     1/2     1/2
(4)     2   1   0     −1/2    3/2     1/2     −1/2
(5)     2   1   −1    +1/2    3/2     1/2     −1/2
(6)     2   1   −1    −1/2    3/2     1/2     −3/2
(7)     2   0   0     +1/2    1/2     −1/2    1/2
(8)     2   0   0     −1/2    1/2     −1/2    −1/2
The quantum states in the system can be described as linear combinations of these 8 states. However, in the presence of spin–orbit interaction, if one wants to describe the same system by 8 states that are eigenvectors of the Hamiltonian (i.e. each represents a state that does not mix with others over time), one should consider the following 8 states:
j       mj      parity   origin
3/2     3/2     odd      coming from state (1) above
3/2     1/2     odd      coming from states (2) and (3) above
3/2     −1/2    odd      coming from states (4) and (5) above
3/2     −3/2    odd      coming from state (6) above
1/2     1/2     odd      coming from states (2) and (3) above
1/2     −1/2    odd      coming from states (4) and (5) above
1/2     1/2     even     coming from state (7) above
1/2     −1/2    even     coming from state (8) above
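The grouping of the 8 uncoupled states into j-multiplets can be checked mechanically. The Python sketch below is an illustration added here (function and variable names are ours); it applies the standard rule that j runs from |ℓ − s| to ℓ + s and that each j contributes 2j + 1 states.

from fractions import Fraction

def j_values(l, s=Fraction(1, 2)):
    """Total angular momentum quantum numbers j = |l - s|, |l - s| + 1, ..., l + s."""
    j = abs(l - s)
    values = []
    while j <= l + s:
        values.append(j)
        j += 1
    return values

total = 0
for l in (1, 0):                          # the 2p and 2s subshells of the example above
    for j in j_values(l):
        count = int(2 * j + 1)            # m_j runs from -j to +j
        parity = "even" if l % 2 == 0 else "odd"
        print(f"l = {l}: j = {j} ({parity} parity) contributes {count} states")
        total += count
print("total states:", total)             # 8, matching the tables above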
Atomic nuclei
In nuclei, the entire assembly of protons and neutrons (nucleons) has a resultant angular momentum due to the angular momenta of each nucleon, usually denoted I. If the total angular momentum of a neutron is jn = ℓ + s and for a proton is jp = ℓ + s (where s for protons and neutrons happens to be 1/2 again (see note)), then the nuclear angular momentum quantum numbers I are given by
I = |jn − jp|, |jn − jp| + 1, ..., (jn + jp) − 1, jn + jp.
Note: The orbital angular momenta of the nuclear (and atomic) states are all integer multiples of ħ while the intrinsic angular momenta of the neutron and proton are half-integer multiples. It should be immediately apparent that the combination of the intrinsic spins of the nucleons with their orbital motion will always give half-integer values for the total spin, I, of any odd-A nucleus and integer values for any even-A nucleus.
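As a concrete illustration of this vector-addition rule (an addition to the text, with inputs chosen by us), the following Python sketch lists the allowed values of I for given jp and jn; for jp = jn = 1/2, the deuteron-like case, it returns I = 0 or 1, and the observed deuteron ground state has I = 1.

from fractions import Fraction as F

def allowed_I(j_p, j_n):
    """Allowed nuclear spins I = |j_p - j_n|, |j_p - j_n| + 1, ..., j_p + j_n."""
    I = abs(j_p - j_n)
    values = []
    while I <= j_p + j_n:
        values.append(I)
        I += 1
    return values

print([str(I) for I in allowed_I(F(1, 2), F(1, 2))])   # one unpaired proton + one unpaired neutron: ['0', '1']
print([str(I) for I in allowed_I(F(3, 2), F(1, 2))])   # e.g. j_p = 3/2, j_n = 1/2: ['1', '2']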
Parity with the number I is used to label nuclear angular momentum states; examples for some isotopes of hydrogen (H), carbon (C), and sodium (Na) are:
1H: I = (1/2)+     9C: I = (3/2)−     20Na: I = 2+
2H: I = 1+         10C: I = 0+        21Na: I = (3/2)+
3H: I = (1/2)+     11C: I = (3/2)−    22Na: I = 3+
                   12C: I = 0+        23Na: I = (3/2)+
                   13C: I = (1/2)−    24Na: I = 4+
                   14C: I = 0+        25Na: I = (5/2)+
                   15C: I = (1/2)+    26Na: I = 3+
The reason for the unusual fluctuations in I, even by differences of just one nucleon, lies in the odd and even numbers of protons and neutrons: pairs of nucleons couple to a total angular momentum of zero (just like paired electrons in orbitals), leaving an odd or even number of unpaired nucleons. The property of nuclear spin is an important factor for the operation of NMR spectroscopy in organic chemistry, and MRI in nuclear medicine, due to the nuclear magnetic moment interacting with an external magnetic field.
Elementary particles
Elementary particles contain many quantum numbers which are usually said to be intrinsic to them. However, it should be understood that the elementary particles are quantum states of the standard model of particle physics, and hence the quantum numbers of these particles bear the same relation to the Hamiltonian of this model as the quantum numbers of the Bohr atom do to its Hamiltonian. In other words, each quantum number denotes a symmetry of the problem. It is more useful in quantum field theory to distinguish between spacetime and internal symmetries.
Typical quantum numbers related to spacetime symmetries are spin (related to rotational symmetry), the parity, C-parity and T-parity (related to the Poincaré symmetry of spacetime). Typical internal symmetries are lepton number and baryon number or the electric charge. (For a full list of quantum numbers of this kind see the article on flavour.)
Multiplicative quantum numbers
Most conserved quantum numbers are additive, so in an elementary particle reaction the sum of the quantum numbers should be the same before and after the reaction. However, some, usually called parities, are multiplicative; i.e., their product is conserved. All multiplicative quantum numbers belong to a symmetry (like parity) in which applying the symmetry transformation twice is equivalent to doing nothing (involution).
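The distinction can be made concrete with a toy check in Python (added here for illustration). It uses the standard C-parity assignments C(π0) = +1 and C(γ) = −1, under which the decay of a neutral pion to two photons conserves the multiplicative quantum number, while a decay to three photons would violate it.

from math import prod

def additive_conserved(initial, final):
    """Additive quantum numbers (e.g. electric charge): the sums must match."""
    return sum(initial) == sum(final)

def multiplicative_conserved(initial, final):
    """Multiplicative quantum numbers (e.g. C-parity): the products must match."""
    return prod(initial) == prod(final)

print(additive_conserved([+1, -1], [0, 0]))           # charges in e+ e- -> 2 photons: True
print(multiplicative_conserved([+1], [-1, -1]))       # C-parity in pi0 -> 2 photons: True (allowed)
print(multiplicative_conserved([+1], [-1, -1, -1]))   # C-parity in pi0 -> 3 photons: False (forbidden)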
| Physical sciences | Quantum mechanics | Physics |
532481 | https://en.wikipedia.org/wiki/Principal%20quantum%20number | Principal quantum number | In quantum mechanics, the principal quantum number (symbolized n) is one of four quantum numbers assigned to each electron in an atom to describe that electron's state. Its values are natural numbers (from one) making it a discrete variable.
Apart from the principal quantum number, the other quantum numbers for bound electrons are the azimuthal quantum number ℓ, the magnetic quantum number ml, and the spin quantum number s.
Overview and history
As n increases, the electron is also at a higher energy and is, therefore, less tightly bound to the nucleus. For higher n the electron is farther from the nucleus, on average. For each value of n there are n accepted ℓ (azimuthal) values ranging from 0 to n − 1 inclusively, hence higher-n electron states are more numerous. Accounting for two states of spin, each n-shell can accommodate up to 2n2 electrons.
In a simplistic one-electron model described below, the total energy of an electron is a negative inverse quadratic function of the principal quantum number n, leading to degenerate energy levels for each n > 1. In more complex systems—those having forces other than the nucleus–electron Coulomb force—these levels split. For multielectron atoms this splitting results in "subshells" parametrized by ℓ. Description of energy levels based on n alone gradually becomes inadequate for atomic numbers starting from 5 (boron) and fails completely on potassium (Z = 19) and afterwards.
The principal quantum number was first created for use in the semiclassical Bohr model of the atom, distinguishing between different energy levels. With the development of modern quantum mechanics, the simple Bohr model was replaced with a more complex theory of atomic orbitals. However, the modern theory still requires the principal quantum number.
Derivation
There is a set of quantum numbers associated with the energy states of the atom. The four quantum numbers n, ℓ, m, and s specify the complete and unique quantum state of a single electron in an atom, called its wave function or orbital. Two electrons belonging to the same atom cannot have the same values for all four quantum numbers, due to the Pauli exclusion principle. The Schrödinger wave equation reduces to three equations that, when solved, lead to the first three quantum numbers; therefore, the equations for the first three quantum numbers are all interrelated. The principal quantum number arose in the solution of the radial part of the wave equation as shown below.
The Schrödinger wave equation describes energy eigenstates with corresponding real numbers En, each a definite total energy. The bound state energies of the electron in the hydrogen atom are given by
En = E1 / n² = −13.6 eV / n², with n = 1, 2, 3, ...,
where E1 = −13.6 eV is the ground-state energy.
The parameter n can take only positive integer values. The concept of energy levels and notation were taken from the earlier Bohr model of the atom. Schrödinger's equation developed the idea from a flat two-dimensional Bohr atom to the three-dimensional wavefunction model.
In the Bohr model, the allowed orbits were derived from quantized (discrete) values of orbital angular momentum, L, according to the equation
L = nħ = n h / (2π),
where n = 1, 2, 3, ... is called the principal quantum number, and h is the Planck constant. This formula is not correct in quantum mechanics, as the angular momentum magnitude is described by the azimuthal quantum number, but the energy levels are accurate and classically they correspond to the sum of potential and kinetic energy of the electron.
The principal quantum number n represents the relative overall energy of each orbital. The energy level of each orbital increases as its distance from the nucleus increases. The sets of orbitals with the same n value are often referred to as an electron shell.
The minimum energy exchanged during any wave–matter interaction is the product of the wave frequency and the Planck constant. This causes the wave to display particle-like packets of energy called quanta. The difference between energy levels that have different n determines the emission spectrum of the element.
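As a worked example of how differences between levels with different n fix the emission spectrum, the following Python sketch (added here; the constants are rounded values) computes the first few Balmer-series wavelengths of hydrogen from En ≈ −13.6 eV / n².

E1_EV = 13.6057        # hydrogen ground-state binding energy in eV (approximate)
HC_EV_NM = 1239.842    # h*c in eV*nm (approximate)

def energy_level(n):
    """Hydrogen energy level (eV) for principal quantum number n."""
    return -E1_EV / n**2

def wavelength_nm(n_upper, n_lower):
    """Wavelength (nm) of the photon emitted in the transition n_upper -> n_lower."""
    return HC_EV_NM / (energy_level(n_upper) - energy_level(n_lower))

for n in (3, 4, 5, 6):   # Balmer series: transitions ending on n = 2
    print(f"n = {n} -> 2: {wavelength_nm(n, 2):.1f} nm")
# roughly 656, 486, 434 and 410 nm, the visible hydrogen lines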
In the notation of the periodic table, the main shells of electrons are labeled K (n = 1), L (n = 2), M (n = 3), and so on, based on the principal quantum number.
The principal quantum number is related to the radial quantum number, nr, by
n = nr + ℓ + 1,
where ℓ is the azimuthal quantum number and nr is equal to the number of nodes in the radial wavefunction.
The definite total energy for a particle moving in a Coulomb field, which has a discrete spectrum, is given by
En = −Z² ħ² / (2 m aB² n²),
where aB is the Bohr radius.
This discrete energy spectrum, which results from solving the quantum mechanical problem of electron motion in the Coulomb field, coincides with the spectrum obtained by applying the Bohr–Sommerfeld quantization rules to the classical equations. The radial quantum number determines the number of nodes of the radial wave function R(r).
Values
In chemistry, values n = 1, 2, 3, 4, 5, 6, 7 are used in relation to the electron shell theory, with expected inclusion of n = 8 (and possibly 9) for yet-undiscovered period 8 elements. In atomic physics, higher n sometimes occur in the description of excited states. Observations of the interstellar medium reveal atomic hydrogen spectral lines involving n on the order of hundreds; values up to 766 have been detected.
| Physical sciences | Atomic physics | Physics |
532508 | https://en.wikipedia.org/wiki/Breech%20birth | Breech birth | A breech birth is when a baby is born bottom first instead of head first, as is normal. Around 3–5% of pregnant women at term (37–40 weeks pregnant) have a breech baby. Due to their higher than average rate of possible complications for the baby, breech births are generally considered higher risk. Breech births also occur in many other mammals such as dogs and horses, see veterinary obstetrics.
Most babies in the breech position are delivered via caesarean section because it is seen as safer than being born vaginally. Doctors and midwives in the developing world often lack many of the skills required to safely assist women giving birth to a breech baby vaginally. Also, delivering all breech babies by caesarean section in developing countries is difficult to implement as there are not always resources available to provide this service.
Cause
With regard to the fetal presentation during pregnancy, three periods have been distinguished.
During the first period, which lasts until the 24th gestational week, the incidence of a longitudinal lie increases, with equal proportions of breech or cephalic presentations from this lie. This period is characterized by frequent changes of presentations. The fetuses in breech presentation during this period have the same probability for breech and cephalic presentation at delivery.
During the second period, lasting from the 25th to the 35th gestational week, the incidence of cephalic presentation increases, with a proportional decrease of breech presentation. The second period is characterized by a higher than random probability that the fetal presentation during this period will also be present at the time of delivery. The increase of this probability is gradual and identical for breech and cephalic presentations during this period.
In the third period, from the 36th gestational week onward, the incidence of cephalic and breech presentations remain stable, i.e. breech presentation around 3–4% and cephalic presentation approximately 95%. In the general population, incidence of breech presentation at preterm corresponds to the incidence of breech presentation when birth occurs.
A breech presentation at delivery occurs when the fetus does not turn to a cephalic presentation. This failure to change presentation can result from endogenous and exogenous factors. Endogenous factors involve fetal inability to adequately move, whereas exogenous factors refer to insufficient intrauterine space available for fetal movements.
The incidence of breech presentation is affected by both maternal and fetal diseases and medical conditions. When these factors are present, the probability of breech presentation is between 4% and 50%.
Rates in various medical conditions
Fetal entities:
First twin 17–30%
Second twin 28–39%
Stillborn 26%
Prader–Willi syndrome 50%, Werdnig–Hoffman syndrome 10%
Smith–Lemli–Opitz syndrome 40%
Fetal alcohol syndrome 40%
Potter anomaly 36%
Zellweger syndrome 27%
Myotonic dystrophy 21%, 13 trisomy syndrome 12%
18 trisomy syndrome 43%
21 trisomy syndrome 5%
de Lange syndrome 10%
Anencephalus 6–18%, Spina bifida 20–30%
Congenital hydrocephalus 24–37%
Osteogenesis imperfecta 33.3%
Amyoplasia 33.3%
Achondrogenesis 33.3%
Amelia 50%
Craniosynostosis 8%
Sacral agenesis 30.4%
Arthrogriposis multiplex congenita 33.3%
Congenital dislocation of the hip 33.3%
Hereditary sensory neuropathy type III 25%
Centronuclear myopathy 16.7%
Multiple pituitary hormone deficiency 50%
Isolated pituitary hormone deficiency 20%
Ectopic posterior pituitary gland 33.3%
Congenital bilateral perisilvian syndrome 33.3%
Symmetric fetal growth restriction 40%
Asymmetric fetal growth restriction 40%
Nonimmune hydrops fetalis 15%
Atresia ani 18.2%
Microcephalus 15.4%
Omphalocele 12.5%
Prematurity 40%
Placental and amniotic fluid entities:
Amniotic sheet perpendicular to the placenta 50%
Cornual–fundal implantation of the placenta 30%
Placenta previa 12.5%
Oligohydramnios 17%
Polyhydramnios 15.8%
Maternal entities:
Uterus arcuatus 22.6%
Uterus unicornuatus 33.3%
Uterus bicornuatus 34.8%
Uterus didelphys 30–41%
Uterus septus 45.8%
Leiomyoma uteri 9–20%
Spinal cord injury 10%
Carriers of Duchenne muscular dystrophy 17%
Combination of two medical entities:
First twin in uterus with two bodies 14.29%
Second twin in uterus with two bodies 18.52%.
Also, women with previous caesarean deliveries have a risk of breech presentation at term twice that of women with previous vaginal deliveries.
The highest possible probability of breech presentation of 50% indicates that breech presentation is a consequence of random filling of the intrauterine space, with the same probability of breech and cephalic presentation in a longitudinally elongated uterus.
Types
Types of breech depend on how the baby's legs are lying.
A frank breech (otherwise known as an extended breech) is where the baby's legs are up next to its abdomen, with its knees straight and its feet next to its ears. This is the most common type of breech.
A complete breech (or flexed breech) is when the baby appears as though it is sitting crossed-legged with its legs bent at the hips and knees.
A footling breech is when one or both of the baby's feet are born first instead of the pelvis. This is more common in babies born prematurely or before their due date.
A kneeling breech is when the baby is born knees first.
In addition to the above, breech births in which the sacrum is the fetal denominator can be classified by the position of the fetus. Thus sacro-anterior, sacro-transverse and sacro-posterior positions all exist, but left sacro-anterior is the most common presentation. Sacro-anterior indicates an easier delivery compared to other forms.
Complications
Umbilical cord prolapse may occur, particularly in the complete, footling, or kneeling breech. This is caused by the lowermost parts of the baby not completely filling the space of the dilated cervix. When the amniotic sac ruptures (the waters break), it is possible for the umbilical cord to drop down and become compressed. This complication severely diminishes oxygen flow to the baby, so the baby needs to be delivered immediately so that he or she can breathe. In these circumstances a caesarean section is likely to be recommended. If there is a delay in delivery, the brain can be damaged. Among full-term, head-down babies, cord prolapse is quite rare, occurring in 0.4 percent. Among frank breech babies the incidence is 0.5 percent, among complete breeches 5 percent, and among footling breeches 15 percent.
Head entrapment is caused by the failure of the fetal head to negotiate the maternal midpelvis. At full term, the fetal bitrochanteric diameter (the distance between the outer points of the hips) is about the same as the biparietal diameter (the transverse diameter of the skull)—in simplest terms, the size of the hips is the same as the size of the head. The relatively larger buttocks dilate the cervix as effectively as the head does in the typical head-down presentation. In contrast, the relative head size of a preterm baby is greater than the fetal buttocks. If the baby is preterm, it may be possible for the baby's body to emerge while the cervix has not dilated enough for the head to emerge.
Because the umbilical cord (the baby's oxygen supply) is significantly compressed while the head is in the pelvis during a breech birth, it is important that the delivery of the aftercoming fetal head not be delayed. If an arm is extended alongside the head, delivery will not occur. In that case the Løvset manoeuvre may be employed, or the arm may be manually brought to a position in front of the chest. The Løvset manoeuvre involves rotating the fetal body by holding the fetal pelvis. If the body is twisted so that an arm trails behind the shoulder, the arm will tend to cross down over the face to a position where it can be reached by the obstetrician's finger and brought to a position below the head. A similar rotation in the opposite direction is made to deliver the other arm. In order to present the smallest diameter (9.5 cm) to the pelvis, the baby's head must be flexed (chin to chest). If the head is in a deflexed position, the risk of entrapment is increased. Uterine contractions and maternal muscle tone encourage the head to flex.
Oxygen deprivation may occur from either cord prolapse or prolonged compression of the cord during birth, as in head entrapment. If oxygen deprivation is prolonged, it may cause permanent neurological damage (for instance, cerebral palsy) or death. It has been suggested that a fast vaginal delivery would mean the risk of stopping baby's oxygen supply is reduced. However, there is not enough research to show this and a quick delivery might cause more harm to the baby than a conservative approach to the birth.
Injury to the brain and skull may occur due to the rapid passage of the baby's head through the mother's pelvis. This causes rapid decompression of the baby's head. In contrast, a baby going through labor in the head-down position usually experiences gradual molding (temporary reshaping of the skull) over the course of a few hours. This sudden compression and decompression in breech birth may cause no problems at all, but it can injure the brain. This injury is more likely in preterm babies. The fetal head may be controlled by a special two-handed grip called the Mauriceau–Smellie–Veit maneuver or the elective application of forceps. This will be of value in controlling the rate of delivery of the head and reducing decompression. Related to potential head trauma, researchers have identified a relationship between breech birth and autism.
Squeezing the baby's abdomen can damage internal organs. Positioning the baby incorrectly while using forceps to deliver the after coming head can damage the spine or spinal cord. It is important for the birth attendant to be knowledgeable, skilled, and experienced with all variations of breech birth.
Factors influencing safety
Birth attendant's skill (and experience with breech birth) – The skill of the doctor or midwife and the number of breech births previously assisted is of crucial importance. Many of the dangers in vaginal birth for breech babies come from mistakes made by birth attendants. With the majority of breech babies being delivered by cesarean section, there is more risk that birth attendants will lose their skills in delivering breech babies and therefore increase the risk of harm to the baby during vaginal delivery.
Type of breech presentation – the frank breech has the most favorable outcomes in a vaginal birth, with many studies suggesting no difference in outcome compared to head-down babies. (Some studies, however, find that planned caesarean sections for all breech babies improve outcome. The difference may rest in part on the skill of the doctors who delivered babies in different studies.) Complete breech presentation is the next most favorable position, but these babies sometimes shift and become footling breeches during labour. Footling and kneeling breeches have a higher risk of cord prolapse and head entrapment.
Parity – Parity refers to the number of times a woman has given birth before. If a woman has given birth vaginally, her pelvis has "proven" it is big enough to allow a baby of that baby's size to pass through it. However, a head-down baby's head often molds (shifts its shape to fit the maternal pelvis) and so may present a smaller diameter than the same-size baby born breech.
Fetal size in relation to maternal pelvic size – If the mother's pelvis is roomy and the baby is not large, this is favorable for vaginal breech delivery. However, prenatal estimates of the size of the baby and the size of the pelvis are unreliable.
Hyperextension of the fetal head – this can be evaluated with ultrasound. Less than 5% of breech babies have their heads in the "star-gazing" position, the face looking straight upwards and the back of the head resting against the back of the neck. Caesarean delivery is absolutely necessary because vaginal birth with the baby's head in this position confers a high risk of spinal cord trauma and death.
Maturity of the baby – Premature babies appear to be at higher risk of complications if delivered vaginally than if delivered by caesarean section.
Progress of labor – A spontaneous, normally progressing, straightforward labor requiring no intervention is a favorable sign.
Second twins – If a first twin is born head down and the second twin is breech, the chances are good that the second twin can have a safe breech birth.
Management
As in labour with a baby in a normal head-down position, uterine contractions typically occur at regular intervals and gradually the cervix begins to thin and open. In the more common breech presentations, the baby's bottom (rather than feet or knees) is what is first to descend through the maternal pelvis and emerge from the vagina.
At the beginning of labour, the baby is generally in an oblique position, facing either the right or left side. In a term baby, the bottom is about the same size as the head. Descent is thus as for the presenting fetal head, and delay in descent is a cardinal sign of possible problems with the delivery of the head.
In order to begin the birth, a descent of the podalic pole along with compaction and internal rotation needs to occur. This happens when the mother's pelvic floor muscles cause the baby to turn so that it can be born with one hip directly in front of the other. At this point, the baby is facing one of the mother's inner thighs. Then, the shoulders follow the same path as the hips did. At this time the baby usually turns to face the mother's back. Next occurs external rotation, which is when the shoulders emerge as the baby's head enters the maternal pelvis. The combination of maternal muscle tone and uterine contractions causes the baby's head to flex, chin to chest. Then the back of the baby's head emerges and finally the face.
Due to the increased pressure during labour and birth, it is normal for the baby's leading hip to be bruised and genitalia to be swollen. Babies who assumed the frank breech position in utero may continue to hold their legs in this position for some days after birth.
Caesarean or vaginal delivery
When a baby is born bottom first there is more risk that the birth will not be straightforward and that the baby could be harmed. For example, when the baby's head passes through the mother's pelvis the umbilical cord can be compressed which prevents delivery of oxygenated blood to the baby. Due to this and other risks, babies in breech position are often born by a planned caesarean section in developed countries.
Caesarean section reduces the risk of harm or death for the baby if the baby is in breech position but does increase the risk of harm to the mother compared with a vaginal delivery. It is best if the baby is in a head-down position so that they can be born vaginally with less risk of harm to both mother and baby. The next section is looking at external cephalic version (ECV), which is a method that can help the baby turn from a breech position to a head-down position.
Vaginal birth of a breech baby has its risks but caesarean sections are not always available or possible, a mother might arrive in the hospital at a late stage of her labour or may choose not to have a caesarean section. In these cases, it is important that the clinical skills needed to deliver breech babies are not lost so that mothers and babies are as safe as possible. Compared with developed countries, planned caesarean sections have not produced as good results in developing countries – it is suggested that this is due to more breech vaginal deliveries being performed by experienced, skilled practitioners in these settings.
Twin breech
In twin pregnancies, it is very common for one or both babies to be in the breech position. Most often twin babies do not have the chance to turn around because they are born prematurely. If both babies are in the breech position and the mother has gone into labour early, a cesarean section may be the best option. About 30–40% of twin pregnancies result in only one baby being in the breech position. If this is the case, the babies can be born vaginally. After the first baby, who is not in the breech position, is delivered, the baby presenting in the breech position may turn itself around; if this does not happen, another procedure, called breech extraction, may be performed. Breech extraction involves the obstetrician grasping the second twin's feet and pulling them into the birth canal, which helps with delivering the second twin vaginally. However, if the second twin is larger than the first, complications with delivering the second twin vaginally may arise and a cesarean section should be offered. At times, the first twin (the twin closest to the birth canal) can be in the breech position with the second twin being in the cephalic position (vertex). When this occurs, the risks of complications are higher than normal. In particular, a serious complication known as locked twins can arise, in which both babies interlock their chins during labour. When this happens an urgent cesarean section is recommended.
Turning the baby
Turning the baby, technically known as external cephalic version (ECV), is when the baby is turned by gently pressing the mother's abdomen to push the baby from a bottom first position, to a head first position. In some circumstances, it may be necessary to press with more force. ECV does not always work, but it does improve the mother's chances of giving birth to her baby vaginally and avoiding a cesarean section. The World Health Organisation recommends that women should have a planned cesarean section only if an ECV has been tried and did not work.
Women who have an ECV when they are 36–40 weeks pregnant are more likely to have a vaginal delivery and less likely to have a cesarean section than those who do not have an ECV. Turning the baby before this time makes a head first birth more likely but ECV before the due date can increase the risk of early or premature birth which can cause problems to the baby.
There are treatments that can be used which might affect the success of an ECV. Drugs called beta-stimulant tocolytics help the woman's muscles to relax so that the pressure during the ECV does not have to be so great. Giving the woman these drugs before the ECV improves the chances of her having a vaginal delivery because the baby is more likely to turn and stay head down. Other treatments, such as using sound, pain-relief drugs such as an epidural, increasing the fluid around the baby, and increasing the amount of fluids given to the woman before the ECV, could all affect its success, but there is not enough research to make this clear.
Mechanism of Labor in Breech Presentation
In a breech delivery, the process involves a sequential progression of movements for the buttocks, shoulders, and head to facilitate the baby's passage through the birth canal. Here's a breakdown of each stage:
A. Buttocks
Engagement:
The buttocks engage in the oblique diameter of the pelvic inlet.
The bitrochanteric diameter (approximately 10 cm) leads during this phase.
Descent:
The buttocks descend until the anterior buttock reaches the pelvic floor.
Internal Rotation:
The anterior buttock rotates to align behind the symphysis pubis.
Further Descent and Lateral Flexion:
The anterior hip passes under the symphysis pubis.
This is followed by the posterior hip, enabling the delivery of the trunk and lower limbs.
Restitution:
After delivery, the buttocks return to their original oblique position.
B. Shoulders
Engagement:
The shoulders engage with the bisacromial diameter (approximately 12 cm) aligning in the same oblique diameter as the buttocks.
Descent and Internal Rotation:
The shoulders rotate into the anteroposterior diameter of the pelvic outlet.
Simultaneously, the fetal trunk externally rotates.
Delivery:
The posterior shoulder delivers first, followed by the anterior shoulder as the trunk flexes.
Restitution and External Rotation:
The shoulders rotate back to realign the fetal trunk in a dorsoanterior position.
C. Head
Engagement:
The fetal head engages in either the opposite oblique or transverse diameter.
The suboccipitofrontal diameter (approximately 10 cm) leads this phase.
Descent with Flexion:
As the head descends, flexion intensifies.
Internal Rotation:
The occiput rotates anteriorly, positioning itself behind the symphysis pubis.
Descent and Flexion for Delivery:
The subocciput hinges beneath the symphysis pubis.
Delivery of the head occurs through flexion, with the chin, mouth, nose, forehead, and occiput emerging sequentially.
Expulsion:
The final expulsion of the head depends on maternal bearing-down efforts rather than uterine contractions.
Notable cases
Chesa Boudin
Jordan Brady
Becky Garrison
Billy Joel
Jerry Lee Lewis
Bret Michaels
Nero
Tatum O'Neal
David Shields
Frank Sinatra
Wilhelm II, German Emperor
Pedro Zamora
Frank Zappa
| Biology and health sciences | Human reproduction | Biology |
532570 | https://en.wikipedia.org/wiki/Endothelium | Endothelium | The endothelium (plural: endothelia) is a single layer of squamous endothelial cells that line the interior surface of blood vessels and lymphatic vessels. The endothelium forms an interface between circulating blood or lymph in the lumen and the rest of the vessel wall.
Endothelial cells in direct contact with blood are called vascular endothelial cells whereas those in direct contact with lymph are known as lymphatic endothelial cells. Vascular endothelial cells line the entire circulatory system, from the heart to the smallest capillaries.
These cells have unique functions that include fluid filtration, such as in the glomerulus of the kidney, blood vessel tone, hemostasis, neutrophil recruitment, and hormone trafficking. Endothelium of the interior surfaces of the heart chambers is called endocardium. An impaired function can lead to serious health issues throughout the body.
Structure
The endothelium is a thin layer of single flat (squamous) cells that line the interior surface of blood vessels and lymphatic vessels.
Endothelium is of mesodermal origin. Both blood and lymphatic capillaries are composed of a single layer of endothelial cells called a monolayer. In straight sections of a blood vessel, vascular endothelial cells typically align and elongate in the direction of fluid flow.
Terminology
The foundational model of anatomy, an index of terms used to describe anatomical structures, makes a distinction between endothelial cells and epithelial cells on the basis of which tissues they develop from, and states that the presence of vimentin rather than keratin filaments separates these from epithelial cells. Many considered the endothelium a specialized epithelial tissue.
Function
The endothelium forms an interface between circulating blood or lymph in the lumen and the rest of the vessel wall. This forms a barrier between vessels and tissues and controls the flow of substances and fluid into and out of a tissue. It also controls the passage of materials and the transit of white blood cells into and out of the bloodstream. Excessive or prolonged increases in the permeability of the endothelium, as in cases of chronic inflammation, may lead to tissue swelling (edema). Altered barrier function is also implicated in cancer extravasation.
Endothelial cells are involved in many other aspects of vessel function, including:
Blood clotting (thrombosis and fibrinolysis). Under normal conditions, the endothelium provides a surface on which blood does not clot, because it contains and expresses substances that prevent clotting, including heparan sulfate which acts as a cofactor for activating antithrombin, a protein that inactivates several factors in the coagulation cascade.
Inflammation. Endothelial cells actively signal to white blood cells of the immune system during inflammation
Formation of new blood vessels (angiogenesis).
Constriction and enlargement of the blood vessel, called vasoconstriction and vasodilation, and hence the control of blood pressure
Blood vessel formation
The endothelium is involved in the formation of new blood vessels, called angiogenesis. Angiogenesis is a crucial process for the development of organs in the embryo and fetus, as well as the repair of damaged areas. The process is triggered by decreased tissue oxygen (hypoxia) or insufficient oxygen tension, leading to the new development of blood vessels lined with endothelial cells. Angiogenesis is regulated by signals that promote and inhibit the process. These pro- and antiangiogenic signals include integrins, chemokines, angiopoietins, oxygen-sensing agents, junctional molecules and endogenous inhibitors. Angiopoietin-2 works with VEGF to facilitate cell proliferation and migration of endothelial cells.
The general outline of angiogenesis is
activating signals binding to surface receptors of vascular endothelial cells.
activated endothelial cells release proteases leading to the degradation of the basement membrane
endothelial cells are freed to migrate from the existing blood vessels and begin to proliferate to form extensions towards the source of the angiogenic stimulus.
Host immune response
Endothelial cells express a variety of immune genes in an organ-specific manner. These genes include critical immune mediators and proteins that facilitate cellular communication with hematopoietic immune cells. Endothelial cells encode important features of the structural cell immune response in the epigenome and can therefore respond swiftly to immunological challenges. The contribution to host immunity by non-hematopoietic cells, such as endothelium, is called “structural immunity”.
Clinical significance
Endothelial dysfunction, or the loss of proper endothelial function, is a hallmark for vascular diseases, and is often regarded as a key early event in the development of atherosclerosis. Impaired endothelial function, causing hypertension and thrombosis, is often seen in patients with coronary artery disease, diabetes mellitus, hypertension, hypercholesterolemia, as well as in smokers. Endothelial dysfunction has also been shown to be predictive of future adverse cardiovascular events including stroke, heart disease, and is also present in inflammatory disease such as rheumatoid arthritis, diabetes, and systemic lupus erythematosus.
Endothelial dysfunction is a result of changes in endothelial function. After fat (lipid) accumulation and when stimulated by inflammation, endothelial cells become activated, which is characterized by the expression of molecules such as E-selectin, VCAM-1 and ICAM-1, which stimulate the adhesion of immune cells. Additionally, transcription factors, which are substances which act to increase the production of proteins within cells, become activated; specifically AP-1 and NF-κB, leading to increased expression of cytokines such as IL-1, TNFα and IFNγ, which promotes inflammation. This state of endothelial cells promotes accumulation of lipids and lipoproteins in the intima, leading to atherosclerosis, and the subsequent recruitment of white blood cells and platelets, as well as proliferation of smooth muscle cells, leading to the formation of a fatty streak. The lesions formed in the intima, and persistent inflammation lead to desquamation of endothelium, which disrupts the endothelial barrier, leading to injury and consequent dysfunction. In contrast, inflammatory stimuli also activate NF-κB-induced expression of the deubiquitinase A20 (TNFAIP3), which has been shown to intrinsically repair the endothelial barrier.
One of the main mechanisms of endothelial dysfunction is the diminishing of nitric oxide, often due to high levels of asymmetric dimethylarginine, which interfere with the normal L-arginine-stimulated nitric oxide synthesis and so leads to hypertension. The most prevailing mechanism of endothelial dysfunction is an increase in reactive oxygen species, which can impair nitric oxide production and activity via several mechanisms. The signalling protein ERK5 is essential for maintaining normal endothelial cell function. A further consequence of damage to the endothelium is the release of pathological quantities of von Willebrand factor, which promote platelet aggregation and adhesion to the subendothelium, and thus the formation of potentially fatal thrombi.
Angiosarcoma is cancer of the endothelium and is rare with only 300 cases per year in the US. However it generally has poor prognosis with a five-year survival rate of 35%.
Research
Endothelium in cancer
It has been recognised that the endothelial cells building tumour vasculature have distinct morphological characteristics, a different origin compared to physiological endothelium, and a distinct molecular signature. This provides an opportunity for the development of new biomarkers of tumour angiogenesis and could provide new anti-angiogenic druggable targets.
Endothelium in diet
A healthy diet abundant in fruits and vegetables has a beneficial impact on endothelial function, whilst a diet high in red and processed meats, fried foods, refined grains and processed sugar increases endothelial cell adhesion and atherogenic promoters. High-fat diets adversely affect endothelial function.
A Mediterranean diet has been found to improve endothelial function in adults which can reduce risk of cardiovascular disease. Walnut consumption improves endothelial function.
Endothelium in Covid-19
In April 2020, the presence of viral elements in endothelial cells of 3 patients who had died of COVID-19 was reported for the first time. The researchers from the University of Zurich and Harvard Medical School considered these findings to be a sign of a general endotheliitis in different organs, an inflammatory response of the endothelium to the infection that can lead or at least contribute to multi-organ failure in Covid-19 patients with comorbidities such as diabetes mellitus, hypertension and cardiovascular disease.
History
In 1865, the Swiss anatomist Wilhelm His Sr. first coined the term “endothelium”. In 1958, A. S. Todd of the University of St Andrews demonstrated that endothelium in human blood vessels have fibrinolytic activity.
| Biology and health sciences | Circulatory system | Biology |
532573 | https://en.wikipedia.org/wiki/Azimuthal%20quantum%20number | Azimuthal quantum number | In quantum mechanics, the azimuthal quantum number ℓ is a quantum number for an atomic orbital that determines its orbital angular momentum and describes aspects of the angular shape of the orbital. The azimuthal quantum number is the second of a set of quantum numbers that describe the unique quantum state of an electron (the others being the principal quantum number n, the magnetic quantum number mℓ, and the spin quantum number ms).
For a given value of the principal quantum number n (electron shell), the possible values of ℓ are the integers from 0 to n − 1. For instance, the n = 1 shell has only orbitals with ℓ = 0, and the n = 2 shell has only orbitals with ℓ = 0 and ℓ = 1.
For a given value of the azimuthal quantum number ℓ, the possible values of the magnetic quantum number mℓ are the integers from −ℓ to +ℓ, including 0. In addition, the spin quantum number ms can take two distinct values. The set of orbitals associated with a particular value of ℓ is sometimes collectively called a subshell.
While originally used just for isolated atoms, atomic-like orbitals play a key role in the configuration of electrons in compounds including gases, liquids and solids. The quantum number ℓ plays an important role here via the connection to the angular dependence of the spherical harmonics for the different orbitals around each atom.
Nomenclature
The term "azimuthal quantum number" was introduced by Arnold Sommerfeld in 1915 as part of an ad hoc description of the energy structure of atomic spectra. Only later with the quantum model of the atom was it understood that this number, , arises from quantization of orbital angular momentum. Some textbooks and the ISO standard 80000-10:2019 call the orbital angular momentum quantum number.
The energy levels of an atom in an external magnetic field depend upon the mℓ value, so it is sometimes called the magnetic quantum number.
The lowercase letter ℓ is used to denote the orbital angular momentum of a single particle. For a system with multiple particles, the capital letter L is used.
Relation to atomic orbitals
There are four quantum numbers (n, ℓ, mℓ, ms) connected with the energy states of an isolated atom's electrons. These four numbers specify the unique and complete quantum state of any single electron in the atom, and they combine to compose the electron's wavefunction, or orbital.
When solving to obtain the wave function, the Schrödinger equation resolves into three equations that lead to the first three quantum numbers, meaning that the three equations are interrelated. The azimuthal quantum number arises in solving the polar part of the wave equation, relying on the spherical coordinate system, which generally works best with models having sufficient aspects of spherical symmetry.
An electron's angular momentum, L, is related to its quantum number ℓ by the following equation:
L² Ψ = ℏ² ℓ(ℓ + 1) Ψ
where ℏ is the reduced Planck constant, L² is the orbital angular momentum operator and Ψ is the wavefunction of the electron. The quantum number ℓ is always a non-negative integer: 0, 1, 2, 3, etc. (Notably, L has no real meaning except in its use as the angular momentum operator; thus, it is standard practice to use the quantum number ℓ when referring to angular momentum).
Atomic orbitals have distinctive shapes, denoted by the letters s, p, d, f, etc. (employing a convention originating in spectroscopy). The wavefunctions of these orbitals take the form of spherical harmonics, and so are described by Legendre polynomials. The several orbitals relating to the different (integer) values of ℓ are sometimes called sub-shells, and are referred to by lowercase Latin letters (chosen for historical reasons), as shown in the table "Quantum subshells for the azimuthal quantum number".
Each of the different angular momentum states can take 2(2ℓ + 1) electrons. This is because the third quantum number mℓ (which can be thought of loosely as the quantized projection of the angular momentum vector on the z-axis) runs from −ℓ to ℓ in integer units, and so there are 2ℓ + 1 possible states. Each distinct n, ℓ, mℓ orbital can be occupied by two electrons with opposing spins (given by the quantum number ms = ±1/2), giving 2(2ℓ + 1) electrons overall. Orbitals with higher ℓ than given in the table are perfectly permissible, but these values cover all atoms so far discovered.
For a given value of the principal quantum number n, the possible values of ℓ range from 0 to n − 1; therefore, the n = 1 shell only possesses an s subshell and can only take 2 electrons, the n = 2 shell possesses an s and a p subshell and can take 8 electrons overall, the n = 3 shell possesses s, p, and d subshells and has a maximum of 18 electrons, and so on.
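The counting rules above are easy to check mechanically. The following short Python sketch (illustrative only; the function and variable names are not from the source) enumerates the subshells of a shell, using ℓ = 0 … n − 1, the 2ℓ + 1 values of mℓ, and two spin states per orbital:

```python
# Illustrative sketch of the counting rules described above:
# l runs from 0 to n-1, m_l runs from -l to +l, and each (n, l, m_l)
# orbital holds two electrons of opposite spin, so a subshell holds 2*(2l+1).

SUBSHELL_LETTERS = "spdfghik"  # conventional letters for l = 0, 1, 2, ...

def subshells(n):
    """Return (letter, l, capacity) for every subshell of shell n."""
    result = []
    for l in range(n):                        # allowed values: l = 0 .. n-1
        m_values = list(range(-l, l + 1))     # 2l + 1 values of m_l
        capacity = 2 * len(m_values)          # two spin states per orbital
        result.append((SUBSHELL_LETTERS[l], l, capacity))
    return result

for n in (1, 2, 3):
    shell = subshells(n)
    total = sum(capacity for _, _, capacity in shell)
    labels = ", ".join(f"{n}{letter}: {capacity}" for letter, _, capacity in shell)
    print(f"n = {n} -> {labels} (total {total} electrons)")
```

Running it reproduces the familiar shell capacities of 2, 8 and 18 electrons for n = 1, 2 and 3.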
A simplistic one-electron model results in energy levels depending on the principal number n alone. In more complex atoms these energy levels split for all n > 1, placing states of higher ℓ above states of lower ℓ. For example, the energy of 2p is higher than that of 2s, 3d occurs higher than 3p, which in turn is above 3s, etc. This effect eventually forms the block structure of the periodic table. No known atom possesses an electron having ℓ higher than three (f) in its ground state.
The angular momentum quantum number ℓ and the corresponding spherical harmonic govern the number of planar nodes going through the nucleus. A planar node can be described in an electromagnetic wave as the midpoint between crest and trough, which has zero magnitude. In an s orbital, no nodes go through the nucleus, therefore the corresponding azimuthal quantum number ℓ takes the value of 0. In a p orbital, one node traverses the nucleus, therefore ℓ has the value of 1 and L has the value √2 ℏ.
Depending on the value of n, there is an angular momentum quantum number ℓ and the following series. The wavelengths listed are for a hydrogen atom:
Addition of quantized angular momenta
Given a quantized total angular momentum j which is the sum of two individual quantized angular momenta ℓ1 and ℓ2,
the quantum number j associated with its magnitude can range from |ℓ1 − ℓ2| to ℓ1 + ℓ2 in integer steps,
where ℓ1 and ℓ2 are the quantum numbers corresponding to the magnitudes of the individual angular momenta.
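This selection rule is simple to tabulate. The sketch below (an illustration, not part of the source; the names are invented) lists the allowed total quantum numbers for two given momenta, stepping in integer increments from |ℓ1 − ℓ2| to ℓ1 + ℓ2; it also works for half-integer values such as an electron spin of 1/2:

```python
# Illustrative sketch of the addition rule quoted above: the total quantum
# number runs from |j1 - j2| to j1 + j2 in integer steps.

def allowed_total_j(j1, j2):
    """Allowed total angular momentum quantum numbers when adding j1 and j2."""
    j = abs(j1 - j2)
    values = []
    while j <= j1 + j2 + 1e-9:   # tiny tolerance so half-integer floats work
        values.append(j)
        j += 1
    return values

print(allowed_total_j(2, 1))     # [1, 2, 3]
print(allowed_total_j(1, 0.5))   # [0.5, 1.5]  (e.g. an l = 1 orbital plus spin 1/2)
```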
Total angular momentum of an electron in the atom
Due to the spin–orbit interaction in an atom, the orbital angular momentum L no longer commutes with the Hamiltonian, nor does the spin S. These therefore change over time. However, the total angular momentum J does commute with the one-electron Hamiltonian and so is constant. J is defined as
J = L + S,
with L being the orbital angular momentum and S the spin. The total angular momentum satisfies the same commutation relations as orbital angular momentum, namely
[J_i, J_j] = i ℏ ε_ijk J_k,
from which it follows that
[J_i, J²] = 0,
where J_i stands for J_x, J_y, and J_z.
The quantum numbers describing the system, which are constant over time, are now j and mj, defined through the action of J on the wavefunction Ψ:
J² Ψ = ℏ² j(j + 1) Ψ,
J_z Ψ = ℏ mj Ψ.
So that j is related to the norm of the total angular momentum and mj to its projection along a specified axis. The j number has a particular importance for relativistic quantum chemistry, often featuring as a subscript for deeper states near to the core for which spin–orbit coupling is important.
As with any angular momentum in quantum mechanics, the projection of J along other axes cannot be co-defined with J_z, because they do not commute.
The eigenvectors of j, s, mj and parity, which are also eigenvectors of the Hamiltonian, are linear combinations of the eigenvectors of ℓ, s, mℓ and ms.
Beyond isolated atoms
The angular momentum quantum numbers strictly refer to isolated atoms. However, they have wider uses for atoms in solids, liquids or gases. The quantum number ℓ corresponds to specific spherical harmonics and is commonly used to describe features observed in spectroscopic methods such as X-ray photoelectron spectroscopy and electron energy loss spectroscopy. (The notation is slightly different, with X-ray notation where K, L, M are used for excitations out of electron states with n = 1, 2, 3.)
The angular momentum quantum numbers are also used when the electron states are described in methods such as Kohn–Sham density functional theory or with gaussian orbitals. For instance, in silicon the electronic properties used in semiconductor devices are due to the p-like states with ℓ = 1 centered at each atom, while many properties of transition metals depend upon the d-like states with ℓ = 2.
History
The azimuthal quantum number was carried over from the Bohr model of the atom, and was posited by Arnold Sommerfeld. The Bohr model was derived from spectroscopic analysis of atoms in combination with the Rutherford atomic model. The lowest quantum level was found to have an angular momentum of zero. Orbits with zero angular momentum were considered as oscillating charges in one dimension and so described as "pendulum" orbits, but were not found in nature. In three dimensions the orbits become spherical without any nodes crossing the nucleus, similar (in the lowest-energy state) to a skipping rope that oscillates in one large circle.
| Physical sciences | Atomic physics | Physics |
532592 | https://en.wikipedia.org/wiki/Radiance | Radiance | In radiometry, radiance is the radiant flux emitted, reflected, transmitted or received by a given surface, per unit solid angle per unit projected area. Radiance is used to characterize diffuse emission and reflection of electromagnetic radiation, and to quantify emission of neutrinos and other particles. The SI unit of radiance is the watt per steradian per square metre (W·sr−1·m−2). It is a directional quantity: the radiance of a surface depends on the direction from which it is being observed.
The related quantity spectral radiance is the radiance of a surface per unit frequency or wavelength, depending on whether the spectrum is taken as a function of frequency or of wavelength.
Historically, radiance was called "intensity" and spectral radiance was called "specific intensity". Many fields still use this nomenclature. It is especially dominant in heat transfer, astrophysics and astronomy. "Intensity" has many other meanings in physics, with the most common being power per unit area (so the radiance is the intensity per solid angle in this case).
Description
Radiance is useful because it indicates how much of the power emitted, reflected, transmitted or received by a surface will be received by an optical system looking at that surface from a specified angle of view. In this case, the solid angle of interest is the solid angle subtended by the optical system's entrance pupil. Since the eye is an optical system, radiance and its cousin luminance are good indicators of how bright an object will appear. For this reason, radiance and luminance are both sometimes called "brightness". This usage is now discouraged (see the article Brightness for a discussion). The nonstandard usage of "brightness" for "radiance" persists in some fields, notably laser physics.
The radiance divided by the index of refraction squared is invariant in geometric optics. This means that for an ideal optical system in air, the radiance at the output is the same as the input radiance. This is sometimes called conservation of radiance. For real, passive, optical systems, the output radiance is at most equal to the input, unless the index of refraction changes. As an example, if you form a demagnified image with a lens, the optical power is concentrated into a smaller area, so the irradiance is higher at the image. The light at the image plane, however, fills a larger solid angle so the radiance comes out to be the same assuming there is no loss at the lens.
Spectral radiance expresses radiance as a function of frequency or wavelength. Radiance is the integral of the spectral radiance over all frequencies or wavelengths. For radiation emitted by the surface of an ideal black body at a given temperature, spectral radiance is governed by Planck's law, while the integral of its radiance, over the hemisphere into which its surface radiates, is given by the Stefan–Boltzmann law. Its surface is Lambertian, so that its radiance is uniform with respect to angle of view, and is simply the Stefan–Boltzmann integral divided by π. This factor is obtained from the solid angle 2π steradians of a hemisphere decreased by integration over the cosine of the zenith angle.
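As a numerical illustration of the Lambertian black-body relationship just described (a sketch with assumed constants and an assumed example temperature, not a calculation from the source), the radiance of an ideal black body is the Stefan–Boltzmann radiant exitance σT⁴ divided by π:

```python
import math

SIGMA = 5.670374419e-8   # Stefan–Boltzmann constant, W·m⁻²·K⁻⁴

def blackbody_radiance(temperature_k):
    """Radiance of an ideal (Lambertian) black body, in W·sr⁻¹·m⁻²."""
    # Radiant exitance sigma * T**4 divided by pi, as described above.
    return SIGMA * temperature_k ** 4 / math.pi

# Assumed example temperature of 5772 K (roughly the Sun's effective surface
# temperature) gives a radiance of about 2.0e7 W·sr⁻¹·m⁻².
print(f"{blackbody_radiance(5772):.3e}")
```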
Mathematical definitions
Radiance
Radiance of a surface, denoted Le,Ω ("e" for "energetic", to avoid confusion with photometric quantities, and "Ω" to indicate this is a directional quantity), is defined as
Le,Ω = ∂²Φe / (∂Ω ∂(A cos θ)),
where
∂ is the partial derivative symbol;
Φe is the radiant flux emitted, reflected, transmitted or received;
Ω is the solid angle;
A cos θ is the projected area.
In general Le,Ω is a function of viewing direction, depending on θ through cos θ and on the azimuth angle through ∂Φe/∂Ω. For the special case of a Lambertian surface, ∂²Φe/(∂Ω ∂A) is proportional to cos θ, and Le,Ω is isotropic (independent of viewing direction).
When calculating the radiance emitted by a source, A refers to an area on the surface of the source, and Ω to the solid angle into which the light is emitted. When calculating radiance received by a detector, A refers to an area on the surface of the detector and Ω to the solid angle subtended by the source as viewed from that detector. When radiance is conserved, as discussed above, the radiance emitted by a source is the same as that received by a detector observing it.
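For small sources and detectors the partial derivatives above are often approximated by finite ratios. The following sketch (illustrative only, with invented example numbers) estimates radiance as flux per solid angle per projected area:

```python
import math

def radiance_estimate(flux_w, solid_angle_sr, area_m2, theta_rad):
    """Finite-element estimate: L ~ flux / (solid angle * projected area)."""
    projected_area = area_m2 * math.cos(theta_rad)   # A cos(theta)
    return flux_w / (solid_angle_sr * projected_area)

# Hypothetical example: 1 mW received within a 1e-4 sr solid angle by a
# 1 cm^2 detector whose normal is tilted 30 degrees from the line of sight.
print(radiance_estimate(1e-3, 1e-4, 1e-4, math.radians(30)))  # ~1.15e5 W·sr⁻¹·m⁻²
```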
Spectral radiance
Spectral radiance in frequency of a surface, denoted Le,Ω,ν, is defined as
Le,Ω,ν = ∂Le,Ω / ∂ν,
where ν is the frequency.
Spectral radiance in wavelength of a surface, denoted Le,Ω,λ, is defined as
Le,Ω,λ = ∂Le,Ω / ∂λ,
where λ is the wavelength.
Conservation of basic radiance
Radiance of a surface is related to étendue by
Le,Ω = n² ∂Φe / ∂G,
where
n is the refractive index in which that surface is immersed;
G is the étendue of the light beam.
As the light travels through an ideal optical system, both the étendue and the radiant flux are conserved. Therefore, the basic radiance defined by
Le,Ω / n²
is also conserved. In real systems, the étendue may increase (for example due to scattering) or the radiant flux may decrease (for example due to absorption) and, therefore, basic radiance may decrease. However, étendue may not decrease and radiant flux may not increase and, therefore, basic radiance may not increase.
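A minimal sketch of this bookkeeping (illustrative only; the function and variable names are assumptions): for a lossless transition between media, holding the basic radiance Le,Ω/n² fixed means the radiance itself scales with the square of the refractive index:

```python
def radiance_after_transition(radiance_in, n_in, n_out):
    """Radiance in the second medium for a lossless transition,
    assuming the basic radiance L / n**2 is conserved."""
    basic_radiance = radiance_in / n_in ** 2   # conserved quantity
    return basic_radiance * n_out ** 2

L_air = 1000.0   # W·sr⁻¹·m⁻² in air (n ~ 1.0), an invented example value
print(radiance_after_transition(L_air, 1.0, 1.5))   # 2250.0 inside glass (n ~ 1.5)
```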
SI radiometry units
| Physical sciences | Electromagnetic radiation | Physics |
532906 | https://en.wikipedia.org/wiki/Orthomyxoviridae | Orthomyxoviridae | Orthomyxoviridae () is a family of negative-sense RNA viruses. It includes seven genera: Alphainfluenzavirus, Betainfluenzavirus, Gammainfluenzavirus, Deltainfluenzavirus, Isavirus, Thogotovirus, and Quaranjavirus. The first four genera contain viruses that cause influenza in birds (see also avian influenza) and mammals, including humans. Isaviruses infect salmon; the thogotoviruses are arboviruses, infecting vertebrates and invertebrates (such as ticks and mosquitoes). The Quaranjaviruses are also arboviruses, infecting vertebrates (birds) and invertebrates (arthropods).
The four genera of Influenza virus that infect vertebrates, which are identified by antigenic differences in their nucleoprotein and matrix protein, are as follows:
Alphainfluenzavirus infects humans, other mammals, and birds, and causes all flu pandemics
Betainfluenzavirus infects humans and seals
Gammainfluenzavirus infects humans and pigs
Deltainfluenzavirus infects pigs and cattle.
Structure
The influenzavirus virion is pleomorphic; the viral envelope can occur in spherical and filamentous forms. In general, the virus's morphology is ellipsoidal, with particles 100–120 nm in diameter, or filamentous, with particles 80–100 nm in diameter and up to 20 μm long. There are approximately 500 distinct spike-like surface projections in the envelope, each projecting 10–14 nm from the surface, with varying surface densities. The major glycoprotein (HA) spike is interposed irregularly by clusters of neuraminidase (NA) spikes, with a ratio of HA to NA of about 10 to 1.
The viral envelope, composed of a lipid bilayer membrane in which the glycoprotein spikes are anchored, encloses the nucleocapsids: nucleoproteins of different size classes with a loop at each end; the arrangement within the virion is uncertain. The ribonucleoproteins are filamentous and fall in the range of 50–130 nm long and 9–15 nm in diameter, with helical symmetry.
Genome
Viruses of the family Orthomyxoviridae contain six to eight segments of linear negative-sense single stranded RNA. They have a total genome length that is 10,000–14,600 nucleotides (nt). The influenza A genome, for instance, has eight pieces of segmented negative-sense RNA (13.5 kilobases total).
The best-characterised of the influenzavirus proteins are hemagglutinin and neuraminidase, two large glycoproteins found on the outside of the viral particles. Hemagglutinin is a lectin that mediates binding of the virus to target cells and entry of the viral genome into the target cell. In contrast, neuraminidase is an enzyme involved in the release of progeny virus from infected cells, by cleaving sugars that bind the mature viral particles. The hemagglutinin (H) and neuraminidase (N) proteins are key targets for antibodies and antiviral drugs, and they are used to classify the different serotypes of influenza A viruses, hence the H and N in H5N1.
The genome sequence has terminal repeated sequences, repeated at both ends. The terminal repeats at the 5′-end are 12–13 nucleotides long. The nucleotide sequences at the 3′-terminus are identical, the same across genera of the same family, and are found on most or all RNA segments. The terminal repeats at the 3′-end are 9–11 nucleotides long. The encapsidated nucleic acid is solely genomic. Each virion may contain defective interfering copies. In Influenza A (H1N1), PB1-F2 is produced from an alternative reading frame in PB1. The M and NS genes each produce two different proteins via alternative splicing.
Replication cycle
Typically, influenza is transmitted from infected mammals through the air by coughs or sneezes, creating aerosols containing the virus, and from infected birds through their droppings. Influenza can also be transmitted by saliva, nasal secretions, feces and blood. Infections occur through contact with these bodily fluids or with contaminated surfaces. Out of a host, flu viruses can remain infectious for about one week at human body temperature, over 30 days at , and indefinitely at very low temperatures (such as lakes in northeast Siberia). They can be inactivated easily by disinfectants and detergents.
The viruses bind to a cell through interactions between its hemagglutinin glycoprotein and sialic acid sugars on the surfaces of epithelial cells in the lung and throat (Stage 1 in infection figure). The cell imports the virus by endocytosis. In the acidic endosome, part of the hemagglutinin protein fuses the viral envelope with the vacuole's membrane, releasing the viral RNA (vRNA) molecules, accessory proteins and RNA-dependent RNA polymerase into the cytoplasm (Stage 2). These proteins and vRNA form a complex that is transported into the cell nucleus, where the RNA-dependent RNA polymerase begins transcribing complementary positive-sense cRNA (Steps 3a and b). The cRNA is either exported into the cytoplasm and translated (step 4), or remains in the nucleus. Newly synthesised viral proteins are either secreted through the Golgi apparatus onto the cell surface (in the case of neuraminidase and hemagglutinin, step 5b) or transported back into the nucleus to bind vRNA and form new viral genome particles (step 5a). Other viral proteins have multiple actions in the host cell, including degrading cellular mRNA and using the released nucleotides for vRNA synthesis and also inhibiting translation of host-cell mRNAs.
Negative-sense vRNAs that form the genomes of future viruses, RNA-dependent RNA transcriptase, and other viral proteins are assembled into a virion. Hemagglutinin and neuraminidase molecules cluster into a bulge in the cell membrane. The vRNA and viral core proteins leave the nucleus and enter this membrane protrusion (step 6). The mature virus buds off from the cell in a sphere of host phospholipid membrane, acquiring hemagglutinin and neuraminidase with this membrane coat (step 7). As before, the viruses adhere to the cell through hemagglutinin; the mature viruses detach once their neuraminidase has cleaved sialic acid residues from the host cell. After the release of new influenza virus, the host cell dies.
Orthomyxoviridae viruses are one of two families of RNA viruses that replicate in the nucleus (the other being Retroviridae). This is because the machinery of orthomyxoviruses cannot make their own mRNAs. They use cellular RNAs as primers for initiating the viral mRNA synthesis in a process known as cap snatching. Once in the nucleus, the RNA polymerase protein PB2 finds a cellular pre-mRNA and binds to its 5′ capped end. Then RNA polymerase PA cleaves off the cellular mRNA near the 5′ end and uses this capped fragment as a primer for transcribing the rest of the viral RNA genome into viral mRNA. This is due to the need of mRNA to have a 5′ cap in order to be recognized by the cell's ribosome for translation.
Since RNA proofreading enzymes are absent, the RNA-dependent RNA transcriptase makes a single nucleotide insertion error roughly every 10 thousand nucleotides, which is the approximate length of the influenza vRNA. Hence, nearly every newly manufactured influenza virus will contain a mutation in its genome. The separation of the genome into eight separate segments of vRNA allows mixing (reassortment) of the genes if more than one variety of influenza virus has infected the same cell (superinfection). The resulting alteration in the genome segments packaged into viral progeny confers new behavior, sometimes the ability to infect new host species or to overcome protective immunity of host populations to its old genome (in which case it is called an antigenic shift).
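The arithmetic behind the claim that nearly every new virion carries a mutation can be sketched as follows (an illustration using the approximate figures quoted above, not a calculation from the source):

```python
# Expected copying errors per replicated genome, using the figures quoted
# above: roughly 1 error per 10,000 nucleotides copied and a genome of
# roughly 13,500 nucleotides for influenza A.
error_rate = 1 / 10_000      # errors per nucleotide copied (approximate)
genome_length = 13_500       # nucleotides (approximate)

expected_errors = error_rate * genome_length
print(f"Expected errors per newly copied genome: {expected_errors:.2f}")  # ~1.35
```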
Classification
In a phylogenetic-based taxonomy, the category RNA virus includes the subcategory negative-sense ssRNA virus, which includes the order Articulavirales, and the family Orthomyxoviridae. The genera-associated species and serotypes of Orthomyxoviridae are shown in the following table.
Types
There are four genera of influenza virus, each containing only a single species, or type. Influenza A and C infect a variety of species (including humans), while influenza B almost exclusively infects humans, and influenza D infects cattle and pigs.
Influenza A
Influenza A viruses are further classified, based on the viral surface proteins hemagglutinin (HA or H) and neuraminidase (NA or N). 18 HA subtypes (or serotypes) and 11 NA subtypes of influenza A virus have been isolated in nature. Among these, the HA subtype 1-16 and NA subtype 1-9 are found in wild waterfowl and shorebirds and the HA subtypes 17-18 and NA subtypes 10-11 have only been isolated from bats.
Further variation exists; thus, specific influenza strain isolates are identified by the Influenza virus nomenclature, specifying virus type, host species (if not human), geographical location where first isolated, laboratory reference, year of isolation, and HA and NA subtype.
Examples of the nomenclature are:
- isolated from a human
- isolated from a pig
The type A influenza viruses are the most virulent human pathogens among the influenza types and cause the most severe disease. It is thought that all influenza A viruses causing outbreaks or pandemics originate from wild aquatic birds. All influenza A virus pandemics since the 1900s were caused by avian influenza, through reassortment with other influenza strains, either those that affect humans (seasonal flu) or those affecting other animals (see 2009 swine flu pandemic). The serotypes that have been confirmed in humans, ordered by the number of confirmed human deaths, are:
H1N1 caused "Spanish flu" in 1918 and "Swine flu" in 2009.
H2N2 caused "Asian Flu".
H3N2 caused "Hong Kong Flu".
H5N1, "avian" or "bird flu".
H7N7 has unusual zoonotic potential.
H1N2 infects pigs and humans.
H9N2, H7N2, H7N3, H10N7.
Influenza B
Influenza B virus is almost exclusively a human pathogen, and is less common than influenza A. The only other animal known to be susceptible to influenza B infection is the seal. This type of influenza mutates at a rate 2–3 times lower than type A and consequently is less genetically diverse, with only one influenza B serotype. As a result of this lack of antigenic diversity, a degree of immunity to influenza B is usually acquired at an early age. However, influenza B mutates enough that lasting immunity is not possible. This reduced rate of antigenic change, combined with its limited host range (inhibiting cross species antigenic shift), ensures that pandemics of influenza B do not occur.
Influenza C
The influenza C virus infects humans and pigs, and can cause severe illness and local epidemics. However, influenza C is less common than the other types and usually causes mild disease in children.
Influenza D
This is a genus that was classified in 2016, the members of which were first isolated in 2011. This genus appears to be most closely related to Influenza C, from which it diverged several hundred years ago. There are at least two extant strains of this genus. The main hosts appear to be cattle, but the virus has been known to infect pigs as well.
Viability and disinfection
Mammalian influenza viruses tend to be labile, but can survive several hours in mucus. Avian influenza virus can survive for 100 days in distilled water at room temperature, and 200 days at . The avian virus is inactivated more quickly in manure, but can survive for up to two weeks in feces on cages. Avian influenza viruses can survive indefinitely when frozen. Influenza viruses are susceptible to bleach, 70% ethanol, aldehydes, oxidizing agents, and quaternary ammonium compounds. They are inactivated by heating for a minimum of 60 minutes, as well as by low pH (<2).
Vaccination and prophylaxis
Vaccines and drugs are available for the prophylaxis and treatment of influenza virus infections. Vaccines are composed of either inactivated or live attenuated virions of the H1N1 and H3N2 human influenza A viruses, as well as those of influenza B viruses. Because the antigenicities of the wild viruses evolve, vaccines are reformulated annually by updating the seed strains.
When the antigenicities of the seed strains and wild viruses do not match, vaccines fail to protect the vaccinees. In addition, even when they do match, escape mutants are often generated.
Drugs available for the treatment of influenza include amantadine and rimantadine, which inhibit the uncoating of virions by interfering with the M2 proton channel, and oseltamivir (marketed under the brand name Tamiflu), zanamivir, and peramivir, which inhibit the release of virions from infected cells by interfering with NA. However, escape mutants are often generated for the former class of drugs and less frequently for the latter.
| Biology and health sciences | Specific viruses | Health |
532909 | https://en.wikipedia.org/wiki/Aplysiida | Aplysiida | The order Aplysiida, commonly known as sea hares (Aplysia species and related genera), are medium-sized to very large opisthobranch gastropod molluscs with a soft internal shell made of protein. These are marine gastropod molluscs in the superfamilies Aplysioidea and Akeroidea.
The common name "sea hare" is a direct translation from , as the animal's existence was known in Roman times. The name derives from their rounded shape and from the two long rhinophores that project upward from their heads and that somewhat resemble the ears of a hare.
Taxonomy
Many older textbooks and websites refer to this suborder as "Aplysiida". The original author Paul Henri Fischer described the taxon Aplysiida at unspecified rank above family. In 1925 Johannes Thiele established the taxon Aplysiida as a suborder.
2005 taxonomy
Since the taxon Anaspidea was not based on an existing genus, this name is no longer available according to the rules of the ICZN. Anaspidea has been replaced in the new Taxonomy of the Gastropoda (Bouchet & Rocroi, 2005) by the clade Aplysiomorpha.
The scientific name for the order in which they used to be classified, the Anaspidea, is derived from the Greek for "without a shield" and refers to the lack of the characteristic head shield found in the cephalaspidean opisthobranchs. Many anaspideans have only a thin, internal and much-reduced shell with a small mantle cavity; some have no shell at all. All species have a radula and gizzard plates.
2010 taxonomy
Jörger et al. (2010) have moved this taxon (named as Aplysiida) to Euopisthobranchia.
2017 taxonomy
The name "Aplysiomorpha" was preferred by Bouchet and Rocroi (2005) over "Aplysiida Fischer", 1883, but the authors now agree that there is a consistent usage for Aplysiida in the recent literature and that the older name must be preferred.
Description
Sea hares are mostly rather large, bulky creatures when adults. Juveniles are mainly unobserved on the shoreline. The biggest species, Aplysia vaccaria, can reach a length of and a weight of and is arguably the largest gastropod species.
Sea hares have soft bodies with an internal shell, and like all opisthobranch molluscs, they are hermaphroditic. Unlike many other gastropods, they are more or less bilaterally symmetrical in their external appearance. The foot has lateral projections, or "parapodia".
Life habits
Sea hares are herbivorous, and are typically found on seaweed in shallow water. Some young sea hares seemingly are capable of burrowing in soft sediment, leaving only their rhinophores and mantle opening showing. Sea hares have an extremely good sense of smell. They can follow even the faintest scent using their rhinophores, which are extremely sensitive chemoreceptors.
Their color corresponds with the color of the seaweed they eat: red sea hares have been feeding on red seaweed. This camouflages them from predators. When disturbed, a sea hare can release ink from its ink glands, providing a fluid, smoke-like toxic screen, adversely affecting its predators' olfactory senses while acting as a powerful deterrent. The toxic ink may be white, purple, or red, depending on the pigments in their seaweed food source and lightens in color as it spreads, diluted by seawater. Their skin contains a similar toxin that renders sea hares largely inedible to many predators. In addition to the colored ink, sea hares can secrete a clear slime akin to that released defensively by hagfish which physically plugs the olfactory receptors of predators like lobsters.
Some sea hares can employ jet propulsion as a locomotion and others move like stingrays but with greater fluttering fluidity in their jelly-like "wings". In the moving marine environment and without the sophisticated cognitive machinery of the cephalopods, their motion appears to be somewhat erratic, but they do reach their goals, such as the seabed, according to the wave-action, currents, or calmness of their area.
Human use
Sea hares are consumed in several parts of the world. An example may be "酱爆海兔" (jiàng bào hǎi tù), lit. "sauce-fried sea hare", a Chinese dish featuring sea hare and occasionally squid quickly fried in a sauce.
In Hawaii, sea hares, or kualakai, are typically cooked in an imu wrapped in ti leaves.
Aplysia californica is a species of sea hare noteworthy for its use in studies of the neurobiology of learning and memory, due to its unusually large axons. It is especially associated with the work of Nobel Laureate Eric Kandel. Research surrounding the aplysia gill and siphon withdrawal reflex may be of particular interest with respect to this.
Gallery
| Biology and health sciences | Gastropods | Animals |
532941 | https://en.wikipedia.org/wiki/Pinus%20ponderosa | Pinus ponderosa | Pinus ponderosa, commonly known as the ponderosa pine, bull pine, blackjack pine, western yellow-pine, or filipinus pine, is a very large pine tree species of variable habitat native to mountainous regions of western North America. It is the most widely distributed pine species in North America.
Pinus ponderosa grows in various erect forms from British Columbia southward and eastward through 16 western U.S. states and has been introduced in temperate regions of Europe and in New Zealand. It was first documented in modern science in 1826 in eastern Washington near present-day Spokane (of which it is the official city tree). On that occasion, David Douglas misidentified it as Pinus resinosa (red pine). In 1829, Douglas concluded that he had a new pine among his specimens and coined the name Pinus ponderosa for its heavy wood. In 1836, it was formally named and described by Charles Lawson, a Scottish nurseryman. It was adopted as the official state tree of Montana in 1949.
Description
Pinus ponderosa is a large coniferous pine (evergreen) tree. The bark helps distinguish it from other species. Mature to overmature individuals have yellow to orange-red bark in broad to very broad plates with black crevices. Younger trees have blackish-brown bark, referred to as "blackjacks" by early loggers. Ponderosa pine's five subspecies, as classified by some botanists, can be identified by their characteristically bright-green needles (contrasting with blue-green needles that distinguish Jeffrey pine). The Pacific subspecies has the longest and most flexible needles, in plume-like fascicles of three. The Columbia ponderosa pine has long and relatively flexible needles in fascicles of three. The Rocky Mountains subspecies has shorter, stout needles growing in scopulate (bushy, tuft-like) fascicles of two or three. The southwestern subspecies has stout needles in fascicles of three. The central High Plains subspecies is characterized by the fewest needles (1.4 per whorl, on average); stout, upright branches at narrow angles from the trunk; and long green needles extending farthest along the branch, resembling a fox tail. Its needles are the widest, stoutest, and fewest for the species.
The egg-shaped cones, which are often found in great number under trees, are long. They are purple when first chewed off by squirrels, but become more brown and spherical as they dry. Each scale has a sharp point.
Sources differ on the scent of P. ponderosa. Some state that the bark smells of turpentine, which could reflect the dominance of terpenes (alpha- and beta-pinenes, as well as delta-3-carene). Others state that it has no distinctive scent, while still others state that the bark smells like vanilla if sampled from a furrow. Sources agree that the Jeffrey pine is more strongly scented than the ponderosa pine. When carved into, pitch-filled stumps emit a scent of fresh pitch.
Size
The National Register of Big Trees lists a ponderosa pine that is tall and in circumference. In January 2011, a Pacific ponderosa pine in the Rogue River–Siskiyou National Forest in Oregon was measured with a laser to be high. The measurement was performed by Michael Taylor and Mario Vaden, a professional arborist from Oregon. The tree was climbed on October 13, 2011, by Ascending The Giants (a tree-climbing company in Portland, Oregon) and directly measured with tape-line at high. As of 2015, a Pinus lambertiana specimen was measured at , which surpassed the ponderosa pine previously considered the world's tallest pine tree.
Taxonomy
Modern forestry research has identified five different taxa of P. ponderosa, with differing botanical characters and adaptations to different climatic conditions. Four of these have been termed "geographic races" in forestry literature. Some botanists historically treated some races as distinct species. In modern botanical usage, they best match the rank of subspecies and have been formally published.
Subspecies and varieties
Pinus ponderosa subsp. brachyptera Engelm. – southwestern ponderosa pine
Four corners transition zone, including southern Colorado, southern Utah, northern and central New Mexico and Arizona, westernmost Texas, and a single disjunct population in the far northwestern Oklahoma panhandle. The Gila Wilderness contains one of the world's largest and healthiest forests. Hot with bimodal monsoonal rainfall; wet winters and summers contrast with dry springs and falls; mild winters.
Pinus ponderosa subsp. critchfieldiana Robert Z. Callaham subsp. novo – Pacific ponderosa pine
Western coastal parts of Washington State; Oregon west of the Cascade Range except for the southward-extending Umpqua–Tahoe Transition Zone; California except for both that transition zone and the Transverse–Tehachapi Mountains Transition zone in southern California and Critchfield's far Southern California Race. Mediterranean hot, dry summers in California; mild wet winters with heavy snow in mountains.
Pinus ponderosa var. pacifica J.R. Haller & Vivrette – Pacific ponderosa pine
on coastal-draining slopes of major mountain ranges in California, and in southwestern Oregon, Washington.
Pinus ponderosa subsp. ponderosa Douglas ex C. Lawson – Columbia ponderosa pine, North plateau ponderosa pine
Southeast British Columbia, eastern Washington State and Oregon east of the Cascade Range, in northeastern California, northwestern Nevada, Idaho and west of the Helena, Montana, transition zone. Cool, relatively moist summers; very cold, snowy winters (except in the very hot and very dry summers of central Oregon, most notably near Bend, which also has very cold and generally dry winters).
Pinus ponderosa subsp. readiana Robert Z. Callaham subsp. novo – central High Plains ponderosa pine
Southern South Dakota and adjacent northern Nebraska and far eastern Colorado, but neither the northern and southern High Plains nor the Black Hills, which are in P. p. scopulorum. Hot, dry, very windy summers; continental cold, wet winters.
Pinus ponderosa var. scopulorum (Engelm. in S.Watson) E. Murray, Kalmia 12:23, 1982 – Rocky Mountains ponderosa pine
East of the Helena, Montana, transition zone, North & South Dakota, but not the central high plains, Wyoming, Nebraska, northern and central Colorado and Utah, and eastern Nevada. Warm, relatively dry summers; very cold, fairly dry winters.
Pinus ponderosa var. washoensis (H. Mason & Stockw.) J.R. Haller & Vivrette – Washoe pine
Predominantly in northeastern California, and into Nevada and Oregon, at , upper mixed-conifer to lower subalpine habitats.
Distributions of the subspecies in the United States are shown in shadow on the map. Distribution of ponderosa pine is from Critchfield and Little. The closely related five-needled Arizona pine (Pinus arizonica) extends southward into Mexico.
Before the distinctions between the North Plateau and Pacific races were fully documented, most botanists assumed that ponderosa pines in both areas were the same. In 1948, when a botanist and a geneticist from California found a distinct tree on Mt. Rose in western Nevada with some marked differences from the ponderosa pine they knew in California, they described it as a new species, Washoe pine Pinus washoensis. Subsequent research determined this to be one of the southernmost outliers of the typical North Plateau race of ponderosa pine. Its current classification is Pinus ponderosa var. washoensis.
An additional variety, tentatively named P. p. var. willamettensis, found in the Willamette Valley in western Oregon, is rare. This is likely just one of the many islands of Pacific subspecies of ponderosa pine occurring in the Willamette Valley and extending north to the southeast end of Puget Sound in Washington.
Distinguishing subspecies
The subspecies of P. ponderosa can be distinguished by measurements along several dimensions:
| Biology and health sciences | Pinaceae | Plants |
533072 | https://en.wikipedia.org/wiki/Staurolite | Staurolite | Staurolite is a reddish brown to black, mostly opaque, nesosilicate mineral with a white streak. It crystallizes in the monoclinic crystal system, has a Mohs hardness of 7 to 7.5 and the chemical formula: Fe2+2Al9O6(SiO4)4(O,OH)2. Magnesium, zinc and manganese substitute in the iron site and trivalent iron can substitute for aluminium.
Properties
Staurolite often occurs twinned in a characteristic cross-shape, called cruciform penetration twinning. In hand samples, macroscopically visible staurolite crystals are of prismatic shape. The mineral often forms porphyroblasts.
In thin sections staurolite is commonly twinned and shows lower first-order birefringence similar to quartz, with the twinning displaying optical continuity. It can be identified in metamorphic rocks by its Swiss cheese appearance (with poikilitic quartz) and often mantled porphyroblastic character.
Name
The name is derived from the Greek, stauros for cross and lithos for stone in reference to the common twinning.
Occurrence
Staurolite is a regional metamorphic mineral of intermediate to high grade. It occurs with almandine garnet, micas, kyanite; as well as albite, biotite, and sillimanite in gneiss and schist of regional metamorphic rocks.
It is the official state mineral of the U.S. state of Georgia and is also to be found in the Lepontine Alps in Switzerland.
Staurolite is most commonly found in Fannin County, Georgia. It is also found in Fairy Stone State Park in Patrick County, Virginia. The park is named for a local name for staurolite from a legend in the area. Samples are also found in Island Park, Idaho, near Henrys Lake; Taos, New Mexico; near Blanchard Dam in Minnesota; and Selbu, Norway.
Use
Staurolite is one of the index minerals that are used to estimate the temperature, depth, and pressure at which a rock undergoes metamorphism.
| Physical sciences | Silicate minerals | Earth science |
533074 | https://en.wikipedia.org/wiki/Guanaco | Guanaco | The guanaco ( ; Lama guanicoe) is a camelid native to South America, closely related to the llama. Guanacos are one of two wild South American camelids; the other species is the vicuña, which lives at higher elevations.
Etymology
The guanaco gets its name from the Quechua word wanaku. Young guanacos are called chulengos or "guanaquitos".
Characteristics
Guanacos stand between at the shoulder, body length of , and weigh . Their color varies very little (unlike the domestic llama), ranging from a light brown to dark cinnamon and shading to white underneath. Guanacos have grey faces and small, straight ears. The lifespan of a guanaco can be as long as 28 years.
Guanacos are one of the largest terrestrial mammals native to South America today. Other terrestrial mammalian megafauna weighing as much or more than the guanaco include the tapirs, the marsh deer, the white-tailed deer, the spectacled bear, and the jaguar.
Guanacos have thick skin on their necks, a trait also found in their domestic counterparts, the llama, and their relatives, the wild vicuña and domesticated alpaca. This protects their necks from predator attacks. Bolivians use the neck skin of these animals to make shoes, flattening and pounding the skin to be used for the soles. In Chile, hunting is allowed only in Tierra del Fuego, where the only population not classified as endangered in the country resides. Between 2007 and 2012, 13,200 guanacos were legally hunted in Tierra del Fuego.
Diet
Like all camelids, guanacos are herbivores, grazing on grasses, shrubs, herbs, lichens, fungi, cacti, and flowers. The food is swallowed with little chewing and first enters the forestomach, to be digested finally after rumination. This process is similar to that of ruminants, to which camels are not zoologically related. The camels' digestive system is likely to have developed independently of ruminants, which is evidenced by the fact that the forestomachs are equipped with glands.
Blood
Guanacos are often found at altitudes up to above sea level, except in Patagonia, where the southerly latitude means ice covers the vegetation at these altitudes. Their blood is rich in red blood cells, enabling them to survive in the low oxygen levels found at these high altitudes. A teaspoon of guanaco blood contains about 68 million red blood cells, four times that of a human.
Guanaco fiber
Guanaco fiber is particularly prized for its soft, warm feel and is found in luxury fabric. In South America, the guanaco's soft wool is valued second only to that of vicuña wool. The pelts, particularly from the calves, are sometimes used as a substitute for red fox pelts, because the texture is difficult to differentiate. Like their domestic descendant, the llama, the guanaco is double-coated with coarse guard hairs and a soft undercoat, the hairs of which are about 16–18 μm in diameter and comparable to cashmere.
Subspecies
Lama guanicoe cacsilensis from the north
Lama guanicoe guanicoe from the south
Population and distribution
Guanacos inhabit the steppes, scrublands and mountainous regions of South America. They are found in the altiplano of Peru, Bolivia and Chile, and in Patagonia, with a small population in Paraguay. In Argentina they are more numerous in Patagonian regions, as well as in places such as Isla Grande de Tierra del Fuego. In these areas, they have more robust populations, since grazing competition from livestock is limited. Guanaco respond to forage availability, occupying zones with low to intermediate food availability in the breeding season and those with the highest availability in the non-breeding season.
Estimates, as of 2016, place their numbers around 1.5 to 2 million animals: 1,225,000–1,890,000 in Argentina, 270,000–299,000 in Chile, 3,000 in Peru, 150–200 in Bolivia and 20–100 in Paraguay. This is only 3–7% of the guanaco population before the arrival of the Spanish conquistadors in South America. A small population introduced by John Hamilton exists on Staats Island in the Falkland Islands (Malvinas), with a population of around 400 as of 2003. In Torres del Paine National Park, the numbers of guanacos increased from 175 in 1975 to 3,000 in 1993.
Guanacos live in herds composed of females, their young, and a dominant male. Bachelor males form separate herds. While reproductive groups tend to remain small, often containing no more than 10 adults, bachelor herds may contain as many as 50 males. They can run at per hour, often over steep and rocky terrain. They are also excellent swimmers. A guanaco's typical lifespan is 20 to 25 years.
In Bolivia, the habitat of guanacos is found to be threatened by woody plant encroachment.
Atacama Desert
Some guanacos live in the Atacama Desert, where in some areas it has not rained for over 50 years. A mountainous coastline running parallel to the desert enables them to survive in what are called "fog oases" or lomas. Where the cool water touches the hotter land, the air above the desert is cooled, creating a fog and thus water vapor. Winds carry the fog across the desert, where cacti catch the water droplets and lichens that cling to the cacti soak it in like a sponge. Guanacos then eat the cactus flowers and the lichens.
Ecology
The guanaco is a diurnal animal. It lives in small herds consisting of one male and several females with their young. When the male detects danger, he warns the group by bleating. The guanaco can run up to . This speed is important for the survival of guanacos because they cannot easily hide in the open grasslands of the Altiplano.
Natural predators of the guanaco include pumas and the culpeo or Andean fox. Fox predation was unknown until 2007 when predators began to be observed in the Karukinka Reserve in Tierra del Fuego. Scientists attribute this to the unfavourable climatic conditions on the island, which are causing food to become scarce, weakening the animals. The absence of pumas on Tierra del Fuego is also believed to be a factor that allows the fox to occupy their ecological niche. Finally, it is believed that this behaviour is not new, as the fox is nocturnal, which makes any predation challenging to observe. Faced with the threat of the fox, guanacos resort to cooperative strategies to protect their young with a shield formation, a circle around the vulnerable. If they are successful, they chase the fox away, which would be impossible with a puma.
When threatened, the guanaco alerts the rest of the herd with a high-pitched bleating sound, which sounds similar to a short, sharp laugh. The male usually runs behind the herd to defend them. Though typically mild-mannered, guanacos often spit when threatened, and can do so up to a distance of six feet.
Mating season
Mating season occurs between November and February, during which males often fight violently to establish dominance and breeding rights. Eleven-and-a-half months later, a single chulengo is born. Chulengos are able to walk immediately after birth. Male chulengos are chased off from the herd by the dominant male at around one year old.
Conservation
Although the guanaco is not considered an endangered species in southern Argentina and Chile, dead guanacos are a common sight throughout this region, where they become entangled in fences. Studies have found that annual yearling mortality on fences (5.53%) was higher than adult mortality (0.84%) and was more frequent on ovine (93 cm high) than bovine (113 cm) fences. Most guanacos died entangled by their legs in the highest wire when trying to jump over the fence.
Captivity and domestication
Around 300 guanacos are in U.S. zoos, and around 200 are registered in private herds. Guanacos have long been thought to be the parent species of the domesticated llama, which was confirmed via molecular phylogenetic analysis in 2001, although the analysis also found that domestic llamas had experienced considerable cross-hybridization with alpacas, which are descended from the wild vicuña.
The guanaco was independently domesticated by the Mapuche of Mocha Island in southern Chile, producing the chilihueque, which was bred for its wool and to pull the plough. This animal disappeared in the 17th century when it was replaced by Old World sheep and draft animals.
| Biology and health sciences | Artiodactyla | null |
533150 | https://en.wikipedia.org/wiki/Short%20ton | Short ton | The short ton (abbreviation tn) is a measurement unit equal to 2,000 pounds (907.18 kg). It is commonly used in the United States, where it is known simply as a ton; however, the term is ambiguous, the single word "ton" being variously used for short, long, and metric tons.
The various tons are defined as units of mass. They are sometimes used as units of weight, the force exerted by a mass at standard gravity (e.g., short ton-force). One short ton exerts a weight at one standard gravity of 2,000 pound-force (lbf).
United States
In the United States, a short ton is usually known simply as a "ton", without distinguishing it from the tonne (1,000 kg), known there as the "metric ton", or the long ton (2,240 lb), also known as the "imperial ton". There are, however, some U.S. applications where unspecified tons normally mean long tons (for example, naval ships) or metric tons (world grain production figures).
Both the long and short ton are defined as 20 hundredweights, but a hundredweight is 100 lb in the US system (short or net hundredweight) and 112 lb in the imperial system (long or gross hundredweight).
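A small sketch of these relationships in code (the constants are the standard definitions; the function names are illustrative):

```python
POUND_KG = 0.45359237            # kilograms per avoirdupois pound (exact definition)

SHORT_TON_LB = 20 * 100          # 20 short hundredweights of 100 lb = 2,000 lb
LONG_TON_LB = 20 * 112           # 20 long hundredweights of 112 lb = 2,240 lb

def pounds_to_kilograms(pounds):
    return pounds * POUND_KG

print(pounds_to_kilograms(SHORT_TON_LB))   # 907.18474 kg in a short ton
print(pounds_to_kilograms(LONG_TON_LB))    # 1016.0469088 kg in a long ton
print(1000 / POUND_KG)                     # ~2204.62 lb in a metric ton (tonne)
```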
A short ton-force is 2,000 pounds-force (lbf), about 8.9 kN.
| Physical sciences | Mass and weight | Basics and measurement |
150243 | https://en.wikipedia.org/wiki/Watermill | Watermill | A watermill or water mill is a mill that uses hydropower. It is a structure that uses a water wheel or water turbine to drive a mechanical process such as milling (grinding), rolling, or hammering. Such processes are needed in the production of many material goods, including flour, lumber, paper, textiles, and many metal products. These watermills may comprise gristmills, sawmills, paper mills, textile mills, hammermills, trip hammering mills, rolling mills, and wire drawing mills.
One major way to classify watermills is by wheel orientation (vertical or horizontal), one powered by a vertical waterwheel through a gear mechanism, and the other equipped with a horizontal waterwheel without such a mechanism. The former type can be further subdivided, depending on where the water hits the wheel paddles, into undershot, overshot, breastshot and pitchback (backshot or reverse shot) waterwheel mills. Another way to classify water mills is by an essential trait about their location: tide mills use the movement of the tide; ship mills are water mills onboard (and constituting) a ship.
Watermills affect the river dynamics of the watercourses where they are installed. While watermills operate, channels tend to accumulate sediment, particularly in the backwater. In the backwater area, inundation events and sedimentation of adjacent floodplains also increase. Over time, however, these effects are cancelled out as river banks become higher. Where mills have been removed, river incision increases and channels deepen.
History
There are two basic types of watermills, one powered by a vertical-waterwheel via a gear mechanism, and the other equipped with a horizontal-waterwheel without such a mechanism. The former type can be further divided, depending on where the water hits the wheel paddles, into undershot, overshot, breastshot and reverse shot waterwheel mills.
Western world
Classical antiquity
The Greeks invented the two main components of watermills, the waterwheel and toothed gearing, and used, along with the Romans, undershot, overshot and breastshot waterwheel mills.
The earliest evidence of a water-driven wheel appears in the technical treatises Pneumatica and Parasceuastica of the Greek engineer Philo of Byzantium (ca. 280−220 BC). The British historian of technology M.J.T. Lewis has shown that those portions of Philo of Byzantium's mechanical treatise which describe water wheels and which have been previously regarded as later Arabic interpolations, actually date back to the Greek 3rd century BC original. The sakia gear is, already fully developed, for the first time attested in a 2nd-century BC Hellenistic wall painting in Ptolemaic Egypt.
Lewis assigns the date of the invention of the horizontal-wheeled mill to the Greek colony of Byzantium in the first half of the 3rd century BC, and that of the vertical-wheeled mill to Ptolemaic Alexandria around 240 BC.
The Greek geographer Strabo reports in his Geography a water-powered grain-mill to have existed near the palace of king Mithradates VI Eupator at Cabira, Asia Minor, before 71 BC.
The Roman engineer Vitruvius has the first technical description of a watermill, dated to 40/10 BC; the device is fitted with an undershot wheel and power is transmitted via a gearing mechanism. He also seems to indicate the existence of water-powered kneading machines.
The Greek epigrammatist Antipater of Thessalonica tells of an advanced overshot wheel mill around 20 BC/10 AD. He praised it for its use in grinding grain and the reduction of human labour:
The Roman encyclopedist Pliny mentions in his Naturalis Historia of around 70 AD water-powered trip hammers operating in the greater part of Italy. There is evidence of a fulling mill in 73/74 AD in Antioch, Roman Syria.
The 2nd century AD multiple mill complex of Barbegal in southern France has been described as "the greatest known concentration of mechanical power in the ancient world". It featured 16 overshot waterwheels to power an equal number of flour mills. The capacity of the mills has been estimated at 4.5 tons of flour per day, sufficient to supply enough bread for the 12,500 inhabitants occupying the town of Arelate at that time. A similar mill complex existed on the Janiculum hill, whose supply of flour for Rome's population was judged by emperor Aurelian important enough to be included in the Aurelian walls in the late 3rd century.
A breastshot wheel mill dating to the late 2nd century AD was excavated at Les Martres-de-Veyre, France.
The 3rd century AD Hierapolis water-powered stone sawmill is the earliest known machine to incorporate the mechanism of a crank and connecting rod. Further sawmills, also powered by crank and connecting rod mechanisms, are archaeologically attested for the 6th century AD water-powered stone sawmills at Gerasa and Ephesus. Literary references to water-powered marble saws in what is now Germany can be found in Ausonius 4th century AD poem Mosella. They also seem to be indicated about the same time by the Christian saint Gregory of Nyssa from Anatolia, demonstrating a diversified use of water-power in many parts of the Roman Empire.
The earliest turbine mill was found in Chemtou and Testour, Roman North Africa, dating to the late 3rd or early 4th century AD. A possible water-powered furnace has been identified at Marseille, France.
Mills were commonly used for grinding grain into flour (attested by Pliny the Elder), but industrial uses as fulling and sawing marble were also applied.
The Romans used both fixed and floating water wheels and introduced water power to other provinces of the Roman Empire. So-called 'Greek Mills' used water wheels with a horizontal wheel (and vertical shaft). A "Roman Mill" features a vertical wheel (on a horizontal shaft). Greek style mills are the older and simpler of the two designs, but only operate well with high water velocities and with small diameter millstones. Roman style mills are more complicated as they require gears to transmit the power from a shaft with a horizontal axis to one with a vertical axis.
Although to date only a few dozen Roman mills are archaeologically traced, the widespread use of aqueducts in the period suggests that many remain to be discovered. Recent excavations in Roman London, for example, have uncovered what appears to be a tide mill together with a possible sequence of mills worked by an aqueduct running along the side of the River Fleet.
In 537 AD, the East Roman general Belisarius resorted to the ingenious use of ship mills when the besieging Goths cut off the water supply that powered the city's conventional mills. These floating mills had a wheel attached to a boat moored in a fast-flowing river.
Middle Ages
The surviving evidence for watermills increases sharply with the emergence of documentary genres such as monastic charters, Christian hagiography and Germanic legal codes, which were more inclined to address watermilling, a mostly rural work process, than the ancient urban-centered literary class had been. By Carolingian times, references to watermills had become "innumerable" in Frankish records. The Domesday Book, compiled in 1086, records 5,624 watermills in England alone. Later research gives a less conservative estimate of 6,082, which should be considered a minimum, as the northern reaches of England were never properly recorded. By 1300, this number had risen to between 10,000 and 15,000. By the early 7th century, watermills were also well established in Ireland, and a century later they began to spread from there across the former Roman Rhine and Danube frontier into other parts of Germany. Ship mills and tide mills, neither of which is attested for the ancient period, were introduced in the 6th century.
Tide mills
In recent years, a number of new archaeological finds have successively pushed back the date of the earliest tide mills, all of which were discovered on the Irish coast: a 6th-century vertical-wheeled tide mill was located at Killoteran near Waterford. A twin-flume horizontal-wheeled tide mill dating to c. 630 was excavated on Little Island. Alongside it, another tide mill was found which was powered by a vertical undershot wheel. The Nendrum Monastery mill from 787 was situated on an island in Strangford Lough in Northern Ireland. Its millstones are 830 mm in diameter, and the peak power output of its horizontal wheel has been estimated. Remains of an earlier mill dated to 619 were also found at the site.
Survey of industrial mills
In a 2005 survey, the scholar Adam Lucas identified the first appearances of various industrial mill types in Western Europe. Notable is the preeminent role of France in the introduction of innovative new uses of waterpower, although Lucas has also drawn attention to the dearth of studies of the subject in several other countries.
Ancient East Asia
The waterwheel was found in China from 30 AD onwards, when it was used to power trip hammers, the bellows in smelting iron, and in one case, to mechanically rotate an armillary sphere for astronomical observation (see Zhang Heng). Although the British chemist and sinologist Joseph Needham speculates that the water-powered millstone could have existed in Han China by the 1st century AD, sufficient literary evidence for it does not appear until the 5th century AD. In 488 AD, the mathematician and engineer Zu Chongzhi had a watermill erected which was inspected by Emperor Wu of Southern Qi (r. 482–493 AD). The engineer Yang Su of the Sui dynasty (581–618 AD) was said to operate hundreds of them by the end of the 6th century. A source written in 612 AD mentions Buddhist monks arguing over the revenues gained from watermills. The Tang dynasty (618–907 AD) 'Ordinances of the Department of Waterways', written in 737 AD, stated that watermills should not interrupt riverine transport and in some cases were restricted to use in certain seasons of the year. From other Tang-era sources of the 8th century, it is known that these ordinances were taken very seriously, as the government demolished many watermills owned by great families, merchants, and Buddhist abbeys that failed to acknowledge the ordinances or meet government regulations. A eunuch serving Emperor Xuanzong of Tang (r. 712–756 AD) owned a watermill by 748 AD which employed five waterwheels that ground 300 bushels of wheat a day. By 610 or 670 AD, the watermill was introduced to Japan via the Korean Peninsula. It also became known in Tibet by at least 641 AD.
Ancient India
According to Greek historical tradition, India received water-mills from the Roman Empire in the early 4th century AD when a certain Metrodoros introduced "water-mills and baths, unknown among them [the Brahmans] till then".
Arabic world
Engineers under the Caliphates adopted watermill technology from former provinces of the Byzantine Empire, having been applied for centuries in those provinces prior to the Muslim conquests, including modern-day Syria, Jordan, Israel, Algeria, Tunisia, Morocco, and Spain (see List of ancient watermills).
The industrial uses of watermills in the Islamic world date back to the 7th century, and horizontal-wheeled and vertical-wheeled watermills were both in widespread use by the 9th century. A variety of industrial watermills were used in the Islamic world, including gristmills, hullers, sawmills, ship mills, stamp mills, steel mills, sugar mills, and tide mills. By the 11th century, every province throughout the Islamic world had these industrial watermills in operation, from al-Andalus and North Africa to the Middle East and Central Asia. Muslim and Middle Eastern Christian engineers also used crankshafts and water turbines, employed gears in watermills and water-raising machines, and built dams to provide additional power to watermills and water-raising machines. Fulling mills and steel mills may have spread from al-Andalus to Christian Spain in the 12th century. Industrial watermills were also employed in large factory complexes built in al-Andalus between the 11th and 13th centuries.
The engineers of the Islamic world used several solutions to achieve the maximum output from a watermill. One solution was to mount them to piers of bridges to take advantage of the increased flow. Another solution was the ship mill, a type of watermill powered by water wheels mounted on the sides of ships moored in midstream. This technique was employed along the Tigris and Euphrates rivers in 10th-century Iraq, where large ship mills made of teak and iron could produce 10 tons of flour from corn every day for the granary in Baghdad.
Persia
More than 300 watermills were still at work in Iran as late as 1960; today only a few remain in operation. Among the best known are the watermill of Askzar and the watermill of the city of Yazd, which still produce flour.
Operation
Typically, water is diverted from a river or impoundment or mill pond to a turbine or water wheel, along a channel or pipe (variously known as a flume, head race, mill race, leat, leet, lade (Scots) or penstock). The force of the water's movement drives the blades of a wheel or turbine, which in turn rotates an axle that drives the mill's other machinery. Water leaving the wheel or turbine is drained through a tail race, but this channel may also be the head race of yet another wheel, turbine or mill. The passage of water is controlled by sluice gates that allow maintenance and some measure of flood control; large mill complexes may have dozens of sluices controlling complicated interconnected races that feed multiple buildings and industrial processes.
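As a rough illustration of the energy involved, the shaft power a wheel or turbine can extract from such a head race is commonly estimated from the flow rate, the height of the fall and an overall efficiency. The sketch below uses the standard hydropower relation; the flow, head and efficiency figures are assumed example values, not data from this article.

```python
# Illustrative sketch (assumed figures): rough shaft power available to a mill,
# using the standard hydropower relation P = efficiency * rho * g * Q * H.

RHO_WATER = 1000.0   # kg/m^3, density of fresh water
G = 9.81             # m/s^2, gravitational acceleration

def mill_power_watts(flow_m3_per_s: float, head_m: float, efficiency: float) -> float:
    """Mechanical power delivered to the wheel or turbine shaft."""
    return efficiency * RHO_WATER * G * flow_m3_per_s * head_m

# Example: 0.5 m^3/s falling through 3 m, with the wheel capturing ~60% of it.
print(f"{mill_power_watts(0.5, 3.0, 0.60) / 1000:.1f} kW")  # ~8.8 kW
```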
Watermills can be divided into two kinds: one with a horizontal water wheel on a vertical axle, and the other with a vertical wheel on a horizontal axle. The oldest of these were horizontal mills, in which the force of the water, striking a simple paddle wheel set horizontally in line with the flow, turned a runner stone balanced on the rynd, which sits atop a shaft leading directly up from the wheel. The bedstone does not turn. The problem with this type of mill arose from the lack of gearing: the speed of the water directly set the maximum speed of the runner stone, which in turn set the rate of milling.
Most watermills in Britain and the United States of America had a vertical waterwheel, of one of four kinds: undershot, breastshot, overshot and pitchback wheels. This vertical wheel produced rotary motion around a horizontal axis, which could be used (with cams) to lift hammers in a forge, fulling stocks in a fulling mill, and so on.
Milling corn
However, in corn mills rotation about a vertical axis was required to drive the millstones. The horizontal rotation was converted into vertical rotation by means of gearing, which also enabled the runner stones to turn faster than the waterwheel. The usual arrangement in British and American corn mills has been for the waterwheel to turn a horizontal shaft on which is also mounted a large pit wheel. This meshes with the wallower, mounted on a vertical shaft, which turns the (larger) great spur wheel mounted on the same shaft. This large wheel, set with pegs or teeth, in turn turned a smaller wheel (such as a lantern gear) known as a stone nut, which was attached to the shaft that drove the runner stone. The number of runner stones that could be turned depended directly upon the supply of water available. As waterwheel technology improved, mills became more efficient, and by the 19th century it was common for the great spur wheel to drive several stone nuts, so that a single water wheel could drive as many as four stones. Each step in the gear train increased the gear ratio, which increased the maximum speed of the runner stone. Adjusting the sluice gate, and thus the flow of water past the main wheel, allowed the miller to compensate for seasonal variations in the water supply. Finer speed adjustment was made during the milling process by tentering, that is, adjusting the gap between the stones according to the water flow, the type of grain being milled, and the grade of flour required.
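As an illustration of the gearing described above, the sketch below computes the runner-stone speed from the waterwheel speed and the tooth counts of the pit wheel, wallower, great spur wheel and stone nut. All tooth counts and speeds are assumed example values, not figures from this article.

```python
# A minimal sketch (assumed tooth counts) of the two-stage gear train in a
# typical corn mill: waterwheel -> pit wheel -> wallower -> great spur wheel
# -> stone nut -> runner stone.

def runner_stone_rpm(wheel_rpm: float,
                     pit_wheel_teeth: int, wallower_teeth: int,
                     spur_wheel_teeth: int, stone_nut_teeth: int) -> float:
    # Each meshing pair multiplies speed by (driving teeth / driven teeth).
    ratio = (pit_wheel_teeth / wallower_teeth) * (spur_wheel_teeth / stone_nut_teeth)
    return wheel_rpm * ratio

# Example: a 10 rpm waterwheel, 96-tooth pit wheel, 24-tooth wallower,
# 80-tooth great spur wheel and 20-tooth stone nut.
print(runner_stone_rpm(10, 96, 24, 80, 20))  # 10 * 4 * 4 = 160 rpm
```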
In many mills (including the earliest) the great spur wheel turned only one stone, but there might be several mills under one roof. The earliest illustration of a single waterwheel driving more than one set of stones was drawn by Henry Beighton in 1723 and published in 1744 by J. T. Desaguliers.
Overshot and pitchback mills
The overshot wheel was a later innovation in waterwheels and was around two and a half times more efficient than the undershot. The undershot wheel, in which the main water wheel is simply set into the flow of the mill race, suffers from an inherent inefficiency: the wheel itself, entering the water behind the main thrust of the flow and being lifted out of the water ahead of it, impedes its own operation. The overshot wheel solves this problem by bringing the water flow to the top of the wheel. The water fills buckets built into the wheel, rather than the simple paddle wheel design of undershot wheels. As the buckets fill, the weight of the water starts to turn the wheel. The water spills out of the bucket on the down side into a spillway leading back to the river. Since the wheel itself is set above the spillway, the water never impedes the speed of the wheel. The impulse of the water on the wheel is also harnessed in addition to the weight of the water once in the buckets. Overshot wheels require the construction of a dam on the river above the mill and a more elaborate millpond, sluice gate, mill race and spillway or tailrace.
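A back-of-the-envelope comparison may help here: an undershot wheel can only tap the kinetic energy of the stream, while an overshot wheel is driven chiefly by the weight of water falling through the height of the wheel. The sketch below contrasts the two using assumed example figures (flow, stream velocity, fall and efficiencies are illustrative, not values from this article); the difference in the result reflects both the higher conversion efficiency and the larger head exploited.

```python
# Illustrative comparison (assumed figures) of undershot vs. overshot wheels.

RHO, G = 1000.0, 9.81  # water density (kg/m^3), gravity (m/s^2)

def undershot_power(flow_m3_s: float, velocity_m_s: float, eff: float = 0.3) -> float:
    # Kinetic-energy flux of the stream: 1/2 * rho * Q * v^2
    return eff * 0.5 * RHO * flow_m3_s * velocity_m_s ** 2

def overshot_power(flow_m3_s: float, fall_m: float, eff: float = 0.7) -> float:
    # Potential energy released per second: rho * g * Q * H
    return eff * RHO * G * flow_m3_s * fall_m

print(undershot_power(0.5, 2.0))  # ~300 W from a 2 m/s stream
print(overshot_power(0.5, 3.0))   # ~10,300 W from a 3 m fall
```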
An inherent problem with the overshot mill is that it reverses the rotation of the wheel. If a miller wishes to convert a breastshot mill to an overshot wheel, all the machinery in the mill has to be rebuilt to take account of the change in rotation. An alternative solution was the pitchback or backshot wheel: a launder was placed at the end of the flume on the headrace, which turned the direction of the water without much loss of energy, so the direction of rotation was maintained. Daniels Mill near Bewdley, Worcestershire, is an example of a flour mill that originally used a breastshot wheel but was converted to use a pitchback wheel. Today it operates as a breastshot mill.
Larger water wheels (usually overshot steel wheels) transmit the power from a toothed annular ring that is mounted near the outer edge of the wheel. This drives the machinery using a spur gear mounted on a shaft rather than taking power from the central axle. However, the basic mode of operation remains the same; gravity drives machinery through the motion of flowing water.
Toward the end of the 19th century, the invention of the Pelton wheel encouraged some mill owners to replace over- and undershot wheels with Pelton wheel turbines driven through penstocks.
Tide mills
A different type of watermill is the tide mill. This mill might be of any kind (undershot, overshot or horizontal), but it does not employ a river as its power source. Instead, a mole or causeway is built across the mouth of a small bay. At low tide, gates in the mole are opened, allowing the bay to fill with the incoming tide. At high tide the gates are closed, trapping the water inside. At a certain point a sluice gate in the mole can be opened, allowing the draining water to drive a mill wheel or wheels. This is particularly effective in places where the tidal differential is very great, such as the Bay of Fundy in Canada, where the tides can rise fifty feet, or the now derelict village of Tide Mills, East Sussex. The last two examples in the United Kingdom restored to working condition can be visited at Eling, Hampshire, and at Woodbridge, Suffolk.
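To give a sense of scale, the energy available to a tide mill per tide can be estimated from the area of the impounded pond and the tidal range. The sketch below uses assumed example figures; the pond area and tidal range are illustrative, not values from this article.

```python
# Illustrative sketch (assumed figures): energy stored by trapping one high tide.
# Draining a pond of area A through a head that falls from h to 0 releases
# roughly E = 1/2 * rho * g * A * h^2.

RHO, G = 1025.0, 9.81  # seawater density (kg/m^3), gravity (m/s^2)

def tidal_energy_joules(pond_area_m2: float, tidal_range_m: float) -> float:
    return 0.5 * RHO * G * pond_area_m2 * tidal_range_m ** 2

# Example: a 2-hectare (20,000 m^2) pond with a 3 m tidal range.
energy = tidal_energy_joules(20_000, 3.0)
print(f"{energy / 3.6e6:.0f} kWh per tide")  # ~251 kWh
```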
Run-of-the-river schemes do not divert water at all and usually involve undershot wheels; the mills are mostly on the banks of sizeable rivers or fast-flowing streams. Other watermills were set beneath large bridges, where the flow of water between the stanchions was faster. At one point London Bridge had so many water wheels beneath it that bargemen complained that passage through the bridge was impaired.
Current status
In 1870 watermills still produced two-thirds of the power available for British grain milling. By the early 20th century, the availability of cheap electrical energy had made the watermill obsolete in developed countries, although some smaller rural mills continued to operate commercially into the later decades of the century.
A few historic mills, such as the Water Mill, Newlin Mill and Yates Mill in the US and the Darley Mill Centre in the UK, still operate for demonstration purposes. Small-scale commercial production is carried out in the UK at Daniels Mill, Little Salkeld Mill and Redbournbury Mill; such production was boosted during the Covid-19 pandemic to help overcome flour shortages.
Some old mills are being upgraded with modern hydropower technology, such as those worked on by the South Somerset Hydropower Group in the UK.
In some developing countries, watermills are still widely used for processing grain. For example, there are thought to be 25,000 operating in Nepal, and 200,000 in India. Many of these are still of the traditional style, but some have been upgraded by replacing wooden parts with better-designed metal ones to improve the efficiency. For example, the Centre for Rural Technology in Nepal upgraded 2,400 mills between 2003 and 2007.
Applications
Bark mills ground bark from oak or chestnut trees to produce a coarse powder for use in tanneries.
Blade mills were used for sharpening newly made blades.
Blast furnaces, finery forges, and tinplate works were, until the introduction of the steam engine, almost invariably water powered. Furnaces and forges were sometimes called iron mills.
Bobbin mills made wooden bobbins for the cotton and other textile industries.
Carpet mills for making carpets and rugs were sometimes water-powered.
Cotton mills were driven by water. The power was used to card the raw cotton, and then to drive the spinning mules and ring frames. Steam engines were initially used to increase the water flow to the wheel, then as the Industrial Revolution progressed, to directly drive the shafts.
Fulling or walk mills were used for a finishing process on woollen cloth.
Gristmills, or corn mills, grind grains into flour.
Lead was usually smelted in smeltmills prior to the introduction of the cupola (a reverberatory furnace).
Needle mills for scouring needles during manufacture were mostly water-powered (such as Forge Mill Needle Museum)
Oil mills for crushing oil seeds might be wind or water-powered
Paper mills used water not only for motive power, but also required it in large quantities in the manufacturing process.
Powder mills for making gunpowder (black powder or smokeless powder) were usually water-powered.
Rolling mills shaped metal by passing it between rollers.
Sawmills cut timber into lumber.
Slitting mills were used for slitting bars of iron into rods, which were then made into nails.
Spoke mills turned lumber into spokes for carriage wheels.
Stamp mills for crushing ore, usually from non-ferrous mines
Textile mills for spinning yarn or weaving cloth were sometimes water-powered.
| Technology | Energy and fuel | null |
150261 | https://en.wikipedia.org/wiki/Lark | Lark | Larks are passerine birds of the family Alaudidae. Larks have a cosmopolitan distribution with the largest number of species occurring in Africa. Only a single species, the horned lark, occurs in North America, and only Horsfield's bush lark occurs in Australia. Habitats vary widely, but many species live in dry regions. When the word "lark" is used without specification, it often refers to the Eurasian skylark (Alauda arvensis).
Taxonomy and systematics
The family Alaudidae was introduced in 1825 by the Irish zoologist Nicholas Aylward Vigors as a subfamily Alaudina of the finch family Fringillidae. Larks are a well-defined family, partly because of the shape of their tarsi. They have multiple scutes on the hind side of their tarsi, rather than the single plate found in most songbirds. They also lack a pessulus, the bony central structure in the syrinx of songbirds. They were long placed at or near the beginning of the songbirds or oscines (now often called Passeri), just after the suboscines and before the swallows, for example in the American Ornithologists' Union's first check-list. Some authorities, such as the British Ornithologists' Union and the Handbook of the Birds of the World, adhere to that placement. However, many other classifications follow the Sibley-Ahlquist taxonomy in placing the larks in a large oscine subgroup Passerida (which excludes crows, shrikes and their allies, vireos, and many groups characteristic of Australia and southeastern Asia). For instance, the American Ornithologists' Union places larks just after the crows, shrikes, and vireos. At a finer level of detail, some now place the larks at the beginning of a superfamily Sylvioidea with the swallows, various "Old World warbler" and "babbler" groups, and others. Molecular phylogenetic studies have shown that within the Sylvioidea the larks form a sister clade to the family Panuridae which contains a single species, the bearded reedling (Panurus biarmicus). The phylogeny of larks (Alaudidae) was reviewed in 2013, leading to the recognition of the arrangement below.
The genus level cladogram shown below is based on a molecular phylogenetic study of the larks by Per Alström and collaborators published in 2023. The subfamilies are those proposed by the authors. For two species the results conflict with the taxonomy published online in July 2023 by Frank Gill, Pamela Rasmussen and David Donsker on behalf of the International Ornithological Committee (IOC): the rusty bush lark (Mirafra rufa) and Gillett's lark (Mirafra gilletti) were found to be embedded in the genus Calendulauda. Alström and collaborators proposed that the genus Mirafra should be split into four genera: Mirafra, Plocealauda, Amirafra and Corypha.
Extant genera
The family Alaudidae contains 102 extant species which are divided into 24 genera. For more detail, see the list of lark species.
Extinct genera
Genus Eremarida – (Eremarida xerophila)
Description
Larks, or the family Alaudidae, are small- to medium-sized birds, in length and in mass. The smallest larks are likely the Spizocorys species, which can weigh only around in species like the pink-billed lark and the Obbia lark, while the largest lark is the Tibetan lark.
Like many ground birds, most lark species have long hind claws, which are thought to provide stability while standing. Most have streaked brown plumage, some boldly marked with black or white. Their dull appearance camouflages them on the ground, especially when on the nest. They feed on insects and seeds; though adults of most species eat seeds primarily, all species feed their young insects for at least the first week after hatching. Many species dig with their bills to uncover food. Some larks have heavy bills (reaching an extreme in the thick-billed lark) for cracking seeds open, while others have long, down-curved bills, which are especially suitable for digging.
Larks are the only passerines that lose all their feathers in their first moult (in all species whose first moult is known). This may result from the poor quality of the chicks' feathers, which in turn may result from the benefits to the parents of switching the young to a lower-quality diet (seeds), which requires less work from the parents.
In many respects, including long tertial feathers, larks resemble other ground birds such as pipits. However, in larks the tarsus (the lowest leg bone, connected to the toes) has only one set of scales on the rear surface, which is rounded. Pipits and all other songbirds have two plates of scales on the rear surface, which meet at a protruding rear edge.
Calls and song
Larks have more elaborate calls than most birds, and often extravagant songs given in display flight. These melodious sounds (to human ears), combined with a willingness to expand into anthropogenic habitats—as long as these are not too intensively managed—have ensured larks a prominent place in literature and music, especially the Eurasian skylark in northern Europe and the crested lark and calandra lark in southern Europe.
Behaviour
Breeding
Male larks use song flights to defend their breeding territory and attract a mate. Most species build nests on the ground, usually cups of dead grass, but in some species the nests are more complicated and partly domed. A few desert species nest very low in bushes, perhaps so circulating air can cool the nest. Larks' eggs are usually speckled. The size of the clutch is very variable and ranges from the single egg laid by Sclater's lark up to 6–8 eggs laid by the calandra lark and the black lark. Larks incubate for 11 to 16 days.
In culture
Larks as food
Larks, commonly consumed with bones intact, have historically been considered wholesome, delicate, and light game. They can be used in a number of dishes; for example, they can be stewed, broiled, or used as filling in a meat pie. Lark's tongues are reputed to have been particularly highly valued as a delicacy. In modern times, shrinking habitats made lark meat rare and hard to come by, though it can still be found in restaurants in Italy and elsewhere in southern Europe.
Symbolism
The lark in mythology and literature stands for daybreak, as in Chaucer's "The Knight's Tale", "the bisy larke, messager of day", and Shakespeare's Sonnet 29, "the lark at break of day arising / From sullen earth, sings hymns at heaven's gate" (11–12). The lark is also (often simultaneously) associated with "lovers and lovers' observance" (as in Bernart de Ventadorn's Can vei la lauzeta mover) and with "church services". These meanings of daybreak and religious reference can be combined, as in Blake's Visions of the Daughters of Albion, into a "spiritual daybreak" to signify "passage from Earth to Heaven and from Heaven to Earth". With Renaissance painters such as Domenico Ghirlandaio, the lark symbolizes Christ, with reference to John 16:16.
Literature
Percy Bysshe Shelley's famed 1820 poem "To a Skylark" was inspired by the melodious song of a skylark during an evening walk.
English poet George Meredith wrote a poem titled "The Lark Ascending" in 1881.
In Mervyn Peake's Titus Groan, first book of the Gormenghast trilogy, "Swelter approache[s] [Lord Sepulchrave] with a salver of toasted larks" during the reception following newborn Titus's christening.
Canadian poet John McCrae mentions larks in his poem "In Flanders Fields".
Music
English composer Ralph Vaughan Williams wrote a musical setting of George Meredith's poem, completed in 1914. It was composed for violin and piano, and entitled The Lark Ascending - A Romance. The work received its first performance in December 1920. Soon afterwards the composer arranged it for violin and orchestra, in which version it was first performed in June 1921, and this is how the work remains best-known today.
The old Welsh folk song Marwnad yr Ehedydd (The Lark's Elegy) refers to the death of "the Lark", possibly as a coded reference to the Welsh leader Owain Glyndŵr.
The French-Canadian folk song Alouette refers to plucking feathers from a lark.
Pet
Traditionally, larks are kept as pets in China. In Beijing, larks are taught to mimic the voice of other songbirds and animals. It is an old-fashioned habit of the Beijingers to teach their larks 13 kinds of sounds in a strict order (called "the 13 songs of a lark", Chinese: 百灵十三套). The larks that can sing the full 13 sounds in the correct order are highly valued, while any disruption in the songs will decrease their value significantly.
Early awakening
Larks sing early in the day, often before dawn, leading to the expression "up with the lark" for a person who is awake early in the day, and the term lark being applied to someone who habitually rises early in the morning.
| Biology and health sciences | Passerida | null |
150320 | https://en.wikipedia.org/wiki/Tyrian%20purple | Tyrian purple | Tyrian purple (Greek: porphúra), also known as royal purple, imperial purple, or imperial dye, is a reddish-purple natural dye. The name Tyrian refers to Tyre, Lebanon, once Phoenicia. It is secreted by several species of predatory sea snails in the family Muricidae, rock snails originally known by the name Murex (Bolinus brandaris, Hexaplex trunculus and Stramonita haemastoma). In ancient times, extracting this dye involved tens of thousands of snails and substantial labour, and as a result, the dye was highly valued. The colored compound is 6,6'-dibromoindigo.
History
Biological pigments were often difficult to acquire, and the details of their production were kept secret by the manufacturers. Tyrian purple is a pigment made from the mucus of several species of Murex snail. Production of Tyrian purple for use as a fabric dye began as early as 1200 BC by the Phoenicians, and was continued by the Greeks and Romans until 1453 AD, with the fall of Constantinople. In the same way as the modern-day Latin alphabet of Phoenician origin, Phoenician purple pigment was spread through the unique Phoenician trading empire. The pigment was expensive and time-consuming to produce, and items colored with it became associated with power and wealth. This popular idea of purple being elite contributes to the modern day widespread belief that purple is a "royal colour". The colour of textiles from this period provides insight into socio-cultural relationships within ancient societies, in addition to providing insights on technological achievements, fashion, social stratification, agriculture and trade connections. Despite their value to archaeological research, textiles are quite rare in the archaeological record. Like any perishable organic material, they are usually subject to rapid decomposition and their preservation over millennia requires exacting conditions to prevent destruction by microorganisms.
Tyrian purple may first have been used by the ancient Phoenicians as early as 1570 BC. It has been suggested that the name Phoenicia itself means 'land of purple'. The dye was greatly prized in antiquity because the colour did not easily fade, but instead became brighter with weathering and sunlight. It came in various shades, the most prized being that of black-tinted clotted blood.
Because it was extremely tedious to make, Tyrian purple was expensive: the 4th century BC historian Theopompus reported, "Purple for dyes fetched its weight in silver at Colophon" in Asia Minor. The expense meant that purple-dyed textiles became status symbols, whose use was restricted by sumptuary laws. The most senior Roman magistrates wore a toga praetexta, a white toga edged in Tyrian purple. The even more sumptuous toga picta, solid Tyrian purple with gold thread edging, was worn by generals celebrating a Roman triumph.
By the fourth century AD, sumptuary laws in Rome had been tightened so much that only the Roman emperor was permitted to wear Tyrian purple. As a result, 'purple' is sometimes used as a metonym for the office (e.g. the phrase 'donned the purple' means 'became emperor'). The production of Tyrian purple was tightly controlled in the succeeding Byzantine Empire and subsidized by the imperial court, which restricted its use for the colouring of imperial silks. Later (9th century), a child born to a reigning emperor was said to be porphyrogenitos, "born in the purple".
Some speculate that the dye extracted from Bolinus brandaris is the one known as argaman in Biblical Hebrew. Another dye extracted from a related sea snail, Hexaplex trunculus, produced a blue colour after light exposure, which could be the one known as tekhelet, used in garments worn for ritual purposes.
Production from sea snails
The dye substance is a mucous secretion from the hypobranchial gland of one of several species of medium-sized predatory sea snails that are found in the eastern Mediterranean Sea and off the Atlantic coast of Morocco. These are the marine gastropods Bolinus brandaris, the spiny dye-murex (originally known as Murex brandaris Linnaeus, 1758); the banded dye-murex Hexaplex trunculus; the rock-shell Stramonita haemastoma; and, less commonly, a number of other species such as Bolinus cornutus. The dye is an organic compound of bromine (i.e., an organobromine compound), a class of compounds often found in algae and in some other sea life, but much more rarely found in the biology of land animals. This dye is in contrast to the imitation purple that was commonly produced using cheaper materials than the dyes from the sea snail.
In nature, the snails use the secretion as part of their predatory behavior to sedate prey and as an antimicrobial lining on egg masses. The snail also secretes this substance when it is attacked by predators, or physically antagonized by humans (e.g., poked). Therefore, the dye can be collected either by "milking" the snails, which is more labor-intensive but is a renewable resource, or by collecting and destructively crushing the snails. David Jacoby remarks that "twelve thousand snails of Murex brandaris yield no more than 1.4 g of pure dye, enough to colour only the trim of a single garment." The dye is collected via the snail-harvesting process, involving the extraction of the hypobranchial gland (located under the mollusk's mantle), which requires advanced knowledge of the animal's biology. Murex-based dyeing must take place close to the site from which the snails originate, because the freshness of the material has a significant effect on the results; the colours yielded depend on a long sequence of biochemical, enzymatic and photochemical reactions, including reduction and oxidation steps that probably took several days.
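Jacoby's figure can be put in perspective with a little arithmetic; the sketch below works out the implied yield per snail. The per-garment quantity at the end is an assumed illustration, not a figure from this article.

```python
# Small arithmetic sketch of the quoted yield: 12,000 Murex brandaris snails
# for about 1.4 g of pure dye (enough only for the trim of one garment).

snails = 12_000
dye_g = 1.4

per_snail_mg = dye_g * 1000 / snails
print(f"{per_snail_mg:.3f} mg of dye per snail")        # ~0.117 mg
print(f"{snails / dye_g:,.0f} snails per gram of dye")  # ~8,571

# If dyeing a whole garment took, say, 10 g of dye (an assumed figure),
# that would correspond to roughly 86,000 snails.
print(f"{10 * snails / dye_g:,.0f} snails for 10 g")    # ~85,714
```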
Many other species worldwide within the family Muricidae, for example Plicopurpura pansa, from the tropical eastern Pacific, and Plicopurpura patula from the Caribbean zone of the western Atlantic, can also produce a similar substance (which turns into an enduring purple dye when exposed to sunlight) and this ability has sometimes also been historically exploited by local inhabitants in the areas where these snails occur. (Some other predatory gastropods, such as some wentletraps in the family Epitoniidae, seem to also produce a similar substance, although this has not been studied or exploited commercially.) The dog whelk Nucella lapillus, from the North Atlantic, can also be used to produce red-purple and violet dyes.
Royal blue
The Phoenicians also made a deep blue-coloured dye, sometimes referred to as royal blue or hyacinth purple, which was made from a closely related species of marine snail.
The Phoenicians established an ancillary production facility on the Iles Purpuraires at Mogador, in Morocco. The sea snail harvested at this western Moroccan dye production facility was Hexaplex trunculus, also known by the older name Murex trunculus.
This second species of dye murex is found today on the Mediterranean and Atlantic coasts of Europe and Africa (Spain, Portugal, Morocco).
Background
The colour-fast (non-fading) dye was an item of luxury trade, prized by Romans, who used it to colour ceremonial robes. Used as a dye, the colour shifts from blue (peak absorption at 590 nm, which is yellow-orange) to reddish-purple (peak absorption at 520 nm, which is green). It is believed that the intensity of the purple hue improved rather than faded as the dyed cloth aged. Vitruvius mentions the production of Tyrian purple from shellfish. In his History of Animals, Aristotle described the shellfish from which Tyrian purple was obtained and the process of extracting the tissue that produced the dye. Pliny the Elder described the production of Tyrian purple in his Natural History:
The most favourable season for taking these [shellfish] is after the rising of the Dog-star, or else before spring; for when they have once discharged their waxy secretion, their juices have no consistency: this, however, is a fact unknown in the dyers' workshops, although it is a point of primary importance. After it is taken, the vein [i.e. hypobranchial gland] is extracted, which we have previously spoken of, to which it is requisite to add salt, a sextarius [about 20 fl. oz.] to every hundred pounds of juice. It is sufficient to leave them to steep for a period of three days, and no more, for the fresher they are, the greater virtue there is in the liquor. It is then set to boil in vessels of tin [or lead], and every hundred amphorae ought to be boiled down to five hundred pounds of dye, by the application of a moderate heat; for which purpose the vessel is placed at the end of a long funnel, which communicates with the furnace; while thus boiling, the liquor is skimmed from time to time, and with it the flesh, which necessarily adheres to the veins. About the tenth day, generally, the whole contents of the cauldron are in a liquefied state, upon which a fleece, from which the grease has been cleansed, is plunged into it by way of making trial; but until such time as the colour is found to satisfy the wishes of those preparing it, the liquor is still kept on the boil. The tint that inclines to red is looked upon as inferior to that which is of a blackish hue. The wool is left to lie in soak for five hours, and then, after carding it, it is thrown in again, until it has fully imbibed the colour.
Archaeological data from Tyre indicate that the snails were collected in large vats and left to decompose. This produced a hideous stench that was actually mentioned by ancient authors. Not much is known about the subsequent steps, and the actual ancient method for mass-producing the two murex dyes has not yet been successfully reconstructed; this special "blackish clotted blood" colour, which was prized above all others, is believed to be achieved by double-dipping the cloth, once in the indigo dye of H. trunculus and once in the purple-red dye of B. brandaris.
The Roman mythographer Julius Pollux, writing in the 2nd century AD, recounts that the purple dye was first discovered by Heracles (Greek counterpart of the titular god of Tyre, Melqart) while being in Tyre to visit his beloved Tyros, or rather, by his dog, whose mouth was stained purple after biting into a snail on the beach. This story was depicted by Peter Paul Rubens in his painting Hercules' Dog Discovers Purple Dye. According to John Malalas, the incident happened during the reign of the legendary King Phoenix of Tyre, the eponymous progenitor of the Phoenicians, and therefore he was the first ruler to wear Tyrian purple and legislate on its use.
Recently, the archaeological discovery of substantial numbers of Murex shells on Crete suggests that the Minoans may have pioneered the extraction of Imperial purple centuries before the Tyrians. Dating from collocated pottery suggests the dye may have been produced during the Middle Minoan period in the 20th–18th century BC. Accumulations of crushed murex shells from a hut at the site of Coppa Nevigata in southern Italy may indicate production of purple dye there from at least the 18th century BC. Additional archaeological evidence comes from samples originating from excavations at the extensive Iron Age copper smelting site of “Slaves’ Hill” (Site 34), which is tightly dated by radiocarbon to the late 11th–early 10th centuries BC. Findings from this site include evidence of the use of purple dye in stains on pot sherds. Evidence of the use of the dye in pottery is found in most cases on the upper part of ceramic basins, on the inside surface, the areas in which the reduced dye solution was exposed to air and underwent oxidation that turned it purple.
The production of Murex purple for the Byzantine court came to an abrupt end with the sack of Constantinople in 1204, the critical episode of the Fourth Crusade. David Jacoby concludes that "no Byzantine emperor nor any Latin ruler in former Byzantine territories could muster the financial resources required for the pursuit of murex purple production. On the other hand, murex fishing and dyeing with genuine purple are attested for Egypt in the tenth to 13th centuries." By contrast, Jacoby finds that there are no mentions of purple fishing or dyeing, nor trade in the colorant in any Western source, even in the Frankish Levant. The European West turned instead to vermilion provided by the insect Kermes vermilio, known as grana, or crimson.
In 1909, Harvard anthropologist Zelia Nuttall compiled an intensive comparative study on the historical production of the purple dye produced from the carnivorous murex snail, source of the royal purple dye valued higher than gold in the ancient Near East and ancient Mexico. Not only did the people of ancient Mexico use the same methods of production as the Phoenicians, they also valued murex-dyed cloth above all others, as it appeared in codices as the attire of nobility. Nuttall noted that the Mexican murex-dyed cloth bore a "disagreeable ... strong fishy smell, which appears to be as lasting as the color itself." Likewise, the ancient Egyptian Papyrus of Anastasi laments: "The hands of the dyer reek like rotting fish". So pervasive was this stench that the Talmud specifically granted women the right to divorce any husband who became a dyer after marriage.
In 2021, archaeologists found surviving wool fibers dyed with royal purple in the Timna Valley in Israel. The find, which was dated to , constituted the first direct evidence of fabric dyed with the pigment from antiquity.
Murex purple production in North Africa
Murex purple was a very important industry in many Phoenician territories and Carthage was no exception. Traces of this once very lucrative industry are still visible in many Punic sites such as Kerkouane, Zouchis, Djerba and even in Carthage itself. According to Pliny, Meninx (today's Djerba) produced the best purple in Africa which was also ranked second only after Tyre's. It was found also at Essaouira (Morocco). The Royal purple or Imperial purple was probably used until the time of Augustine of Hippo (354–430) and before the demise of the Roman Empire.
Dye chemistry
Variations in colours of "Tyrian purple" from different snails are related to the presence of indigo dye (blue), 6-bromoindigo (purple), and the red 6,6'-dibromoindigo. Additional changes in colour can be induced by debromination from light exposure (as is the case for Tekhelet) or by heat processing. The final shade of purple depends on the ratio of the dye components, which can be identified by high-performance liquid chromatography (HPLC) analysis in a single measurement: indigotin (IND) and indirubin (INR). The two are found in plant sources such as woad (Isatis tinctoria L.) and the indigo plant (Indigofera tinctoria L.), as well as in several species of shellfish.
In 1998, by means of a lengthy trial-and-error process, a method for dyeing with Tyrian purple was rediscovered. This finding built on reports from the 15th to the 18th century and explored the biotechnology behind woad fermentation, on the hypothesis that an alkaline fermenting vat was necessary. An incomplete ancient recipe for Tyrian purple recorded by Pliny the Elder was also consulted. By altering the percentage of sea salt in the dye vat and adding potash, the researcher was able to successfully dye wool a deep purple colour.
Recent research in organic electronics has shown that Tyrian purple is an ambipolar organic semiconductor. Transistors and circuits based on this material can be produced from sublimed thin-films of the dye. The good semiconducting properties of the dye originate from strong intermolecular hydrogen bonding that reinforces pi stacking necessary for transport.
Modern hue rendering
True Tyrian purple, like most high-chroma pigments, cannot be accurately rendered on a standard RGB computer monitor. Ancient reports are also not entirely consistent, but these swatches give a rough indication of the likely range in which it appeared:
[Two colour swatches, an upper and a lower, were shown here as approximations of Tyrian purple.]
The lower one is the sRGB colour #990024, intended for viewing on an output device with a gamma of 2.2. It is a representation of RHS colour code 66A, which has been equated to "Tyrian red", a term which is often used as a synonym for Tyrian purple.
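For readers wanting to reproduce the swatch, the sketch below decodes the quoted sRGB value #990024 and applies the stated display gamma of 2.2 to obtain approximate linear-light channel intensities. This only illustrates the encoding; it does not capture the true spectral colour of the dye.

```python
# Small sketch: decode the quoted sRGB swatch #990024 and apply the stated
# display gamma of 2.2 to estimate linear-light channel intensities.

def hex_to_linear_rgb(hex_code: str, gamma: float = 2.2):
    hex_code = hex_code.lstrip("#")
    channels = [int(hex_code[i:i + 2], 16) / 255.0 for i in (0, 2, 4)]
    return [round(c ** gamma, 4) for c in channels]

print(hex_to_linear_rgb("#990024"))
# ~[0.325, 0.0, 0.0135]  -> a dark, strongly red-dominated purple
```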
Philately
The colour name "Tyrian plum" is popularly given to a British postage stamp that was prepared, but never released to the public, shortly before the death of King Edward VII in 1910.
| Physical sciences | Colors | Physics |
150389 | https://en.wikipedia.org/wiki/Nipple | Nipple | The nipple is a raised region of tissue on the surface of the breast from which, in lactating females, milk from the mammary gland leaves the body through the lactiferous ducts to nurse an infant. The milk can flow through the nipple passively, or it can be ejected by smooth muscle contractions that occur along the ductal system. The nipple is surrounded by the areola, which is often a darker colour than the surrounding skin.
Male mammals also have nipples but without the same level of function or prominence. A nipple is often called a teat when referring to non-humans. "Nipple" or "teat" can also be used to describe the flexible mouthpiece of a baby bottle.
In humans, the nipples of both males and females can be sexually stimulated as part of sexual arousal. In many cultures, female nipples are sexualized, or regarded as sex objects and evaluated in terms of their physical characteristics and sex appeal. Some cultures have little to no sexualization of the nipple, and going topless presents no barrier.
Etymology
The word "nipple" most likely originates as a diminutive of neb, an Old English word meaning "beak", "nose", or "face", and which is of Germanic origin. The words "teat" and "tit" share a Germanic ancestor. The second of the two, tit, was inherited directly from Proto-Germanic, while the first entered English via Old French.
Structure
In mammals, a nipple (also called mammary papilla or teat) is a small projection of skin containing the outlets for 15–20 lactiferous ducts arranged cylindrically around the tip. Marsupials and eutherian mammals typically have an even number of nipples arranged bilaterally, from as few as 2 to as many as 19.
The skin of the nipple is rich in a supply of special nerves that are sensitive to certain stimuli: these are slowly-adapting and rapidly-adapting cutaneous mechanoreceptors. Mechanoreceptors are identified respectively by Type I slowly-adapting with multiple Merkel corpuscle end-organs and Type II slowly-adapting with single Ruffini corpuscle end-organs, as well as Type I rapidly-adapting with multiple Meissner corpuscle end-organs and Type II rapidly-adapting with single Pacinian corpuscle end-organs. The dominant nerve supply to the nipple comes from the lateral cutaneous branches of fourth intercostal nerve. The nipple is also used as an anatomical landmark. It marks the T4 (fourth thoracic vertebra) dermatome and rests over the approximate level of the diaphragm.
The arterial supply to the nipple and breast originates from the anterior intercostal branches of the internal thoracic (mammary) artery, the lateral thoracic artery, and the thoracodorsal artery. The venous vessels parallel the arteries. The lymphatic ducts that drain the nipple are the same as those that drain the breast. The axillary nodes are the apical axillary nodes, the lateral group and the anterior group. 75% of the lymph is drained through the axillary lymph nodes located near the armpit. The rest of the drainage leaves the nipple and breast through infraclavicular, pectoral, or parasternal nodes.
Nipples change throughout the life span in both men and women, and changes in their anatomy may be expected and considered normal.
In male mammals
Almost all mammals have nipples. Why males have nipples has been the subject of scientific research. Differences among the sexes (called sexual dimorphism) within a given species are considered by evolutionary biologists to be mostly the result of sexual selection, directly or indirectly. There is a consensus that the male nipple exists because there is no particular advantage to males losing the trait. In consequence, some biologists would call the male nipple a spandrel.
In humans, the nipples are often surrounded by body hair.
Function
The physiological purpose of nipples is to deliver milk, produced in the female mammary glands during lactation, to an infant. During breastfeeding, nipple stimulation by an infant will stimulate the release of oxytocin from the hypothalamus. Oxytocin is a hormone that increases during pregnancy and acts on the breast to help produce the milk-ejection reflex. Oxytocin released through nipple stimulation by the infant also causes the uterus to contract after childbirth. The strong uterine contractions that are caused by the stimulation of the mother's nipples help the uterus contract to clamp down the uterine arteries. These contractions are necessary to prevent post-partum haemorrhage.
When the infant suckles or stimulates the nipple, oxytocin levels rise and small muscles in the breast contract, moving the milk through the milk ducts. This movement of breast milk out through the ducts and to the nipple is called the "let-down reflex". Latching on refers to the infant fastening onto the nipple to breastfeed. A good attachment is when the bottom of the areola (the area around the nipple) is in the infant's mouth and the nipple is drawn back inside his or her mouth. A poor latch results in insufficient nipple stimulation to create the let-down reflex. The nipple is poorly stimulated when the baby latches on too close to the tip of the nipple. This poor attachment can cause sore and cracked nipples and a reluctance of the mother to continue to breastfeed. After birth, the milk supply increases with continuing stimulation of the nipple by the infant. If the baby increases nursing time at the nipple, the mammary glands respond to this stimulation by increasing milk production.
Clinical significance
Pain
Nipple pain can be a disincentive for breastfeeding. Sore nipples that progress to cracked nipples are of concern, since many women cease breastfeeding due to the pain. In some instances, an ulcer will form on the nipple. One reason for the development of cracked and sore nipples is the incorrect latching-on of the infant to the nipple. If a nipple appears wedge-shaped, white and flattened, this may indicate that the attachment of the infant is not good and there is a potential for developing cracked nipples. Herpes infection of the nipple is painful. Nipple pain can also be caused by excessive friction of clothing against the nipple, which causes a fissure.
Discharge
Nipple discharge refers to any fluid that seeps out of the nipple of the breast. Discharge from the nipple during pregnancy and breastfeeding is normal, and discharge in non-pregnant women or women who are not breastfeeding may not be a cause for concern. Discharge from the nipples of men or boys, however, is not typical and may indicate a problem. Discharge from the nipples can appear without squeezing or may only be noticeable if the nipples are squeezed. One nipple can have discharge while the other does not. The discharge can be clear, green, bloody, brown or straw-coloured. The consistency can be thick, thin, sticky or watery.
Some cases of nipple discharge will clear on their own without treatment. Nipple discharge is most often not cancer (benign), but rarely, it can be a sign of breast cancer. It is important to determine what is causing the discharge and to get treatment. Reasons for nipple discharge include:
Pregnancy
Recent breastfeeding
Rubbing on the area from a bra or T-shirt
Injury to the breast
Infection
Inflammation and clogging of the breast ducts
Noncancerous pituitary tumors
Small growth in the breast (usually not cancer)
Severe underactive thyroid gland (hypothyroidism)
Fibrocystic breast (normal lumpiness in the breast)
Use of certain medicines
Use of certain herbs, such as anise and fennel
Widening of the milk ducts
Sometimes, babies can have nipple discharge. This is caused by hormones from the mother before birth. It usually goes away in two weeks. Cancers such as Paget's disease (a rare type of cancer involving the skin of the nipple) can also cause nipple discharge.
Nipple discharge that is not normal is bloody, comes from only one nipple, or comes out on its own without squeezing or touching the nipple. Nipple discharge is more likely to be normal if it comes out of both nipples or happens when the nipples are squeezed. Squeezing the nipple to check for discharge can make it worse. Leaving the nipple alone may make the discharge stop.
Nipple discharge in a male is usually of more concern. Most of the time a mammogram and an examination of the fluid is done. A biopsy is often performed. A fine needle aspiration (FNA) biopsy can be fast and least painful. A very thin, hollow needle and slight suction will be used to remove a small sample from under the nipple. Using a local anesthetic to numb the skin may not be necessary since a thin needle is used for the biopsy. Receiving an injection to prevent pain from the biopsy may be more painful than the biopsy itself.
Some men develop a condition known as gynecomastia, in which the breast tissue under the nipple develops and grows. Discharge from the nipple can occur. The nipple may swell in some men possibly due to increased levels of estrogen.
Appearance
Changes in appearance may be normal or related to disease.
Inverted nipples – This is normal if the nipples have always been indented inward and can easily point out when touched. If the nipples are pointing in and this is new, this is an unexpected change.
Skin puckering of the nipple – This can be caused by scar tissue from surgery or an infection. Often, scar tissue forms for no reason. Most of the time this issue does not need treatment. This is an unexpected change. This change can be of concern since puckering or retraction of the nipple can indicate an underlying change in breast tissue that may be cancerous.
The nipple is warm to the touch, red or painful – This can be an infection. It is rarely due to breast cancer.
Scaly, flaking, or itchy nipple – This is most often due to eczema or a bacterial or fungal infection. This change is not expected. Flaking, scaly, or itchy nipples can be a sign of Paget's disease.
Thickened skin with large pores – This is called peau d'orange because the skin looks like an orange peel. An infection in the breast or inflammatory breast cancer can cause this problem. This is not an expected change.
Retracted nipples – The nipple was raised above the surface but changes, begins to pull inward, and does not come out when stimulated.
The average projection and size of human female nipples is slightly more than .
Breast cancer
Symptoms of breast cancer can often be seen first by changes of the nipple and areola, although not all women have the same symptoms, and some people do not have any signs or symptoms at all. A person may find out they have breast cancer after a routine mammogram.
Warning signs can include:
New lump in the nipple, or breast or armpit
Thickening or swelling of part of the breast, areola, or nipple
Irritation or dimpling of breast skin
Redness or flaky skin in the nipple area or the breast
Pulling in of the nipple or pain in the nipple area
Nipple discharge other than breast milk, including blood
Any change in the size or the shape of the breast or nipple
Pain in any area of the breast
Changes in the nipple are not necessarily symptoms or signs of breast cancer. Other conditions of the nipple can mimic the signs and symptoms of breast cancer.
Vertical transmission
Some infections are transmitted through the nipple, especially if irritation or injury to the nipple has occurred. In these circumstances, the nipple itself can become infected with Candida that is present in the mouth of the breastfeeding infant. The infant will transmit the infection to the mother. Most of the time, this infection is localized to the area of the nipple. In some cases, the infection can progress to become a full-blown case of mastitis or breast infection. In some cases, if the mother has an infection with no nipple cracks or ulcerations, it is still safe to breastfeed the infant.
Herpes infection of the nipple can go unnoticed because the lesions are small but usually are quite painful. Herpes in the newborn is a serious and sometimes fatal infection. Transmission of Hepatitis C and B to the infant can occur if the nipples are cracked.
Other infections can be transmitted through a break of the skin of the nipple and can infect the infant.
Other disorders
Nipple bleb
Candida infection of the nipple
Eczema of the nipple
Inverted nipple
Staphylococcus infection of the nipple
Edematous areola
Herpes infection of the nipple
Reynaud phenomenon of the nipple
Flat nipple
Surgery
A nipple-sparing/subcutaneous mastectomy is a surgical procedure in which breast tissue is removed but the nipple and areola are preserved. The procedure was historically performed only prophylactically, or with mastectomy for benign disease, owing to fear of increased cancer development in retained areolar ductal tissue. Recent series suggest that it may be an oncologically sound procedure for tumours not in the subareolar position.
Society and culture
Exposure
The cultural tendency to hide the female nipple under clothing has existed in Western culture since the 1800s. As female nipples are often perceived as an intimate part of the body, the convention of covering them may have originated in Victorian morality, as with riding side saddle. Exposing the entire breast and nipple is a form of protest for some and a crime for others. The exposure of nipples is usually considered immodest and in some instances is viewed as lewd or indecent behavior.
A case in Erie, Pennsylvania, concerning the exposure of breasts and nipples proceeded to the United States Supreme Court. The Erie ordinance regulated exposure of the nipple in public by providing that a person who "knowingly or intentionally ... appears in a state of nudity" commits public indecency. Later in the statute, nudity is further described as including an uncovered female nipple, but nipple exposure by a man was not regulated. An opinion column credited to Cecil Adams noted: "Ponder the significance of that. A man walks around bare-chested and the worst that happens is he won't get served in restaurants. But a woman who goes topless is legally in the same boat as if she'd had sex in public. That may seem crazy, but in the US it's a permissible law."
The legality around the exposure of nipples is inconsistently regulated throughout the US. Some states do not allow the visualization of any part of the breast. Other jurisdictions prohibit exposure of any female chest anatomy that lies below the top of the areola or nipple. Such is the case in West Virginia and Massachusetts. West Virginia's regulation is very specific and is not likely to be misinterpreted, stating that the "display of 'any portion of the cleavage of the human female breast exhibited by a dress, blouse, skirt, leotard, bathing suit, or other wearing apparel' [is permitted] provided the areola is not exposed, in whole or in part."
The Instagram social media site has a "no nipples" policy with exceptions: material that is not allowed includes "some photos of female nipples, but photos of post-mastectomy scarring and women actively breastfeeding are allowed. Nudity in photos of paintings and sculptures is OK, too". Previously, Instagram had removed images of nursing mothers. Instagram removed images of Rihanna and had her account cancelled in 2014 when she posted selfies with nipples. This was incentive for the Twitter campaign #FreeTheNipple. In 2016, an Instagram page invited users to post images of nipples from both sexes; @genderless_nipples, which displays close ups of both the nipples of men and women for the purpose of spotlighting what may be inconsistency. Some contributors have circumvented the policy. Facebook has also been struggling to define its nipple policy.
Filmmaker Lina Esco made a film entitled Free the Nipple, which is about "laws against female toplessness or restrictions on images of female, but not male, nipples", which Esco states is an example of sexism in society.
Sexuality
Nipples can be sensitive to touch, and nipple stimulation can incite sexual arousal. Few women report experiencing orgasm from nipple stimulation. Before Komisaruk et al.'s functional magnetic resonance imaging (fMRI) research on nipple stimulation in 2011, reports of women achieving orgasm from nipple stimulation relied solely on anecdotal evidence. Komisaruk's study was the first to map the female genitals onto the sensory portion of the brain. It indicates that sensation from the nipples travels to the same part of the brain as sensations from the vagina, clitoris and cervix, and that the reported orgasms are genital orgasms caused by nipple stimulation, possibly linked directly to the genital sensory cortex ("the genital area of the brain").
Piercings
In business
Some companies and non-profit organisations have used the word nipple or images of nipples to draw attention to their product or cause.
| Biology and health sciences | Integumentary system | Biology |
150421 | https://en.wikipedia.org/wiki/Phalarope | Phalarope |
A phalarope is any of three living species of slender-necked shorebirds in the genus Phalaropus of the bird family Scolopacidae.
Phalaropes are close relatives of the shanks and tattlers, the Actitis and Terek sandpipers, and also of the turnstones and calidrids. They are especially notable for their unusual nesting behavior and their unique feeding technique.
Two species, the red or grey phalarope (P. fulicarius) and the red-necked phalarope (P. lobatus) breed around the Arctic Circle and winter on tropical oceans. Wilson's phalarope (P. tricolor) breeds in western North America and migrates to South America. All are in length, with lobed toes and a straight, slender bill. Predominantly grey and white in winter, their plumage develops reddish markings in summer.
Taxonomy
The genus Phalaropus was introduced by French zoologist Mathurin Jacques Brisson in 1760 with the red phalarope (Phalaropus fulicarius) as the type species. The English and genus names come through French phalarope and scientific Latin Phalaropus from Ancient Greek phalaris, "coot", and pous, "foot". Coots and phalaropes both have lobed toes.
The genus contains three species:
A fossil species, P. elenorae, is known from the Middle Pliocene 4–3 million years ago (Mya). A coracoid fragment from the Late Oligocene (23 Mya) near Créchy, France, was also ascribed to a primitive phalarope; it might belong to an early species of the present genus or a prehistoric relative. The divergence of phalaropes from their closest relatives can be dated to around that time, as evidenced by the fossil record (chiefly of the shanks) and supported by tentative DNA sequence data. Of note, the last remains of the Turgai Sea disappeared around then, and given the distribution of their fossil species, this process probably played a major role in separating the lineages of the shank-phalarope clade.
Ecology and behavior
Red and red-necked phalaropes are unusual amongst shorebirds in that they are considered pelagic, that is, they spend a great deal of their lives outside the breeding season well out to sea. Phalaropes are unusually halophilic (salt-loving) and feed in great numbers in saline lakes such as Mono Lake in California and the Great Salt Lake of Utah.
Feeding
When feeding, a phalarope often swims in a small, rapid circle, forming a small whirlpool. This behavior is thought to aid feeding by raising food from the bottom of shallow water. The bird then reaches into the center of the vortex with its bill, plucking small insects or crustaceans caught up therein. Phalaropes use the surface tension of water to capture food particles and get them to move up along their bills and into their mouths in what has been termed a capillary ratchet.
Sexual dimorphism and reproduction
In the three phalarope species, sexual dimorphism and contributions to parenting are reversed from what is normally seen in birds. Females are larger and more brightly colored than males. The females pursue and fight over males, then defend them from other females until the male begins incubation of the clutch. Males perform all incubation and chick care, while the female attempts to find another male to mate with. If a male loses his eggs to predation, he often rejoins his original mate or a new female, which then lays another clutch. When the season is too late to start new nests, females begin their southward migration, leaving the males to incubate the eggs and care for the young. Phalaropes are uncommon among birds and vertebrates in general in that they engage in polyandry, with one female taking multiple male mates, while males mate with only one female. Specifically, phalaropes engage in serial polyandry, wherein females pair with multiple males at different times in the breeding season.
| Biology and health sciences | Charadriiformes | Animals |
150423 | https://en.wikipedia.org/wiki/Bifocals | Bifocals | Bifocals are eyeglasses with two distinct optical powers correcting vision at both long and short distances. Bifocals are commonly prescribed to people with presbyopia who also require a correction for myopia, hyperopia, and/or astigmatism.
History
Benjamin Franklin is generally credited with the invention of bifocals. He decided to saw his lenses in half so he could read the lips of speakers of French at court, the only way he could understand them. Historians have produced some evidence to suggest that others may have come before him in the invention; however, a correspondence between George Whatley and John Fenno, editor of the Gazette of the United States, suggested that Franklin had indeed invented bifocals, and perhaps 50 years earlier than had been originally thought. On the contrary, the College of Optometrists concluded:
Unless further evidence emerges all we can say for certain is that Franklin was one of the first people to wear split bifocals and this act of wearing them caused his name to be associated with the type from an early date. This no doubt contributed greatly to their popularisation. The evidence implies, however, that when he sought to order lenses of this type the London opticians were already familiar with them. Other members of Franklin's circle of British friends may have worn them even earlier, from the 1760s, but it is at best uncertain (and arguably improbable?) that split bifocal lenses had a famous gentleman inventor. Since many inventions are developed independently by more than one person, it is possible that the invention of bifocals may have been such a case.
John Isaac Hawkins, the inventor of trifocal lenses, coined the term bifocals in 1824 and credited Benjamin Franklin.
In 1955, Irving Rips of Younger Optics created the first seamless or "invisible" bifocal, a precursor to progressive lenses. This followed Howard D. Beach's 1946 work in "blended lenses", O'Conner's "Ultex" lens in 1910, and Isaac Schnaitmann's single-piece bifocal lens in 1837.
Construction
Original bifocals were designed with the most convex lenses (for close viewing) in the lower half of the frame and the least convex lenses on the upper. Up until the beginning of the 20th century, two separate lenses were cut in half and combined in the rim of the frame. The mounting of two half-lenses into a single frame led to a number of early complications and rendered such spectacles quite fragile. A method for fusing the sections of the lenses together was developed by Louis de Wecker at the end of the 19th century and patented by John Louis Borsch Jr. (1873–1929) in 1908. In 1915, Henri (Henry) A. Courmettes (1884-1969), a French immigrant to the US, patented the “Flat Top” (or “D Segment”) reading portion of the bifocal. The advantages were a wide reading area, fewer prismatic effects, and no image jump between distance and close viewing. This was first introduced in mass production by the Univis Lens Co. of Dayton, Ohio, in 1926. In 1935, Courmettes went on to patent the Tilted Bifocal Lens, in 1936, a method of grinding two prescriptions simultaneously on that Tilted Bifocal Lens, and in 1951, the Cataract Bifocal Lens.
Today most bifocals are created by moulding a reading segment into a primary lens and are available with the reading segments in a variety of shapes and sizes.
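As a rough illustration of how the extra power of such a reading segment (the "add") relates to working distance, the following sketch uses the thin-lens relation P = 1/f with hypothetical numbers; it is a simplified example added here, not a prescription method described in the article.

```python
# Illustrative sketch (assumed values, not from the article): estimating the
# "add" power of a bifocal reading segment from a desired working distance,
# using the thin-lens relation P = 1/f (power in dioptres, distance in metres).
def reading_addition(working_distance_m: float, residual_accommodation_d: float = 0.0) -> float:
    """Dioptres of extra plus power needed to focus at the given distance."""
    demand = 1.0 / working_distance_m          # accommodative demand in dioptres
    return max(demand - residual_accommodation_d, 0.0)

if __name__ == "__main__":
    # A common reading distance of 40 cm demands 2.50 D; if the wearer can still
    # accommodate 0.75 D, the segment only needs to supply about 1.75 D.
    print(reading_addition(0.40, 0.75))        # -> 1.75
```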
Problems
Bifocals can contribute to falls and can cause headaches and even dizziness for some wearers. Adaptation to the small field of view offered by the reading segment can take some time, as the user learns to move either the head or the reading material rather than the eyes. Computer monitors are generally placed directly in front of users, and viewing them through bifocals can lead to neck-muscle fatigue because of the unusual, constant head posture required. This trouble is mitigated by the use of monofocal lenses for computer work.
Future
Research continues in an attempt to eliminate the limited field of vision in current bifocals. New materials and technologies may provide a method which can selectively adjust the optical power of a lens. Researchers have constructed such a lens using a liquid crystal layer applied between two glass substrates.
Bifocals in the animal world
The aquatic larval stage of the diving beetle Thermonectus marmoratus has, in its principal eyes, two retinas and two distinct focal planes that are substantially separated (in the manner of bifocals) to switch their vision from up-close to distance, for easy and efficient capture of their prey, mostly mosquito larvae.
| Technology | Optical instruments | null |
151040 | https://en.wikipedia.org/wiki/CPT%20symmetry | CPT symmetry | Charge, parity, and time reversal symmetry is a fundamental symmetry of physical laws under the simultaneous transformations of charge conjugation (C), parity transformation (P), and time reversal (T). CPT is the only combination of C, P, and T that is observed to be an exact symmetry of nature at the fundamental level. The CPT theorem says that CPT symmetry holds for all physical phenomena, or more precisely, that any Lorentz invariant local quantum field theory with a Hermitian Hamiltonian must have CPT symmetry. In layman's terms, this stipulates that an antimatter, mirrored, and time-reversed universe would behave exactly the same as our regular universe.
History
The CPT theorem appeared for the first time, implicitly, in the work of Julian Schwinger in 1951 to prove the connection between spin and statistics. In 1954, Gerhart Lüders and Wolfgang Pauli derived more explicit proofs, so this theorem is sometimes known as the Lüders–Pauli theorem. At about the same time, and independently, this theorem was also proved by John Stewart Bell. These proofs are based on the principle of Lorentz invariance and the principle of locality in the interaction of quantum fields. Subsequently, Res Jost gave a more general proof in 1958 using the framework of axiomatic quantum field theory.
Efforts during the late 1950s revealed the violation of P-symmetry by phenomena that involve the weak force, and there were well-known violations of C-symmetry as well. For a short time, CP-symmetry was believed to be preserved by all physical phenomena, but in the 1960s it, too, was found to be violated, which implied, by CPT invariance, violations of T-symmetry as well.
Derivation of the CPT theorem
Consider a Lorentz boost in a fixed direction z. This can be interpreted as a rotation of the time axis into the z axis, with an imaginary rotation parameter. If this rotation parameter were real, it would be possible for a 180° rotation to reverse the direction of time and of z. Reversing the direction of one axis is a reflection of space in any number of dimensions. If space has 3 dimensions, it is equivalent to reflecting all the coordinates, because an additional rotation of 180° in the x-y plane could be included.
This defines a CPT transformation if we adopt the Feynman–Stueckelberg interpretation of antiparticles as the corresponding particles traveling backwards in time. This interpretation requires a slight analytic continuation, which is well-defined only under the following assumptions:
The theory is Lorentz invariant;
The vacuum is Lorentz invariant;
The energy is bounded below.
When the above hold, quantum theory can be extended to a Euclidean theory, defined by translating all the operators to imaginary time using the Hamiltonian. The commutation relations of the Hamiltonian, and the Lorentz generators, guarantee that Lorentz invariance implies rotational invariance, so that any state can be rotated by 180 degrees.
Since a sequence of two CPT reflections is equivalent to a 360-degree rotation, fermions change by a sign under two CPT reflections, while bosons do not. This fact can be used to prove the spin-statistics theorem.
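As a minimal illustration (not part of the article's argument, and using one common textbook convention), the combined CPT operation on a free complex scalar field can be written out explicitly; the Lagrangian density is mapped to itself evaluated at the reflected spacetime point, so the action is unchanged.

```latex
% Sketch: action of C, P, T and their product on a complex scalar field \phi.
% Conventions vary between textbooks; this is one standard choice.
\begin{align*}
  C:\; & \phi(t,\mathbf{x}) \;\to\; \phi^{*}(t,\mathbf{x}) \\
  P:\; & \phi(t,\mathbf{x}) \;\to\; \phi(t,-\mathbf{x}) \\
  T:\; & \phi(t,\mathbf{x}) \;\to\; \phi(-t,\mathbf{x}) \quad\text{(antiunitary)} \\
  CPT:\; & \phi(x) \;\to\; \phi^{*}(-x)
\end{align*}
% Under CPT the Lagrangian density
\[
  \mathcal{L} = \partial_\mu\phi^{*}\,\partial^\mu\phi - m^2\,\phi^{*}\phi
\]
% transforms as \mathcal{L}(x) \to \mathcal{L}(-x), so the action
% S = \int d^4x\, \mathcal{L} is invariant.
```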
Consequences and implications
The implication of CPT symmetry is that a "mirror-image" of our universe — with all objects having their positions reflected through an arbitrary point (corresponding to a parity inversion), all momenta reversed (corresponding to a time inversion) and with all matter replaced by antimatter (corresponding to a charge inversion) — would evolve under exactly our physical laws. The CPT transformation turns our universe into its "mirror image" and vice versa. CPT symmetry is recognized to be a fundamental property of physical laws.
In order to preserve this symmetry, every violation of the combined symmetry of two of its components (such as CP) must have a corresponding violation in the third component (such as T); in fact, mathematically, these are the same thing. Thus violations in T-symmetry are often referred to as CP violations.
The CPT theorem can be generalized to take into account pin groups.
In 2002 Oscar Greenberg proved that, with reasonable assumptions, CPT violation implies the breaking of Lorentz symmetry.
CPT violations would be expected by some string theory models, as well as by some other models that lie outside point-particle quantum field theory. Some proposed violations of Lorentz invariance, such as a compact dimension of cosmological size, could also lead to CPT violation. Non-unitary theories, such as proposals where black holes violate unitarity, could also violate CPT. As a technical point, fields with infinite spin could violate CPT symmetry.
The overwhelming majority of experimental searches for Lorentz violation have yielded negative results. A detailed tabulation of these results was given in 2011 by Kostelecky and Russell.
| Physical sciences | Particle physics: General | Physics |
151066 | https://en.wikipedia.org/wiki/Classical%20physics | Classical physics | Classical physics is a group of physics theories that predate modern, more complete, or more widely applicable theories. If a currently accepted theory is considered to be modern, and its introduction represented a major paradigm shift, then the previous theories, or new theories based on the older paradigm, will often be referred to as belonging to the area of "classical physics".
As such, the definition of a classical theory depends on context. Classical physical concepts are often used when modern theories are unnecessarily complex for a particular situation. Most often, classical physics refers to pre-1900 physics, while modern physics refers to post-1900 physics, which incorporates elements of quantum mechanics and relativity.
Overview
Classical theory has at least two distinct meanings in physics. In the context of quantum mechanics, classical theory refers to theories of physics that do not use the quantisation paradigm, which includes classical mechanics and relativity. Likewise, classical field theories, such as general relativity and classical electromagnetism, are those that do not use quantum mechanics. In the context of general and special relativity, classical theories are those that obey Galilean relativity.
Depending on point of view, among the branches of theory sometimes included in classical physics are variably:
Classical mechanics
Newton's laws of motion
Classical Lagrangian and Hamiltonian formalisms
Classical electrodynamics (Maxwell's equations)
Classical thermodynamics
Classical chaos theory and nonlinear dynamics
Comparison with modern physics
In contrast to classical physics, "modern physics" is a slightly looser term that may refer to just quantum physics or to 20th- and 21st-century physics in general. Modern physics includes quantum theory and relativity, when applicable.
A physical system can be described by classical physics when it satisfies conditions such that the laws of classical physics are approximately valid.
In practice, physical objects ranging from those larger than atoms and molecules, to objects in the macroscopic and astronomical realm, can be well-described (understood) with classical mechanics. Beginning at the atomic level and lower, the laws of classical physics break down and generally do not provide a correct description of nature. Electromagnetic fields and forces can be described well by classical electrodynamics at length scales and field strengths large enough that quantum mechanical effects are negligible. Unlike quantum physics, classical physics is generally characterized by the principle of complete determinism, although deterministic interpretations of quantum mechanics do exist.
From the point of view of classical physics as being non-relativistic physics, the predictions of general and special relativity are significantly different from those of classical theories, particularly concerning the passage of time, the geometry of space, the motion of bodies in free fall, and the propagation of light. Traditionally, light was reconciled with classical mechanics by assuming the existence of a stationary medium through which light propagated, the luminiferous aether, which was later shown not to exist.
Mathematically, classical physics equations are those in which the Planck constant does not appear. According to the correspondence principle and Ehrenfest's theorem, as a system becomes larger or more massive the classical dynamics tends to emerge, with some exceptions, such as superfluidity. This is why we can usually ignore quantum mechanics when dealing with everyday objects and the classical description will suffice. However, one of the most vigorous ongoing fields of research in physics is classical-quantum correspondence. This field of research is concerned with the discovery of how the laws of quantum physics give rise to classical physics found at the limit of the large scales of the classical level.
Computer modeling and manual calculation, modern and classic comparison
Today, a computer performs millions of arithmetic operations in seconds to solve a classical differential equation, while Newton (one of the fathers of the differential calculus) would take hours to solve the same equation by manual calculation, even if he were the discoverer of that particular equation.
Computer modeling is essential for quantum and relativistic physics. Classical physics is considered the limit of quantum mechanics for a large number of particles. On the other hand, classical mechanics is derived from relativistic mechanics. For example, in many formulations from special relativity, a correction factor (v/c)2 appears, where v is the velocity of the object and c is the speed of light. For velocities much smaller than that of light, one can neglect the terms containing (v/c)2 and higher powers. These formulas then reduce to the standard definitions of Newtonian kinetic energy and momentum. This is as it should be, for special relativity must agree with Newtonian mechanics at low velocities. Computer modeling has to be as realistic as possible. Classical physics would introduce an error, as in the superfluidity case. In order to produce reliable models of the world, one cannot use classical physics alone. It is true that quantum theories consume time and computer resources, and the equations of classical physics could be resorted to in order to provide a quick solution, but such a solution would lack reliability.
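To make the low-velocity limit concrete, the following standard expansion (added here for illustration) shows how the relativistic energy reduces to the rest energy plus the Newtonian kinetic energy once the (v/c)2 corrections are dropped.

```latex
% Low-velocity expansion of the relativistic energy, with
% \gamma = (1 - v^2/c^2)^{-1/2}.
\[
  E = \gamma m c^2
    = m c^2\left(1 + \frac{1}{2}\frac{v^2}{c^2} + \frac{3}{8}\frac{v^4}{c^4} + \cdots\right)
    = m c^2 + \frac{1}{2} m v^2 + \frac{3}{8}\frac{m v^4}{c^2} + \cdots
\]
% Dropping the (v/c)^2 corrections leaves the rest energy m c^2 plus the
% Newtonian kinetic energy \tfrac{1}{2} m v^2; the relativistic momentum
% p = \gamma m v likewise reduces to p = m v.
```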
Computer modeling would use only an energy criterion to determine which theory to use, relativity or quantum theory, when attempting to describe the behavior of an object. A physicist, by contrast, would often use a classical model to provide a first approximation before more exacting models are applied and those calculations proceed.
In a computer model, there is no need to use the speed of the object if classical physics is excluded. Low-energy objects would be handled by quantum theory and high-energy objects by relativity theory.
| Physical sciences | Physics basics: General | Physics |
151183 | https://en.wikipedia.org/wiki/Turpentine | Turpentine | Turpentine (which is also called spirit of turpentine, oil of turpentine, terebenthine, terebenthene, terebinthine and, colloquially, turps) is a fluid obtained by the distillation of resin harvested from living trees, mainly pines. Principally used as a specialized solvent, it is also a source of material for organic syntheses.
Turpentine is composed of terpenes, primarily the monoterpenes alpha- and beta-pinene, with lesser amounts of carene, camphene, limonene, and terpinolene.
Substitutes include white spirit or other petroleum distillates – although the constituent chemicals are very different.
Etymology
The word turpentine derives (via French and Latin) from the Greek word τερεβινθίνη terebinthine, in turn the feminine form (to conform to the feminine gender of the Greek word, which means 'resin') of an adjective (τερεβίνθινος) derived from the Greek noun (τερέβινθος) for the terebinth tree.
Although the word originally referred to the resinous exudate of terebinth trees (e.g. Chios turpentine, Cyprus turpentine, and Persian turpentine), it now refers to that of coniferous trees, namely crude turpentine (e.g. Venice turpentine is the oleoresin of larch), or the volatile oil part thereof, namely oil (spirit) of turpentine; the latter usage is much more common today.
Source trees
Important pines for turpentine production include: maritime pine (Pinus pinaster), Aleppo pine (Pinus halepensis), Masson's pine (Pinus massoniana), Sumatran pine (Pinus merkusii), longleaf pine (Pinus palustris), loblolly pine (Pinus taeda), slash pine (Pinus elliottii), and ponderosa pine (Pinus ponderosa).
Converting crude turpentine to oil of turpentine
Crude turpentine collected from the trees may be evaporated by steam distillation in a copper still. Molten rosin remains in the still bottoms after turpentine has been distilled out. Such turpentine is called gum turpentine. The term gum turpentine may also refer to crude turpentine, which may cause some confusion.
Turpentine may alternatively be extracted from destructive distillation of pine wood, such as shredded pine stumps, roots, and slash, using the light end of the heavy naphtha fraction (boiling between ) from a crude oil refinery. Such turpentine is called wood turpentine. Multi-stage counter-current extraction is commonly used so fresh naphtha first contacts wood leached in previous stages and naphtha laden with turpentine from previous stages contacts fresh wood before vacuum distillation to recover naphtha from the turpentine. Leached wood is steamed for additional naphtha recovery prior to burning for energy recovery.
Sulfate turpentine
When producing chemical wood pulp from pines or other coniferous trees, sulfate turpentine may be condensed from the gas generated in Kraft process pulp digesters. The average yield of crude sulfate turpentine is 5–10 kg/t pulp. Unless burned at the mill for energy production, sulfate turpentine may require additional treatment measures to remove traces of sulfur compounds.
Industrial and other end uses
Solvent
As a solvent, turpentine is used for thinning oil-based paints, for producing varnishes, and as a raw material for the chemical industry. Its use as a solvent in industrialized nations has largely been replaced by the much cheaper turpentine substitutes obtained from petroleum such as white spirit. A solution of turpentine and beeswax or carnauba wax has long been used as a furniture wax.
Lighting
Spirit of turpentine, called camphine, was burned in lamps with glass chimneys in the 1830s through the 1860s. Turpentine blended with grain alcohol was known as burning fluid. Both were used as domestic lamp fuels, gradually replacing whale oil, until kerosene, gas lighting and electric lights began to predominate.
Source of organic compounds
Turpentine is also used as a source of raw materials in the synthesis of fragrant chemical compounds. Commercially used camphor, linalool, alpha-terpineol, and geraniol are all usually produced from alpha-pinene and beta-pinene, which are two of the chief chemical components of turpentine. These pinenes are separated and purified by distillation. The mixture of diterpenes and triterpenes that is left as residue after turpentine distillation is sold as rosin.
Niche uses
Turpentine is also added to many cleaning and sanitary products due to its antiseptic properties and its "clean scent".
In early 19th-century America, spirits of turpentine (camphine) was burned in lamps as a cheap alternative to whale oil. It produced a bright light but had a strong odour. Camphine and burning fluid (a mix of alcohol and turpentine) served as the dominant lamp fuels replacing whale oil until the advent of kerosene, electric lights and gas lighting.
Honda motorcycles, first manufactured in 1946, ran on a blend of gasoline and turpentine, due to the scarcity of gasoline in Japan following World War II. The French Emeraude rocket uses a similar fuel mixture. Turpentine has also been researched as a potential biofuel for mixing into gasoline.
In his book If Only They Could Talk, veterinarian and author James Herriot describes the use of the reaction of turpentine with resublimed iodine to "drive the iodine into the tissue", or perhaps just impress the watching customer with a spectacular treatment (a dense cloud of purple smoke).
Safety and health considerations
Turpentine is highly flammable, so much so that it has been considered as an automotive fuel.
Turpentine was added extensively into gin during the Gin Craze.
Turpentine's vapour can irritate the skin and eyes, damage the lungs and respiratory system, as well as the central nervous system when inhaled, and cause damage to the renal system when ingested, among other things. Ingestion can cause burning sensations, abdominal pain, nausea, vomiting, confusion, convulsions, diarrhea, tachycardia, unconsciousness, respiratory failure, and chemical pneumonia.
The Occupational Safety and Health Administration (OSHA) has set the legal limit (permissible exposure limit) for turpentine exposure in the workplace as 100 ppm (560 mg/m3) over an 8-hour workday. The same threshold was adopted by the National Institute for Occupational Safety and Health (NIOSH) as the recommended exposure limit (REL). At levels of 800 ppm (4480 mg/m3), turpentine is immediately dangerous to life and health.
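As a quick plausibility check of those figures (an illustration added here, assuming the vapour is dominated by alpha-pinene with a molar mass of about 136 g/mol), the standard ppm-to-mg/m3 conversion at 25 °C and 1 atm reproduces the quoted limits.

```python
# Rough check (not from the article) of the exposure-limit conversion, assuming
# turpentine vapour behaves like its main component alpha-pinene (C10H16).
MOLAR_VOLUME_L = 24.45        # litres per mole of an ideal gas at 25 degC, 1 atm
MW_ALPHA_PINENE = 136.24      # g/mol, representative molar mass for turpentine

def ppm_to_mg_per_m3(ppm: float, molar_mass: float) -> float:
    """Convert a gas-phase concentration from ppm (v/v) to mg/m^3."""
    return ppm * molar_mass / MOLAR_VOLUME_L

print(round(ppm_to_mg_per_m3(100, MW_ALPHA_PINENE)))   # ~557, close to the quoted 560 mg/m3
print(round(ppm_to_mg_per_m3(800, MW_ALPHA_PINENE)))   # ~4458, close to the quoted 4480 mg/m3
```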
Folk medicine
Turpentine and petroleum distillates, such as coal oil and kerosene, were used in folk medicine for abrasions and wounds, as a treatment for lice, and, when mixed with animal fat, as a chest rub or inhaler for nasal and throat ailments. Vicks chest rubs still contain turpentine in their formulations, although not as an active ingredient.
Turpentine, now understood to be dangerous for consumption, was a common medicine among seamen during the Age of Discovery. It was one of several products carried aboard Ferdinand Magellan's fleet during the first circumnavigation of the globe. Taken internally it was used as a treatment for intestinal parasites. This is dangerous, due to the chemical's toxicity.
Turpentine enemas, a very harsh purgative, had formerly been used for stubborn constipation or impaction. They were also given punitively to political dissenters in post-independence Argentina.
| Physical sciences | Terpenes and terpenoids | Chemistry |
151487 | https://en.wikipedia.org/wiki/Ulexite | Ulexite | Ulexite, sometimes called TV rock or TV stone due to its unusual optical properties, is a hydrous borate hydroxide of sodium and calcium with the chemical formula NaCaB5O6(OH)6·5H2O. The mineral occurs as silky white rounded crystalline masses or in parallel fibers. Ulexite was named for the German chemist Georg Ludwig Ulex (1811–1883), who first discovered it.
The natural fibers of ulexite act as optical fibers, transmitting light along their long axes by internal reflection. When a piece of ulexite is cut with flat polished faces perpendicular to the orientation of the fibers, a good-quality specimen will display an image of whatever surface is adjacent to its other side. The fiber-optic effect is the result of the polarization of light into slow and fast rays within each fiber, the internal reflection of the slow ray and the refraction of the fast ray into the slow ray of an adjacent fiber. An interesting consequence is the generation of three cones, two of which are polarized, when a laser beam obliquely illuminates the fibers. These cones can be seen when viewing a light source through the mineral.
Ulexite is found in evaporite deposits and the precipitated ulexite commonly forms a "cotton ball" tuft of acicular crystals. Ulexite is frequently found associated with colemanite, borax, meyerhofferite, hydroboracite, probertite, glauberite, trona, mirabilite, calcite, gypsum and halite. It is found principally in California and Nevada, US; Tarapacá Region in Chile, and Kazakhstan. Ulexite is also found in a vein-like bedding habit composed of closely packed fibrous crystals.
History
Ulexite has been recognized as a valid mineral since 1840, after Georg Ludwig Ulex, for whom the mineral was named, provided the first chemical analysis of the mineral. In a footnote on p. 51, the editor claimed that Ulex's mineral was actually the same mineral that the American chemist Augustus Allen Hayes had found in Chile in 1844.
In 1857, Henry How, a professor at King's College in Windsor, Nova Scotia, discovered borate minerals in the gypsum deposits of the Lower Carboniferous evaporite deposits in the Atlantic Provinces of Canada, where he noted the presence of a fibrous borate that he termed natro-boro-calcite, which was actually ulexite (Papezik and Fong, 1975).
Murdoch examined the crystallography of ulexite in 1940. The crystallography was reworked in 1959 by Clark and Christ and their study also provided the first powder x-ray diffraction analysis of ulexite. In 1963 ulexite's remarkable fiber optics qualities were explained by Weichel-Moore and Potter. Their study highlighted the existence in nature of mineral structures exhibiting technologically required characteristics. Lastly, Clark and Appleman described the structure of ulexite correctly in 1964.
Chemistry
Ulexite is a borate mineral because its formula (NaCaB5O6(OH)6·5H2O) contains boron and oxygen. The isolated borate polyanion [B5O6(OH)6]3− has five boron atoms, therefore placing ulexite in the pentaborate group.
Ulexite is a structurally complex mineral, with a basic structure containing chains of sodium, water and hydroxide octahedra. The chains are linked together by calcium, water, hydroxide and oxygen polyhedra and massive boron units. The boron units have a formula of [B5O6(OH)6]3– and a charge of −3. They are composed of three borate tetrahedra and two borate triangular groups.
Ulexite decomposes/dissolves in hot water.
Morphology
Ulexite commonly forms small, rounded masses resembling cotton balls. Distinct crystals are rare but, when present, are fibrous and elongated, oriented either parallel or radial to one another. Crystals may also be acicular, resembling needles (Anthony et al., 2005). The point group of ulexite is 1, which means that the crystals show very little symmetry as there are no rotational axes or mirror planes. Ulexite is greatly elongated along [001]. The most common twinning plane is (010). Ulexite collected from the Flat Bay gypsum quarry in Newfoundland exhibits acicular "cotton balls" of crystals with a nearly square cross-section formed by the equal development of two pinacoids. The crystals are about 1–3 μm thick and 50–80 μm long, arranged in loosely packed, randomly oriented overlapping bundles (Papezik and Fong, 1975). In general, the crystals have six to eight faces with three to six terminal faces (Murdoch, 1940).
Optical properties
In 1956, John Marmon observed that fibrous aggregates of ulexite project an image of an object on the opposite surface of the mineral. This optical property is common for synthetic fibers, but not in minerals, giving ulexite the nickname "TV rock". According to Baur et al. (1957), this optical property is due to the reflections along twinned fibers, the most prominent twinning plane being on (010). The light is internally reflected over and over within each of the fibers that are surrounded by a medium of a lower refractive index (Garlick, 1991). This optical effect is also the result of the large spaces formed by the sodium octahedral chains in the mineral structure. Synthetic fibers used for fiber optics transmit images along a bundle of threadlike crystals the same way naturally occurring ulexite reproduces images due to the existence of different indices of refraction between fibers. Additionally, if the object is colored, all of the colors are reproduced by ulexite. Parallel surfaces of ulexite cut perpendicular to the fibers produce the best image, as distortion in the size of the projected image will occur if the surface is not parallel to the mineral. Curiously, in situ samples of ulexite are capable of producing a decent, rough image. Satin spar gypsum also exhibits this optical effect; however, the fibers are too coarse to transmit a decent image. The sharpness of the projected image depends on the thickness of the fibers, with finer fibers transmitting a sharper image.
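For a sense of the geometry involved, total internal reflection inside a fibre requires the angle of incidence at the fibre wall to exceed the critical angle arcsin(n_surround/n_fibre). The sketch below uses assumed, illustrative refractive indices, since the article does not quote specific values for ulexite.

```python
# Illustrative sketch (indices assumed, not from the article): critical angle
# for total internal reflection between a fibre of index n_fibre and a
# surrounding medium of lower index n_surround.
import math

def critical_angle_deg(n_fibre: float, n_surround: float) -> float:
    if n_surround >= n_fibre:
        raise ValueError("Total internal reflection needs n_surround < n_fibre")
    return math.degrees(math.asin(n_surround / n_fibre))

# Hypothetical indices: a "slow-ray" fibre index of ~1.52 against a neighbouring
# "fast-ray" index of ~1.49 gives a critical angle close to grazing incidence,
# so light travelling nearly along the fibre axis is trapped and guided.
print(round(critical_angle_deg(1.52, 1.49), 1))   # ~78.6 degrees
```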
Ulexite also displays concentric circles of light if held up to a bright light source, a strange optical property first observed by G. Donald Garlick (1991). This effect can also be produced by shining a laser pointer at a slightly oblique angle through a piece of ulexite. This optical behavior is a consequence of the different refractive indices of ulexite in different directions of polarization. Microscopic analysis of ulexite also yields cones of light that clearly emerge from each grain that is thicker than 0.1 mm under the Bertrand lens.
Ulexite is colorless and nonpleochroic in thin sections with low relief. Being triclinic, ulexite is optically biaxial. Interference figures yield addition on the concave side of the isogyres, causing ulexite to be biaxial positive. Ulexite has a high 2V that ranges between 73° – 78° and a maximum birefringence of up to 0.0300 (Anthony et al., 2005). According to Weichel-Moore and Potter (1963), the orientation of the fibers around the c-axis is completely random based on the variations in extinctions viewed under cross polarization. Ulexite displays polysynthetic twinning parallel to the elongation, along {010} and {100} (Murdoch, 1940). In thin sections cut parallel to the fibers, ulexite grains display both length-fast and length-slow orientations in equal quantities because the intermediate axis (y) of the indicatrix is roughly parallel to the elongation of the fibers along the crystallographic c-axis (Weichel-Moore and Potter, 1963).
Structure
Ulexite crystals contain three structural groups, isolated pentaborate polyanions, calcium coordinated polyhedra, and sodium coordinated octahedra that are joined together and cross-linked by hydrogen bonding. The Ca-coordination polyhedra share edges to form chains which are separate from the Na-coordination octahedral chains. There are 16 distinct hydrogen bonds that have an average distance of 2.84 Å. Boron is coordinated to four oxygens in a tetrahedra arrangement and also to three oxygens in a triangular arrangement with average distances of 1.48 and 1.37 Å, respectively. Each Ca2+ cation is surrounded by a polyhedron of eight oxygen atoms. The average distance between calcium and oxygen is 2.48 Å. Each Na+ is coordinated by an octahedron of two hydroxyl oxygens and four water molecules, with an average distance of 2.42 Å (Clark and Appleman 1964). The octahedral and polyhedral chains parallel to c, the elongate direction, cause the fibrous habit of ulexite and the fiber optical properties.
Significance
Boron is a trace element within the lithosphere that has an average concentration of 10 ppm, although large areas of the world are boron deficient. Boron is never found in the elemental state in nature, however boron naturally occurs in over 150 minerals. The three most important minerals from a worldwide commercial standpoint based on abundance are tincal (also known as borax), ulexite, and colemanite (Ekmekyaper et al., 2008). High concentrations of economically significant boron minerals generally occur in arid areas that have a history of volcanism. Ulexite is mined predominantly from the Borax mine in Boron, California.
The boron concentration of ulexite is commercially significant because boron compounds are used in producing materials for many branches of industry. Boron is primarily used in the manufacturing of fiberglass along with heat-resistant borosilicate glasses such as traditional PYREX, car headlights, and laboratory glassware. Borosilicate glass is desirable because adding B2O3 lowers the expansion coefficient, therefore increasing the thermal shock resistance of the glass. Boron and its compounds are also common ingredients in soaps, detergents, and bleaches, which contributes to the softening of hard water by attracting calcium ions. Boron usage in alloy and metal production has been increasing because of its excellent metal oxide solubilizing ability. Boron compounds are used as a reinforcing agent in order to harden metals for use in military tanks and armor. Boron is used extensively for fire retardant materials. Boron is an essential element for plant growth and is frequently used as a fertilizer, however in large concentrations boron can be toxic, and therefore boron is a common ingredient in herbicides and insecticides. Boron is also found in chemicals used to treat wood and as protective coatings and pottery glazes. Additionally when ulexite is dissolved in a solution of carbonate, calcium carbonate forms as a by-product. This by-product is used in large amounts by the pulp and paper industry as a paper filler and as a coating for paper that allows for improved printability (Demirkiran and Kunkul, 2011).
Recently, as more attention is being given to obtaining new sources of energy, the use of hydrogen as a fuel for cars has come to the forefront. The compound sodium borohydride (NaBH4) is currently being considered as an excellent hydrogen storage medium for future use in cars due to its high theoretical hydrogen yield by weight. Piskin (2009) showed that the boron contained in ulexite can serve as the boron source, or starting material, in the synthesis of sodium borohydride (NaBH4).
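For context on that "high theoretical hydrogen yield by weight", the following back-of-the-envelope sketch assumes the usual hydrolysis reaction NaBH4 + 2 H2O → NaBO2 + 4 H2, which the article does not spell out.

```python
# Back-of-the-envelope sketch (assumed reaction, not from the article):
# hydrolysis NaBH4 + 2 H2O -> NaBO2 + 4 H2 gives sodium borohydride its high
# theoretical hydrogen yield by weight.
M_NABH4 = 37.83   # g/mol
M_H2O   = 18.02   # g/mol
M_H2    = 2.016   # g/mol

hydrogen_out = 4 * M_H2
reactant_mass = M_NABH4 + 2 * M_H2O
yield_wt_pct = 100 * hydrogen_out / reactant_mass
print(f"{yield_wt_pct:.1f} wt% H2")   # ~10.9 wt%, counting the water consumed
```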
Related minerals
Borate minerals are rare because their main component, boron, makes up less than 10 ppm (10 mg/kg) of Earth's crust. Because boron is a trace element, the majority of borate minerals occur only in one specific geologic environment: geologically active intermontane basins. Borates are formed when boron bearing solutions, caused from the leaching of pyroclastic rocks, flow into isolated basins where evaporation then takes place. Over time, borates deposit and form into stratified layers. Ulexite occurs in salt playas and dry saline lakes in association with large-scale gypsum deposits and Na-Ca borates. There are no known polymorphs of ulexite nor does ulexite form a solid solution series with any other minerals.
According to Stamatakis et al. (2009) Na, Ca, and Na-Ca borates are found in relation to ulexite. These minerals are:
Borax Na2B4O7·10H2O
Colemanite Ca2B6O11·5H2O
Howlite Ca2B5SiO9[OH]5
Kernite Na2[B4O6(OH)2·3H2O]
Meyerhofferite Ca2B6O6(OH)10·2H2O
Probertite NaCaB5O9·5H2O
More common minerals that are not borates, but also form in evaporite deposits are:
Calcite CaCO3
Gypsum CaSO4·2H2O
Halite NaCl
| Physical sciences | Minerals | Earth science |
31329141 | https://en.wikipedia.org/wiki/Fungi%20imperfecti | Fungi imperfecti | The fungi imperfecti or imperfect fungi are fungi which do not fit into the commonly established taxonomic classifications of fungi that are based on biological species concepts or morphological characteristics of sexual structures because their sexual form of reproduction has never been observed. They are known as imperfect fungi because only their asexual and vegetative phases are known. They have asexual form of reproduction, meaning that these fungi produce their spores asexually, in the process called sporogenesis.
There are about 25,000 species that have been classified in the deuteromycota and many are basidiomycota or ascomycota anamorphs. Fungi producing the antibiotic penicillin and those that cause athlete's foot and yeast infections are imperfect fungi. In addition, there are a number of edible imperfect fungi, including the ones that provide the distinctive characteristics of Roquefort and Camembert cheese.
Other, more informal names besides Deuteromycota ("Deuteromycetes") and fungi imperfecti are anamorphic fungi, or mitosporic fungi, but these are terms without taxonomic rank. Examples are Alternaria, Colletotrichum, Trichoderma etc.
Problems in taxonomic classification
Although Fungi imperfecti/Deuteromycota is no longer formally accepted as a taxon, many of the fungi it included have yet to find a place in modern fungal classification. This is because most fungi are classified based on characteristics of the fruiting bodies and spores produced during sexual reproduction, and members of the Deuteromycota have been observed to reproduce only asexually or produce no spores.
Mycologists formerly used a unique dual system of nomenclature in classifying fungi, which was permitted by Article 59 of the International Code of Botanical Nomenclature (the rules governing the naming of plants and fungi). However, the system of dual nomenclature for fungi was abolished in the 2011 update of the Code.
Under the former system, a name for an asexually reproducing fungus was considered a form taxon. For example, the ubiquitous and industrially important mold, Aspergillus niger, has no known sexual cycle. Thus Aspergillus niger was considered a form taxon. In contrast, isolates of its close relative, Aspergillus nidulans, revealed it to be the anamorphic stage of a teleomorph (the ascocarp or fruiting body of the sexual reproductive stage of a fungus), which was already named Emericella nidulans. When such a teleomorphic stage became known, that name would take priority over the name of an anamorph (which lacks a sexual reproductive stage). Hence the species formerly classified as Aspergillus nidulans would properly be called Emericella nidulans.
Phylogeny and taxonomy
Phylogenetic classification of asexually reproducing fungi now commonly uses molecular systematics. Phylogenetic trees constructed from comparative analyses of DNA sequences, such as rRNA, or multigene phylogenies may be used to infer relationships between asexually reproducing fungi and their sexually reproducing counterparts. With these methods, many asexually reproducing fungi have now been placed in the tree of life. However, because phylogenetic methods require sufficient quantities of biological materials (spores or fresh specimens) that are from pure (i.e., uncontaminated) fungal cultures, for many asexual species their exact relationship with other fungal species has yet to be determined. Under the current system of fungal nomenclature, teleomorph names cannot be applied to fungi that lack sexual structures. Classifying and naming asexually reproducing fungi is the subject of ongoing debate in the mycological community.
Historical classification of the imperfect fungi
These groups are no longer formally accepted because they do not adhere to the principle of monophyly. The taxon names are sometimes used informally. In particular, the term 'hyphomycetes' is often used to refer to molds, and the term 'coelomycetes' is used to refer to many asexually reproducing plant pathogens that form discrete fruiting bodies.
The following is a classification of the Fungi imperfecti according to Saccardo et al. (1882-1972):
Class Hyphomycetes lacking fruiting bodies
Order Moniliales (producing spores on simple conidiophores)
Order Stilbellales (producing spores on synnemata)
Order Tuberculariales (producing spores in sporodochia)
Class Coelomycetes spores produced in fruiting bodies
Order Melanconiales (producing spores in acervuli)
Order Sphaeropsidales (producing spores in pycnidia)
Class Agonomycetes lacking spores
Other, according to Dörfelt (1989):
Form-Klasse: Hyphomycetes
Form-Ordnung: Agonomycetales
Form-Familie: Agonomycetaceae
Form-Ordnung: Moniliales
Form-Familie: Moniliaceae
Form-Familie: Dematiaceae
Form-Familie: Stilbellaceae
Form-Familie: Tuberculariaceae
Form-Klasse: Coelomycetes
Form-Ordnung: Melanconiales
Form-Familie: Melanconiaceae
Form-Ordnung: Sphaeropsidales
Form-Familie: Sphaeropsidaceae
Other systems of classification are reviewed by .
Common species
Industrially relevant fungi
Tolypocladium inflatum → from which the immunosuppressant ciclosporin is obtained;
Penicillium griseofulvum
Penicillium roqueforti
Penicillium camemberti
Other species of Penicillium are used to improve both the taste and the texture of cheeses
Aspergillus oryzae
Aspergillus sojae
Aspergillus niger
Amorphotheca resinae
Lecanicillium sp. → these produce conidia which may control certain species of insect pests
Other entomopathogenic fungi, including Metarhizium and Beauveria spp.
Pochonia spp. are under development for control of Nematode pests.
| Biology and health sciences | Basics | Plants |
1991528 | https://en.wikipedia.org/wiki/Vlasov%20equation | Vlasov equation | In plasma physics, the Vlasov equation is a differential equation describing the time evolution of the distribution function of collisionless plasma consisting of charged particles with long-range interaction, such as the Coulomb interaction. The equation was first suggested for the description of plasma by Anatoly Vlasov in 1938 and later discussed by him in detail in a monograph. The Vlasov equation, combined with the Landau kinetic equation, describes collisional plasma.
Difficulties of the standard kinetic approach
First, Vlasov argues that the standard kinetic approach based on the Boltzmann equation has difficulties when applied to a description of the plasma with long-range Coulomb interaction. He mentions the following problems arising when applying the kinetic theory based on pair collisions to plasma dynamics:
Theory of pair collisions disagrees with the discovery by Rayleigh, Irving Langmuir and Lewi Tonks of natural vibrations in electron plasma.
Theory of pair collisions is formally not applicable to Coulomb interaction due to the divergence of the kinetic terms.
Theory of pair collisions cannot explain experiments by Harrison Merrill and Harold Webb on anomalous electron scattering in gaseous plasma.
Vlasov suggests that these difficulties originate from the long-range character of Coulomb interaction. He starts with the collisionless Boltzmann equation (sometimes called the Vlasov equation, anachronistically in this context), in generalized coordinates:
$$ \frac{df}{dt} = \frac{\partial f}{\partial t} + \frac{d\mathbf{r}}{dt}\cdot\frac{\partial f}{\partial \mathbf{r}} + \frac{d\mathbf{p}}{dt}\cdot\frac{\partial f}{\partial \mathbf{p}} = 0, $$
explicitly a PDE:
$$ \frac{\partial f}{\partial t} + \mathbf{v}\cdot\frac{\partial f}{\partial \mathbf{r}} + \mathbf{F}\cdot\frac{\partial f}{\partial \mathbf{p}} = 0, $$
and adapted it to the case of a plasma, leading to the systems of equations shown below. Here $f(\mathbf{r}, \mathbf{p}, t)$ is a general distribution function of particles with momentum $\mathbf{p}$ at coordinates $\mathbf{r}$ and given time $t$. Note that the term $\mathbf{F} = d\mathbf{p}/dt$ is the force acting on the particle.
The Vlasov–Maxwell system of equations (Gaussian units)
Instead of a collision-based kinetic description for the interaction of charged particles in plasma, Vlasov utilizes a self-consistent collective field created by the charged plasma particles. Such a description uses distribution functions $f_e(\mathbf{r}, \mathbf{p}, t)$ and $f_i(\mathbf{r}, \mathbf{p}, t)$ for electrons and (positive) plasma ions. The distribution function $f_\alpha$ for species $\alpha$ describes the number of particles of the species having approximately the momentum $\mathbf{p}$ near the position $\mathbf{r}$ at time $t$. Instead of the Boltzmann equation, the following system of equations was proposed for description of charged components of plasma (electrons and positive ions):
$$ \frac{\partial f_e}{\partial t} + \mathbf{v}\cdot\frac{\partial f_e}{\partial \mathbf{r}} - e\left(\mathbf{E} + \frac{1}{c}\,\mathbf{v}\times\mathbf{B}\right)\cdot\frac{\partial f_e}{\partial \mathbf{p}} = 0, $$
$$ \frac{\partial f_i}{\partial t} + \mathbf{v}\cdot\frac{\partial f_i}{\partial \mathbf{r}} + e\left(\mathbf{E} + \frac{1}{c}\,\mathbf{v}\times\mathbf{B}\right)\cdot\frac{\partial f_i}{\partial \mathbf{p}} = 0, $$
together with Maxwell's equations whose sources are the plasma itself:
$$ \nabla\times\mathbf{B} = \frac{4\pi}{c}\,\mathbf{j} + \frac{1}{c}\frac{\partial \mathbf{E}}{\partial t}, \qquad \nabla\times\mathbf{E} = -\frac{1}{c}\frac{\partial \mathbf{B}}{\partial t}, \qquad \nabla\cdot\mathbf{E} = 4\pi\rho, \qquad \nabla\cdot\mathbf{B} = 0, $$
$$ \rho = e\int\left(f_i - f_e\right)d^3p, \qquad \mathbf{j} = e\int \mathbf{v}\left(f_i - f_e\right)d^3p. $$
Here $e$ is the elementary charge, $c$ is the speed of light, the velocity is $\mathbf{v} = \mathbf{p}/m$ (in the non-relativistic case) with $m = m_e$ for electrons and $m = m_i$ the mass of the ion, and $\mathbf{E}(\mathbf{r}, t)$, $\mathbf{B}(\mathbf{r}, t)$ represent the collective self-consistent electromagnetic field created at the point $\mathbf{r}$ at time $t$ by all plasma particles. The essential difference of this system of equations from equations for particles in an external electromagnetic field is that the self-consistent electromagnetic field depends in a complex way on the distribution functions of electrons and ions $f_e$ and $f_i$.
The Vlasov–Poisson equation
The Vlasov–Poisson equations are an approximation of the Vlasov–Maxwell equations in the non-relativistic zero-magnetic field limit:
$$ \frac{\partial f_\alpha}{\partial t} + \mathbf{v}\cdot\nabla_{\mathbf{x}} f_\alpha + \frac{q_\alpha \mathbf{E}}{m_\alpha}\cdot\nabla_{\mathbf{v}} f_\alpha = 0, $$
and Poisson's equation for the self-consistent electric field:
$$ \nabla^2\phi = -\frac{\rho}{\varepsilon_0}, \qquad \mathbf{E} = -\nabla\phi, \qquad \rho = \sum_\alpha q_\alpha \int f_\alpha \, d^3v . $$
Here $q_\alpha$ is the particle's electric charge, $m_\alpha$ is the particle's mass, $\mathbf{E}(\mathbf{x}, t)$ is the self-consistent electric field, $\phi(\mathbf{x}, t)$ the self-consistent electric potential, $\rho$ is the electric charge density, and $\varepsilon_0$ is the electric permittivity.
Vlasov–Poisson equations are used to describe various phenomena in plasma, in particular Landau damping and the distributions in a double layer plasma, where they are necessarily strongly non-Maxwellian, and therefore inaccessible to fluid models.
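The following is a minimal numerical sketch of the Vlasov–Poisson system in one spatial and one velocity dimension (normalized units, periodic box, mobile electrons against a fixed neutralizing ion background). It is an illustrative operator-splitting toy added here, not a method taken from the article, but it is the kind of setup used to observe Landau damping of a small density perturbation.

```python
# Minimal 1D-1V electrostatic Vlasov-Poisson sketch (illustrative, normalized
# units): electrons evolve against a fixed neutralizing ion background in a
# periodic box. Strang splitting: advect in x, solve Poisson for E, advect in v.
import numpy as np

nx, nv = 64, 128
L, vmax = 4.0 * np.pi, 6.0
x = np.linspace(0.0, L, nx, endpoint=False)
v = np.linspace(-vmax, vmax, nv)
dx, dv = x[1] - x[0], v[1] - v[0]
dt, steps = 0.05, 200
k = 2.0 * np.pi * np.fft.fftfreq(nx, d=dx)

# Maxwellian in v with a small density perturbation in x (Landau-damping setup).
f = (1.0 + 0.01 * np.cos(0.5 * x))[:, None] * np.exp(-0.5 * v**2)[None, :] / np.sqrt(2.0 * np.pi)

def efield(f):
    """Self-consistent E from Gauss's law dE/dx = rho, rho = n_ion - n_electron."""
    rho = 1.0 - f.sum(axis=1) * dv
    rho_hat = np.fft.fft(rho)
    E_hat = np.zeros_like(rho_hat)
    E_hat[1:] = rho_hat[1:] / (1j * k[1:])      # zero-mean field
    return np.real(np.fft.ifft(E_hat))

def advect_x(f, dt):
    """f(x, v) <- f(x - v*dt, v): free streaming, periodic in x."""
    out = np.empty_like(f)
    for j in range(nv):
        out[:, j] = np.interp(x - v[j] * dt, x, f[:, j], period=L)
    return out

def advect_v(f, E, dt):
    """f(x, v) <- f(x, v - a*dt) with electron acceleration a = -E."""
    out = np.empty_like(f)
    for i in range(nx):
        out[i, :] = np.interp(v + E[i] * dt, v, f[i, :], left=0.0, right=0.0)
    return out

for _ in range(steps):
    f = advect_x(f, 0.5 * dt)
    f = advect_v(f, efield(f), dt)
    f = advect_x(f, 0.5 * dt)

# Particle number is conserved by the splitting up to interpolation error.
print("mean electron density:", f.sum() * dv * dx / L)
```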
Moment equations
In fluid descriptions of plasmas (see plasma modeling and magnetohydrodynamics (MHD)) one does not consider the velocity distribution. This is achieved by replacing $f(\mathbf{r}, \mathbf{v}, t)$ with plasma moments such as the number density $n$, flow velocity $\mathbf{u}$ and pressure $p$. They are named plasma moments because the $k$-th moment of $f$ can be found by integrating $\mathbf{v}^k f$ over velocity. These variables are only functions of position and time, which means that some information is lost. In multifluid theory, the different particle species are treated as different fluids with different pressures, densities and flow velocities. The equations governing the plasma moments are called the moment or fluid equations.
Below the two most used moment equations are presented (in SI units). Deriving the moment equations from the Vlasov equation requires no assumptions about the distribution function.
Continuity equation
The continuity equation describes how the density changes with time. It can be found by integration of the Vlasov equation over the entire velocity space.
After some calculations, one ends up with
$$ \frac{\partial n}{\partial t} + \nabla\cdot(n\mathbf{u}) = 0 . $$
The number density $n$, and the momentum density $n\mathbf{u}$, are zeroth and first order moments:
$$ n = \int f \, d^3v, \qquad n\mathbf{u} = \int \mathbf{v}\, f \, d^3v . $$
Momentum equation
The rate of change of momentum of a particle is given by the Lorentz equation:
$$ \frac{d\mathbf{p}}{dt} = q\left(\mathbf{E} + \mathbf{v}\times\mathbf{B}\right) . $$
By using this equation and the Vlasov equation, the momentum equation for each fluid becomes
$$ mn\,\frac{D\mathbf{u}}{Dt} = -\nabla\cdot\mathsf{P} + qn\left(\mathbf{E} + \mathbf{u}\times\mathbf{B}\right), $$
where $\mathsf{P}$ is the pressure tensor. The material derivative is
$$ \frac{D}{Dt} = \frac{\partial}{\partial t} + \mathbf{u}\cdot\nabla . $$
The pressure tensor is defined as the particle mass times the covariance matrix of the velocity:
$$ \mathsf{P} = m\int \left(\mathbf{v}-\mathbf{u}\right)\left(\mathbf{v}-\mathbf{u}\right) f \, d^3v . $$
The frozen-in approximation
As for ideal MHD, the plasma can be considered as tied to the magnetic field lines when certain conditions are fulfilled. One often says that the magnetic field lines are frozen into the plasma. The frozen-in conditions can be derived from Vlasov equation.
We introduce the scales , , and for time, distance and speed respectively. They represent magnitudes of the different parameters which give large changes in . By large we mean that
We then write
Vlasov equation can now be written
So far no approximations have been done. To be able to proceed we set , where is the gyro frequency and is the gyroradius. By dividing by , we get
If and , the two first terms will be much less than since and due to the definitions of , , and above. Since the last term is of the order of , we can neglect the two first terms and write
This equation can be decomposed into a field aligned and a perpendicular part:
The next step is to write , where
It will soon be clear why this is done. With this substitution, we get
If the parallel electric field is small,
This equation means that the distribution is gyrotropic. The mean velocity of a gyrotropic distribution is zero. Hence, is identical with the mean velocity, , and we have
To summarize, the gyro period and the gyro radius must be much smaller than the typical times and lengths which give large changes in the distribution function. The gyro radius is often estimated by replacing with the thermal velocity or the Alfvén velocity. In the latter case is often called the inertial length. The frozen-in conditions must be evaluated for each particle species separately. Because electrons have much smaller gyro period and gyro radius than ions, the frozen-in conditions will more often be satisfied.
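To get a feel for these conditions, the following illustrative calculation (with assumed solar-wind-like field and temperature, not figures from the article) compares the gyro period 2π m/(qB) and the thermal gyroradius m v_th/(qB) for electrons and protons, showing why the frozen-in conditions are satisfied more easily for electrons.

```python
# Illustrative numbers (assumed field and temperature, not from the article):
# gyrofrequency omega_c = qB/m and thermal gyroradius r_g = m*v_th/(q*B) for
# electrons and protons.
import math

Q = 1.602e-19                    # elementary charge, C
ME, MP = 9.109e-31, 1.673e-27    # electron and proton masses, kg
KB = 1.381e-23                   # Boltzmann constant, J/K

B = 1e-8                         # assumed magnetic field, tesla (solar-wind order)
T = 1e5                          # assumed temperature, kelvin

for name, m in (("electron", ME), ("proton", MP)):
    v_th = math.sqrt(KB * T / m)       # thermal speed
    omega_c = Q * B / m                # angular gyrofrequency, rad/s
    r_g = m * v_th / (Q * B)           # gyroradius, m
    print(f"{name}: gyro period ~ {2*math.pi/omega_c:.3g} s, gyroradius ~ {r_g:.3g} m")
```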
| Physical sciences | States of matter | Physics |